Nice, I wasn't aware of that. But it doesn't change the argument much: the XML document still only contains text data, and it's the schema validation phase that's responsible for converting the data into the correct types. Validating an XML document is an optional step, and I'm not aware of many XML-configured tools that perform full schema validation.
To add, the XML Schema specification for the decimal data type [0] explicitly says: "Precision is not reflected in this value space; the number 2.0 is not distinct from the number 2.00." So a decimal-typed element in an XML document would have exactly the same problem as the YAML example in TFA; the only difference is that with XML, the tool's authors would have to actively shoot themselves in the foot by annotating that element as a decimal rather than as text.
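As a minimal sketch of that value-space behavior: Python's decimal module happens to mirror xs:decimal here, in that 2.0 and 2.00 compare equal as values even though their lexical forms differ (the choice of Python is mine, not anything the spec prescribes):

```python
from decimal import Decimal

# Like xs:decimal's value space, Decimal treats 2.0 and 2.00 as the
# same number; unlike xs:decimal, it preserves the trailing zero of
# the lexical form for display purposes.
a = Decimal("2.0")
b = Decimal("2.00")

print(a == b)          # True: indistinguishable as values
print(str(a), str(b))  # 2.0 2.00: distinguishable only lexically
```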
The problem isn't that XML's decimal type doesn't distinguish between 1.2 and 1.20 - it's that versions aren't decimals in the first place.
The fact that versions often contain numbers separated by dots, that versions often have only two components, or that minor versions rarely exceed 9 for a particular product are merely coincidences.
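To illustrate, a hypothetical Python sketch (the parser name and representation are my own) contrasting a version string treated as dot-separated integer components with the same string treated as a decimal number:

```python
# A version is a sequence of integer components, not a decimal number.
def parse_version(s: str) -> tuple[int, ...]:
    return tuple(int(part) for part in s.split("."))

# As version components, "1.20" and "1.2" name different releases...
print(parse_version("1.20"))  # (1, 20)
print(parse_version("1.2"))   # (1, 2)

# ...but read as decimal numbers, they are the same value.
print(float("1.20") == float("1.2"))  # True
```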
My full sentence is more like saying Java would be untyped if it were possible to run Java source files from the AST while skipping the type-checking step, which seems pretty much a truism to me.
[0] https://www.w3.org/TR/xmlschema11-2/#decimal