 

XmlDocument.Load fails, LoadXml works:

In answering this question, I came across a situation that I don't understand. The OP was trying to load XML from the following location: http://www.google.com/ig/api?weather=12414&hl=it

The obvious solution is:

string m_strFilePath = "http://www.google.com/ig/api?weather=12414&hl=it";
XmlDocument myXmlDocument = new XmlDocument();
myXmlDocument.Load(m_strFilePath); //Load NOT LoadXml

However this fails with

XmlException : Invalid character in the given encoding. Line 1, position 499.

It seems to be choking on the à of Umidità.

OTOH, the following works fine:

var m_strFilePath = "http://www.google.com/ig/api?weather=12414&hl=it";
string xmlStr;
using(var wc = new WebClient())
{
    xmlStr = wc.DownloadString(m_strFilePath);
}
var xmlDoc = new XmlDocument();
xmlDoc.LoadXml(xmlStr);

I'm baffled by this. Can anyone explain why the former fails, but the latter works fine?

Notably, the xml declaration of the document omits an encoding.

asked Sep 21 '11 by spender

People also ask

What does LoadXml do?

LoadXml loads XML contained in a string, whereas Load reads XML from a file path, URL, stream, or reader. They are fundamentally different ways of loading XML, depending on where the XML is actually stored.
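A minimal sketch contrasting the two calls (the local file name here is a made-up example):

using System.Xml;

class LoadVsLoadXml
{
    static void Main()
    {
        // Load: the string argument is treated as a location (file path or URL)
        // and the XML is read from there.
        var fromLocation = new XmlDocument();
        fromLocation.Load("books.xml"); // hypothetical local file

        // LoadXml: the string argument is the XML itself.
        var fromString = new XmlDocument();
        fromString.LoadXml("<root><child>value</child></root>");
    }
}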

Is XmlDocument disposable?

XmlDocument can't be disposed because it does not implement IDisposable.
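That also means there is nothing to wrap in a using block; only the readers and streams around it are disposable. A small sketch of the usual pattern:

using System.Xml;

class NoDisposeNeeded
{
    static XmlDocument LoadFromFile(string path)
    {
        var doc = new XmlDocument();
        // XmlReader is IDisposable, so it gets the using block;
        // the XmlDocument itself is just an ordinary managed object.
        using (var reader = XmlReader.Create(path))
        {
            doc.Load(reader);
        }
        return doc;
    }
}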

What is XmlDocument?

An XML document is a basic unit of XML information composed of elements and other markup in an orderly package. An XML document can contain a wide variety of data: for example, a database of numbers, numbers representing a molecular structure, or a mathematical equation.


1 Answer

The WebClient uses the encoding information in the headers of the HTTP response to determine the correct encoding (in this case ISO-8859-1, a single-byte encoding that extends ASCII, i.e. 8 bits per character).
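You can see that header for yourself; a rough sketch (the Google weather URL from the question has long since been retired, so treat it as a placeholder):

using System;
using System.Net;

class ShowCharset
{
    static void Main()
    {
        using (var wc = new WebClient())
        {
            // DownloadString decodes the body using the charset parameter
            // of the Content-Type response header when one is present.
            string body = wc.DownloadString("http://www.google.com/ig/api?weather=12414&hl=it");
            Console.WriteLine(wc.ResponseHeaders[HttpResponseHeader.ContentType]);
            // e.g. "text/xml; charset=ISO-8859-1"
        }
    }
}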

It looks like XmlDocument.Load doesn't use this information, and since the encoding is also missing from the XML declaration, it has to guess at an encoding and gets it wrong. Some digging around leads me to believe that it chooses UTF-8.
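(As an aside, this suggests one workaround that isn't part of the original answer: download the raw bytes and tell the reader explicitly which encoding to use, roughly as below, assuming the server really does serve ISO-8859-1.)

using System.IO;
using System.Net;
using System.Text;
using System.Xml;

class LoadWithExplicitEncoding
{
    static XmlDocument LoadLatin1(string url)
    {
        using (var wc = new WebClient())
        {
            byte[] raw = wc.DownloadData(url);
            // Wrap the bytes in a StreamReader with the known encoding
            // instead of letting XmlDocument.Load guess.
            using (var reader = new StreamReader(new MemoryStream(raw), Encoding.GetEncoding("ISO-8859-1")))
            {
                var doc = new XmlDocument();
                doc.Load(reader);
                return doc;
            }
        }
    }
}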

If we want to get really technical, the character it throws up on is "à", which is 0xE0 in the ISO-8859-1 encoding, but that byte isn't valid on its own in UTF-8. Specifically, its binary representation is:

11100000

If you dig around in the UTF-8 Wikipedia article, you can see that this indicates a code point (i.e. character) encoded as a total of 3 bytes in the following format:

Byte 1      Byte 2      Byte 3
----------- ----------- -----------
1110xxxx    10xxxxxx    10xxxxxx

But if we look back at the document, the next two characters are ": ", which are 0x3A and 0x20 in ISO-8859-1. This means what we actually end up with is:

Byte 1      Byte 2      Byte 3
----------- ----------- -----------
11100000    00111010    00100000

Neither the 2nd nor the 3rd byte of the sequence has 10 as its two most significant bits (which would mark it as a continuation byte), so this sequence makes no sense in UTF-8.
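You can reproduce the mismatch directly by decoding those three bytes both ways; a small sketch (not from the original answer):

using System;
using System.Text;

class DecodeDemo
{
    static void Main()
    {
        // The bytes for "à: " as served in ISO-8859-1.
        byte[] bytes = { 0xE0, 0x3A, 0x20 };

        // ISO-8859-1 maps each byte straight to a character, so this prints "à: ".
        Console.WriteLine(Encoding.GetEncoding("ISO-8859-1").GetString(bytes));

        // UTF-8 sees 0xE0 as the start of a 3-byte sequence, but 0x3A and 0x20
        // are not continuation bytes, so the default decoder substitutes U+FFFD: "�: ".
        Console.WriteLine(Encoding.UTF8.GetString(bytes));

        // A strict decoder throws instead, which is roughly what the XML parser does.
        var strict = Encoding.GetEncoding("utf-8", EncoderFallback.ExceptionFallback, DecoderFallback.ExceptionFallback);
        try { strict.GetString(bytes); }
        catch (DecoderFallbackException e) { Console.WriteLine("Invalid UTF-8: " + e.Message); }
    }
}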

answered Nov 02 '22 by Justin