The current setup uses XmlDocument and XPathDocument (depending on when it was written and by whom). The data is looked up when first requested and cached in an internal data structure (rather than as XML, which would take up more memory in most scenarios). In the past, this was a nice model as it had fast access times and a low (or at least satisfactory) memory footprint. Now, however, there is a feature that queries a large proportion of the information in one go, rather than the nicely spread-out requests we previously had. This makes the XML loading, validation, and parsing a visible performance bottleneck.
Note that the data itself can be in memory, just not in its more bloated XML form if we can help it. In the worst case, we could accept a single file being loaded into memory to be parsed and then unloaded again to free resources, but I'd like to avoid that if at all possible.
Considering that we're already caching data where we can, this question could also be read as "which is faster and uses less memory: XmlDocument, XPathDocument, parsing based on XmlReader, or XDocument/LINQ-to-XML?"
Edit: Even simpler, can we randomly access the XML on disk without reading in the entire file at once?
<MyXml>
  <Record id='1'/>
  <Record id='2'/>
  <Record id='3'/>
</MyXml>
Our user interface wants to know whether a record exists with an id of 3. We want to find out without having to parse and load every record in the file, if we can. So if the record is in our cache, there's no XML interaction; if it isn't, we can load just that record into the cache and respond to the request.
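To illustrate, here's a minimal sketch of that kind of lookup using XmlReader, assuming the <MyXml>/<Record id='...'/> layout above (the RecordExists name and the file path are my own placeholders). It never builds a DOM and stops reading as soon as a match is found, though in the worst case (no match) it still scans the file sequentially from the start, so it isn't true random access:

using System.Xml;

class RecordLookup
{
    // Streams the file with XmlReader and stops at the first match, so the
    // document is never fully materialized in memory. Worst case (no match)
    // it still reads the whole file once.
    public static bool RecordExists(string path, string id)
    {
        var settings = new XmlReaderSettings { IgnoreWhitespace = true };
        using (XmlReader reader = XmlReader.Create(path, settings))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element &&
                    reader.Name == "Record" &&
                    reader.GetAttribute("id") == id)
                {
                    return true;
                }
            }
        }
        return false;
    }
}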
I realize that there may well be a blog or MSDN article on this somewhere and I will be continuing to Google after I've posted this question, but if anyone has some data that might help, or some examples of when one approach is better or faster than another, that would be great.
Update
The XMLTeam published a blog today that gives great advice on when to use the various XML APIs in .NET. It looks like something based on XmlReader and IEnumerable would be my best option for the scenario I gave here.
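For the record, here's a rough sketch of what that XmlReader + IEnumerable combination might look like (RecordIds and the file name are placeholder names of mine): a lazy iterator that yields record ids one at a time, so a caller can use LINQ over the stream and parsing stops as soon as the answer is known.

using System.Collections.Generic;
using System.Linq;
using System.Xml;

static class XmlStreaming
{
    // Lazily yields each Record id as the reader encounters it; nothing is
    // parsed until the caller starts enumerating, and enumeration can stop early.
    public static IEnumerable<string> RecordIds(string path)
    {
        using (XmlReader reader = XmlReader.Create(path))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "Record")
                    yield return reader.GetAttribute("id");
            }
        }
    }
}

// Usage: stops reading the file as soon as id 3 is seen.
// bool exists = XmlStreaming.RecordIds("records.xml").Any(id => id == "3");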
With XML I only know of two ways:
XmlReader, to stream the large XML data in, or the XML DOM object model, which reads the entire XML into memory at once.
If the XML is big (we have XML files in the 80 MB range and up), reading the whole document into memory is a performance hit. There is no real way to "merge" the two ways of dealing with XML documents. Sorry.
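For contrast with the streaming sketch above, the DOM route looks roughly like this (again just a sketch; the path and method name are placeholders). Note where the full-file cost is paid:

using System.Xml;

class DomLookup
{
    // The DOM approach: the entire document is parsed into memory before
    // the XPath query runs. Convenient, but a performance hit at the
    // 80 MB+ file sizes mentioned above.
    public static bool RecordExists(string path, string id)
    {
        var doc = new XmlDocument();
        doc.Load(path); // whole file is read and materialized here
        return doc.SelectSingleNode("/MyXml/Record[@id='" + id + "']") != null;
    }
}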