
Problems with HUGE XML files

I have 16 large XML files. When I say large, I am talking gigabytes: one of these files is over 8 GB, and several of them are over 1 GB. They are given to me by an external provider.

I am trying to import the XML into a database so that I can shred it into tables. Currently, I stream 10,000 records at a time out of the file into memory and insert the blob. I use SSIS with a script task to do this. This is actually VERY fast for all files, except the 8 GB file.

I cannot load the entire file into an XML document; I can't stress this enough. That was iteration 1, and the files are so huge that the system just locks up trying to deal with them, the 8 GB one in particular.

I ran my current "file splitter" and it spent 7 hours importing the XML data without finishing: it had imported 363 blocks of 10,000 records out of the 8 GB file and was still not done.

FYI, here is how I am currently streaming my files into memory (10,000 records at a time). I found the code at http://blogs.msdn.com/b/xmlteam/archive/2007/03/24/streaming-with-linq-to-xml-part-2.aspx

private static IEnumerable<XElement> SimpleStreamAxis(string fileName, string matchName)
{
    using (FileStream stream = File.OpenRead(fileName))
    using (XmlReader reader = XmlReader.Create(stream, new XmlReaderSettings() { ProhibitDtd = false }))
    {
        reader.MoveToContent();
        while (reader.Read())
        {
            switch (reader.NodeType)
            {
                case XmlNodeType.Element:
                    // Only materialize the elements we are looking for, one at a time
                    if (reader.Name == matchName)
                    {
                        XElement el = XElement.ReadFrom(reader) as XElement;
                        if (el != null)
                            yield return el;
                    }
                    break;
            }
        }
    }
}
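For reference, the iterator is consumed lazily; here is a minimal usage sketch (the file path and the "id" attribute are just placeholders of mine, not from my actual feed):

// Enumerate matching elements one at a time; nothing beyond the current element is buffered.
foreach (XElement product in SimpleStreamAxis(@"D:\feeds\catalog.xml", "product"))
{
    Console.WriteLine((string)product.Attribute("id"));
}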

So, it works fine on all the files except the 8 GB one, where, as it has to stream further and further into the file, it takes longer and longer.

What I would like to do is split the file into smaller chunks, but the splitter needs to be fast. Then the streamer and the rest of the process can run more quickly. What is the best way to go about splitting the files? Ideally I'd split it myself in code in SSIS.
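For illustration only (this is not code I have running), a single-pass splitter along these lines is the kind of thing I have in mind: stream the source with XmlReader and copy a fixed number of "product" elements into each output file with XmlWriter. The "catalog" root name, chunk size, and output naming are assumptions on my part.

// Hypothetical sketch: split <catalog><product>...</product>...</catalog>
// into files of `chunkSize` products each, in one forward-only pass.
// Requires System.Xml.
static void SplitXml(string sourceFile, string outputPrefix, int chunkSize)
{
    int fileIndex = 0, count = 0;
    XmlWriter writer = null;

    using (XmlReader reader = XmlReader.Create(sourceFile))
    {
        reader.MoveToContent();
        while (reader.Read())
        {
            if (reader.NodeType == XmlNodeType.Element && reader.Name == "product")
            {
                if (writer == null)
                {
                    writer = XmlWriter.Create(string.Format("{0}_{1}.xml", outputPrefix, fileIndex++));
                    writer.WriteStartElement("catalog");   // assumed root element name
                }

                // Copy the whole <product> subtree to the current output file
                writer.WriteNode(reader.ReadSubtree(), false);

                if (++count == chunkSize)
                {
                    writer.WriteEndElement();
                    writer.Close();
                    writer = null;
                    count = 0;
                }
            }
        }
    }

    // Flush the final partial chunk, if any
    if (writer != null)
    {
        writer.WriteEndElement();
        writer.Close();
    }
}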

EDIT:

Here's the code that actually pages out my data using the streaming methodology.

connection = (SqlConnection)cm.AcquireConnection(null);

int maximumCount = Convert.ToInt32(Dts.Variables["MaximumProductsPerFile"].Value);
int minMBSize = Convert.ToInt32(Dts.Variables["MinimumMBSize"].Value);
int maxMBSize = Convert.ToInt32(Dts.Variables["MaximumMBSize"].Value);

string fileName = Dts.Variables["XmlFileName"].Value.ToString();

FileInfo info = new FileInfo(fileName);

long fileMBSize = info.Length / 1048576; //1024 * 1024 bytes in a MB

if (minMBSize <= fileMBSize && maxMBSize >= fileMBSize)
{
    int pageSize = 10000;     //do 10,000 products at one time

    if (maximumCount != 0)
        pageSize = maximumCount;

    var page = (from p in SimpleStreamAxis(fileName, "product") select p).Take(pageSize);
    int current = 0;

    while (page.Count() > 0)
    {
        XElement xml = new XElement("catalog",
            from p in page
            select p);

        SubmitXml(connection, fileName, xml.ToString());

        //if the maximum count is set, only load the maximum (in one page)
        if (maximumCount != 0)
            break;

        current++;
        page = (from p in SimpleStreamAxis(fileName, "product") select p).Skip(current * pageSize).Take(pageSize);
    }
}
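SubmitXml isn't shown above; for context, it just inserts the blob into a staging table. A minimal sketch of what such a method might look like is below, but the table and column names are hypothetical, not my actual schema:

// Hypothetical sketch only; assumes a staging table dbo.XmlStaging(SourceFile NVARCHAR(260), XmlBlob XML).
private static void SubmitXml(SqlConnection connection, string fileName, string xml)
{
    using (SqlCommand cmd = new SqlCommand(
        "INSERT INTO dbo.XmlStaging (SourceFile, XmlBlob) VALUES (@file, @xml)", connection))
    {
        cmd.Parameters.AddWithValue("@file", fileName);
        cmd.Parameters.Add("@xml", SqlDbType.Xml).Value = xml;
        cmd.ExecuteNonQuery();
    }
}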
asked Aug 06 '10 by Josh


1 Answer

It looks like you are re-reading the XML file over and over again at each step: each time you use the from p in SimpleStreamAxis bit, you re-open the file and scan into it from the beginning. Also, by calling Count() you are walking the full page each time.

Try something like this:

var full = (from p in SimpleStreamAxis(fileName, "product") select p);
int current = 0;

while (full.Any())
{
    var page = full.Take(pageSize);

    XElement xml = new XElement("catalog",
    from p in page
    select p);

    SubmitXml(connection, fileName, xml.ToString());

    //if the maximum count is set, only load the maximum (in one page)
    if (maximumCount != 0)
        break;

    current++;
    full = full.Skip(pageSize);
}

Note this is untested, but you should hopefully get the idea. You need to avoid enumerating through the file more than once; operations like Count() and Take/Skip are going to take a long time on an 8 GB XML file.
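To make the cost concrete (this example is mine, not from the code above): because SimpleStreamAxis is an iterator method, every separate enumeration re-opens the file and scans it from the start.

var products = SimpleStreamAxis(fileName, "product");

int total = products.Count();                             // pass 1: reads the whole 8 GB file
var page1 = products.Take(10000).ToList();                // pass 2: re-opens the file, stops after 10,000 matches
var page37 = products.Skip(360000).Take(10000).ToList();  // pass 3: re-reads 360,000 matches just to discard them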

Update: I think the above will still iterate through the file more times than we want; you need something a bit more predictable, like this:

var full = (from p in SimpleStreamAxis(fileName, "product") select p);

XElement xml = new XElement("catalog");
int pageIndex = 0;

foreach (var element in full)
{
    xml.Add(element);

    pageIndex++;
    if (pageIndex == pageSize)
    {
        SubmitXml(connection, fileName, xml.ToString());
        xml = new XElement("catalog");
        pageIndex = 0;

        //if the maximum count is set, only load the maximum (in one page)
        if (maximumCount != 0)
            break;
    }
}

// Submit the remainder
if (xml.Elements().Any())
{
    SubmitXml(connection, fileName, xml.ToString());
}
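If you like, the same single-pass idea can be factored into a reusable batching helper; this generalization is mine, not part of the code above:

static class EnumerableExtensions
{
    // Group any sequence into lists of `size` items, enumerating the source exactly once.
    public static IEnumerable<List<T>> Batch<T>(this IEnumerable<T> source, int size)
    {
        var batch = new List<T>(size);
        foreach (var item in source)
        {
            batch.Add(item);
            if (batch.Count == size)
            {
                yield return batch;
                batch = new List<T>(size);
            }
        }
        if (batch.Count > 0)
            yield return batch;
    }
}

// Usage: one pass over the 8 GB file, one submit per 10,000 products.
// foreach (var batch in SimpleStreamAxis(fileName, "product").Batch(pageSize))
//     SubmitXml(connection, fileName, new XElement("catalog", batch).ToString());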
answered Oct 24 '22 by Simon Steele