 

Python sax to lxml for 80+GB XML

Tags: python, lxml, sax

How would you read an XML file using sax and convert it to a lxml etree.iterparse element?

To give an overview of the problem: I have built an XML ingestion tool using lxml for an XML feed that ranges from 25 to 500 MB and needs to be ingested on a bi-daily basis, but it also needs to perform a one-time ingestion of a file that is 60-100 GB.

I chose lxml based on specifications stating that no single node would exceed 4-8 GB, which I thought would allow each node to be read into memory and cleared when finished.

An overview of the code is below:

elements = etree.iterparse(
    self._source, events=('end',)
)
for event, element in elements:
    finished = True
    if element.tag == 'Artist-Types':
        self.artist_types(element)

def artist_types(self, element):
    """
    Imports artist types

    :param element: etree.Element to import artist types from
    :returns boolean:
    """
    self._log.info("Importing Artist types")
    count = 0
    for child in element:
        failed = False
        fields = self._getElementFields(child, (
            ('id', 'Id'),
            ('type_code', 'Type-Code'),
            ('created_date', 'Created-Date')
        ))
        if self._type is IMPORT_INC and has_artist_type(fields['id']):
            if update_artist_type(fields['id'], fields['type_code']):
                count = count + 1
            else:
                failed = True
        else:
            if create_artist_type(fields['type_code'],
                fields['created_date'], fields['id']):
                count = count + 1
            else:
                failed = True
        if failed:
            self._log.error("Failed to import artist type %s %s" %
                (fields['id'], fields['type_code'])
            )
    self._log.info("Imported %d Artist Types Records" % count)
    self._artist_type_count = count
    self._cleanup(element)
    del element
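
For reference, _getElementFields, has_artist_type, update_artist_type and create_artist_type are helpers on my importer and are not shown here. A minimal, hypothetical sketch of what a _getElementFields-style helper might look like (names and behavior are illustrative only, not my actual implementation):

def _getElementFields(self, element, mapping):
    """Map (field_name, child_tag) pairs to a dict of text values."""
    fields = {}
    for field_name, tag_name in mapping:
        child = element.find(tag_name)
        fields[field_name] = child.text if child is not None else None
    return fields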

Let me know if I can add any type of clarification.

asked Mar 21 '12 by Nick



1 Answer

iterparse is an iterative parser. It will emit Element objects and events and incrementally build the entire Element tree as it parses, so eventually it will have the whole tree in memory.
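
To make that concrete, here is a tiny, self-contained sketch (illustrative tag names, small in-memory document) showing that without any cleanup every parsed element stays attached to the tree:

import io
from lxml import etree

xml = io.BytesIO(b"<root><a/><b/><c/></root>")
context = etree.iterparse(xml, events=('end',))
for _, elem in context:
    pass  # no cleanup: parsed elements remain in the tree

# iterparse exposes the root element once parsing has finished;
# all three children are still attached.
print(len(context.root))  # 3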

However, it is easy to keep memory usage bounded: delete elements you no longer need as you parse them.

The typical "giant xml" workload is a single root element with a large number of child elements which represent records. I assume this is the kind of XML structure you are working with?

Usually it is enough to use clear() to empty out the element you are processing. Your memory usage will grow a little, but not by much. If you have a really huge file, however, even the empty Element objects will consume too much memory, and in that case you must also delete previously-seen Element objects. Note that you cannot safely delete the current element. The lxml.etree.iterparse documentation describes this technique.

In this case, you will process a record every time a </record> is found, then you will delete all previous record elements.
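
Here is a minimal, self-contained sketch of that idiom (the <records>/<record> tag names are illustrative, matching the structure assumed above):

import io
from lxml import etree

xml = io.BytesIO(
    b"<records>"
    b"<record><id>1</id></record>"
    b"<record><id>2</id></record>"
    b"<record><id>3</id></record>"
    b"</records>"
)

for event, elem in etree.iterparse(xml, events=('end',), tag='record'):
    print(elem.findtext('id'))     # process the record here
    elem.clear()                   # drop the record's own children and text
    while elem.getprevious() is not None:
        del elem.getparent()[0]    # drop already-processed sibling records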

Below is an example using an infinitely-long XML document. It will print the process's memory usage as it parses. Note that the memory usage is stable and does not continue growing.

from lxml import etree
import resource

class InfiniteXML(object):
    """File-like object that produces an endless stream of <record> elements."""

    def __init__(self):
        self._root = True

    def read(self, size=None):
        if self._root:
            self._root = False
            return b"<?xml version='1.0' encoding='US-ASCII'?><records>\n"
        else:
            return b'<record>\n\t<ancestor attribute="value">text value</ancestor>\n</record>\n'

def parse(fp):
    context = etree.iterparse(fp, events=('end',))
    for action, elem in context:
        if elem.tag == 'record':
            # processing goes here
            pass

        # memory usage
        print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)

        # cleanup
        # first, empty children from the current element
        # (not strictly necessary if you are also deleting siblings,
        # but it lets you free memory earlier)
        elem.clear()
        # second, delete previous siblings (records)
        while elem.getprevious() is not None:
            del elem.getparent()[0]
        # make sure you have no references to Element objects outside the loop

parse(InfiniteXML())
answered Oct 10 '22 by Francis Avila