
How to avoid O(n^2) complexity when grouping records in XSLT?

I'm frequently running into performance issues when I XSL transform large amounts of data into HTML. This data is usually just a couple of very large tables of roughly this form:

<table>
  <record>
    <group>1</group>
    <data>abc</data>
  </record>
  <record>
    <group>1</group>
    <data>def</data>
  </record>
  <record>
    <group>2</group>
    <data>ghi</data>
  </record>
</table>

During transformation, I want to visually group the records like this:

+--------------+
| Group 1      |
+--------------+
|   abc        |
|   def        |
+--------------+
| Group 2      |
+--------------+
|   ghi        |
+--------------+

A silly implementation is this one (set:distinct is from http://exslt.org; the actual implementation is a bit different, this is just an example):

<!-- assumes xmlns:set="http://exslt.org/sets" is declared on the stylesheet element -->
<xsl:for-each select="set:distinct(/table/record/group)">
  <xsl:variable name="group" select="."/>

  <!-- This access needs to be made faster : -->
  <xsl:for-each select="/table/record[group = $group]">
    <!-- Do the table stuff -->
  </xsl:for-each>
</xsl:for-each>

It's easy to see that this tends to have O(n^2) complexity. Even worse, every record contains lots of fields. The data operated on can reach several dozen MB, and the number of records can go up to 5000. In the worst case, every record has its own group and 50 fields. And to make things even worse, there is yet another possible level of grouping, making this O(n^3).

Now there are quite a few options:

  1. I could find a Java solution to this involving maps and nested data structures. But I want to improve my XSLT skills, so that's really the last option.
  2. I may be overlooking a nice feature in Xerces/Xalan/EXSLT that handles grouping much better.
  3. I could maybe build an index of some sort for /table/record/group (a sketch of this follows below).
  4. You can prove to me that the <xsl:apply-templates/> approach is decidedly faster in this use case than the <xsl:for-each/> approach.

How do you think this O(n^2) complexity can be reduced?
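
For example, option 3 might look like the sketch below: keep the structure above, but replace the predicate scan with a keyed lookup. The key name recordsByGroup is just a placeholder, and the snippet assumes xmlns:set="http://exslt.org/sets" is declared on the stylesheet (xsl:key has to sit at the top level, not inside a template):

<!-- Top-level index: all record elements, keyed by their group value -->
<xsl:key name="recordsByGroup" match="/table/record" use="group"/>

<xsl:for-each select="set:distinct(/table/record/group)">
  <xsl:variable name="group" select="."/>

  <!-- key() fetches this group's records via the index instead of scanning /table/record -->
  <xsl:for-each select="key('recordsByGroup', $group)">
    <!-- Do the table stuff -->
  </xsl:for-each>
</xsl:for-each>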

asked Nov 10 '11 by Lukas Eder

2 Answers

You can just use the well-known Muenchian grouping method in XSLT 1.0 -- there is no need to rely on sorted data or to implement more complicated and slower algorithms:

<xsl:stylesheet version="1.0"
 xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 <xsl:output omit-xml-declaration="yes" indent="yes"/>
 <xsl:strip-space elements="*"/>

 <!-- Index all group elements by their string value -->
 <xsl:key name="kGroupByVal" match="group" use="."/>

 <!-- Identity rule: copy everything not handled by a more specific template -->
 <xsl:template match="node()|@*">
     <xsl:copy>
       <xsl:apply-templates select="node()|@*"/>
     </xsl:copy>
 </xsl:template>

 <!-- Fires only for the first group element with a given value -->
 <xsl:template match=
  "group
      [generate-id()
      =
       generate-id(key('kGroupByVal', .)[1])
      ]">
  <group gid="{.}">
   <xsl:apply-templates select="key('kGroupByVal', .)/node()"/>
  </group>
 </xsl:template>

 <!-- Suppress the text content of the group elements themselves -->
 <xsl:template match="group/text()"/>
</xsl:stylesheet>

When this transformation is applied to your sample document (which, as originally posted, was not well-formed XML and had to be corrected first), it takes 80ms for the 3 record elements.

With similar text having 1000 record elements the transformation finishes in 136ms.

With 10000 record elements the time taken is 284ms.

With 100000 record elements the time taken is 1667ms.

The observed complexity is clearly sublinear.

It would be very difficult (if possible at all) to find a more efficient solution than Muenchian grouping in XSLT 1.0.
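
If the desired output is the HTML layout sketched in the question, the same keyed approach can group the record elements directly. The following is only a rough sketch, not part of the measured stylesheet above; the key name kRecByGroup and the bare table/tr/th/td markup are illustrative choices:

<xsl:stylesheet version="1.0"
 xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 <xsl:output method="html" indent="yes"/>

 <!-- Index record elements by their group value -->
 <xsl:key name="kRecByGroup" match="record" use="group"/>

 <xsl:template match="/table">
  <table>
   <!-- Visit only the first record of each distinct group -->
   <xsl:for-each select="record[generate-id()
                         = generate-id(key('kRecByGroup', group)[1])]">
    <tr><th>Group <xsl:value-of select="group"/></th></tr>
    <!-- One keyed lookup returns all records of this group -->
    <xsl:for-each select="key('kRecByGroup', group)">
     <tr><td><xsl:value-of select="data"/></td></tr>
    </xsl:for-each>
   </xsl:for-each>
  </table>
 </xsl:template>
</xsl:stylesheet>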

answered by Dimitre Novatchev

If the data is presorted by group (as in your example), you can iterate over the records and check whether each record's group differs from the preceding record's group. Whenever the group changes, you emit a group header. This runs in O(n) time.
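
A minimal sketch of that sibling-comparison approach (the HTML markup is again just illustrative, and whether the preceding-sibling::record[1] step is truly constant-time depends on the XSLT processor):

<xsl:template match="/table">
 <table>
  <xsl:for-each select="record">
   <!-- Emit a group header whenever the group differs from the previous record's group -->
   <xsl:if test="not(group = preceding-sibling::record[1]/group)">
    <tr><th>Group <xsl:value-of select="group"/></th></tr>
   </xsl:if>
   <tr><td><xsl:value-of select="data"/></td></tr>
  </xsl:for-each>
 </table>
</xsl:template>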

answered by Ivan Dugic