I frequently run into performance issues when using XSLT to transform large amounts of data into HTML. This data is usually just a couple of very large tables of roughly this form:
<table>
  <record>
    <group>1</group>
    <data>abc</data>
  </record>
  <record>
    <group>1</group>
    <data>def</data>
  </record>
  <record>
    <group>2</group>
    <data>ghi</data>
  </record>
</table>
During transformation, I want to visually group the records like this:
+--------------+
| Group 1 |
+--------------+
| abc |
| def |
+--------------+
| Group 2 |
+--------------+
| ghi |
+--------------+
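For reference, a rough sketch of the HTML I'd like to end up with (the exact markup is illustrative, not a requirement):

```xml
<table>
  <tr><th>Group 1</th></tr>
  <tr><td>abc</td></tr>
  <tr><td>def</td></tr>
  <tr><th>Group 2</th></tr>
  <tr><td>ghi</td></tr>
</table>
```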
A silly implementation looks like this (set:distinct is from EXSLT, http://exslt.org; the actual implementation is a bit different, this is just an example):
<xsl:for-each select="set:distinct(/table/record/group)">
  <xsl:variable name="group" select="."/>
  <!-- This access needs to be made faster: -->
  <xsl:for-each select="/table/record[group = $group]">
    <!-- Do the table stuff -->
  </xsl:for-each>
</xsl:for-each>
It's easy to see that this tends to have O(n^2) complexity. Even worse, every record has lots of fields: the data operated on can reach several dozen MB, and the number of records can go up to 5000. In the worst case, every record has its own group and 50 fields. And to make things even worse, there is yet another possible level of grouping, making this O(n^3).
Now there would be quite a few options, such as pre-sorting /table/record by group. I've already found that the <xsl:apply-templates/> approach is decidedly faster in this use case than the <xsl:for-each/> approach. How do you think this O(n^2) complexity can be reduced?
You can just use the well-known Muenchian grouping method in XSLT 1.0 -- no need to work with sorted data or implement more complicated and slower algorithms:
<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output omit-xml-declaration="yes" indent="yes"/>
  <xsl:strip-space elements="*"/>

  <xsl:key name="kGroupByVal" match="group" use="."/>

  <!-- identity transform -->
  <xsl:template match="node()|@*">
    <xsl:copy>
      <xsl:apply-templates select="node()|@*"/>
    </xsl:copy>
  </xsl:template>

  <!-- fires only for the first group element with a given value -->
  <xsl:template match=
    "group[generate-id() = generate-id(key('kGroupByVal', .)[1])]">
    <group gid="{.}">
      <xsl:apply-templates select="key('kGroupByVal', .)/node()"/>
    </group>
  </xsl:template>

  <xsl:template match="group/text()"/>
</xsl:stylesheet>
When this transformation is applied to your sample document, it takes 80 ms for 3 record elements. With a similar document containing 1000 record elements it finishes in 136 ms; with 10000 record elements it takes 284 ms; with 100000 record elements, 1667 ms.
The observed complexity is clearly sublinear.
It would be very difficult (if possible at all) to find a more efficient solution than Muenchian grouping in XSLT 1.0.
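For your HTML-table use case specifically, the same technique can key the record elements by their group value: the outer loop then visits each group exactly once, and the key() lookup fetches that group's records without rescanning the whole table, which is what eliminates the O(n^2) behavior. A sketch (element names follow your sample; the HTML markup is just illustrative):

```xml
<xsl:key name="kRecByGroup" match="record" use="group"/>

<xsl:template match="/table">
  <table>
    <!-- visit only the first record of each distinct group -->
    <xsl:for-each select=
      "record[generate-id() = generate-id(key('kRecByGroup', group)[1])]">
      <tr><th><xsl:value-of select="concat('Group ', group)"/></th></tr>
      <!-- all records sharing this group value -->
      <xsl:for-each select="key('kRecByGroup', group)">
        <tr><td><xsl:value-of select="data"/></td></tr>
      </xsl:for-each>
    </xsl:for-each>
  </table>
</xsl:template>
```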
If the data is presorted by group (as in your example), you can loop over the record set and check whether the current record's group differs from that of the preceding record. Whenever the group changes, emit a group header. This performs in O(n) time.
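A sketch of that idea in XSLT 1.0, assuming the record elements are siblings and already sorted by group as in the sample (the HTML markup is illustrative):

```xml
<xsl:template match="/table">
  <table>
    <xsl:for-each select="record">
      <!-- the first record, and every record whose group differs
           from its predecessor's, starts a new group header -->
      <xsl:if test="not(group = preceding-sibling::record[1]/group)">
        <tr><th><xsl:value-of select="concat('Group ', group)"/></th></tr>
      </xsl:if>
      <tr><td><xsl:value-of select="data"/></td></tr>
    </xsl:for-each>
  </table>
</xsl:template>
```

Note that for the first record the preceding-sibling node-set is empty, so the comparison is false and the header is emitted, as desired.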