How do I use the storage partitioned join feature in Spark 3.3.0? I've tried it out, but my query plan still shows the expensive ColumnarToRow and Exchange steps. My setup is as follows:

- Both tables are partitioned by hours(ts), bucket(20, id)
- The join condition is a.id = b.id AND a.ts = b.ts; I also tried joining on a.id = b.id alone
- Iceberg runtime: org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:0.14.1
- Spark config: spark.sql.sources.v2.bucketing.enabled=true
I read through all the docs I could find on the storage partitioned join feature.
I'm wondering if there are other things I need to configure, if there needs to be something implemented in Iceberg still, or if I've set up something wrong. I'm super excited about this feature. It could really speed up some of our large joins.
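For reference, the setup described in the question amounts to launching Spark roughly like this. The package coordinate and the bucketing flag are taken verbatim from the question; the catalog name, catalog type, and warehouse path are illustrative assumptions, not part of the original setup:

```shell
# Spark 3.3 with the Iceberg runtime and the SPJ flag from the question.
# Catalog name ("my_catalog"), type, and warehouse path are assumptions.
spark-shell \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:0.14.1 \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.my_catalog.type=hadoop \
  --conf spark.sql.catalog.my_catalog.warehouse=/tmp/warehouse \
  --conf spark.sql.sources.v2.bucketing.enabled=true
```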
Support for storage-partitioned joins (SPJ) was added to Iceberg in PR #6371 and released in Apache Iceberg 1.2.0 on March 20, 2023. The 0.14.1 runtime you're using predates that release, so you'll need to upgrade to Iceberg 1.2.0 or later.
On the Spark side, SPJ support for v2 sources was only added in Spark 3.3, so earlier Spark versions can't benefit from this feature either.
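With the Iceberg version bumped, a hedged sketch of a launch that enables SPJ might look like the following. The extra two properties are the ones the Iceberg documentation lists alongside spark.sql.sources.v2.bucketing.enabled for SPJ; verify the exact set against the docs for your Spark and Iceberg versions:

```shell
# Sketch: Spark 3.3 + Iceberg >= 1.2.0 with SPJ-related properties enabled.
# Property names below are from the Iceberg SPJ docs; double-check them
# against your versions before relying on this.
spark-shell \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.2.0 \
  --conf spark.sql.sources.v2.bucketing.enabled=true \
  --conf spark.sql.iceberg.planning.preserve-data-grouping=true \
  --conf spark.sql.requireAllClusterKeysForCoPartition=false
```

With these set, and both tables sharing an identical partition spec, the Exchange nodes on both sides of the join should drop out of the plan.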