I have a DataFrame which I am trying to partition by a column, sort by that column, and save in Parquet format using the following command:
df.write().format("parquet")
.partitionBy("dynamic_col")
.sortBy("dynamic_col")
.save("test.parquet");
I get the following error:
reason: User class threw exception: org.apache.spark.sql.AnalysisException: 'save' does not support bucketing right now;
Is save(...) not allowed? Is only saveAsTable(...), which saves the data to Hive, allowed?
Any suggestions would be helpful.
The problem is that sortBy is currently (as of Spark 2.3.1) supported only together with bucketing, bucketing needs to be used in combination with saveAsTable, and the bucket sorting column must not be part of the partition columns.
So you have two options:
Do not use sortBy:
df.write
  .format("parquet")
  .partitionBy("dynamic_col")   // one subdirectory per distinct value of dynamic_col
  .option("path", output_path)
  .save()
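If the reason for sortBy was just to get ordered rows inside each output file, a workaround worth knowing is Dataset.sortWithinPartitions, which sorts within each Spark partition before the write and needs no bucketing, so plain save() works. This is a sketch, not part of the original answer; sort_col is a placeholder for whatever column you want the files ordered by:

df.sortWithinPartitions("sort_col")   // sort_col is a placeholder column name
  .write
  .format("parquet")
  .partitionBy("dynamic_col")
  .option("path", output_path)
  .save()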
Use sortBy with bucketing and save it through the metastore using saveAsTable:
df.write
  .format("parquet")
  .partitionBy("dynamic_col")
  .bucketBy(n, bucket_col)      // bucket_col must not be one of the partition columns
  .sortBy(bucket_col)           // sortBy is only valid together with bucketBy
  .option("path", output_path)
  .saveAsTable(table_name)
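For completeness, here is a minimal end-to-end sketch of the second option, assuming a SparkSession with Hive support; the table name, bucket count, path, and sample data are made up for illustration:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("bucketed-write-sketch")
  .enableHiveSupport()            // saveAsTable with buckets goes through the metastore
  .getOrCreate()

import spark.implicits._

// Illustrative data: dynamic_col drives partitioning, id drives bucketing.
val df = Seq(
  ("a", 1, 10.0),
  ("a", 2, 20.0),
  ("b", 3, 30.0)
).toDF("dynamic_col", "id", "value")

df.write
  .format("parquet")
  .partitionBy("dynamic_col")
  .bucketBy(4, "id")              // id is not a partition column
  .sortBy("id")
  .option("path", "/tmp/test_bucketed")
  .saveAsTable("test_bucketed")

// Reading via the metastore preserves the bucketing metadata.
spark.table("test_bucketed").show()

Note that reading the table back with spark.table(...) rather than spark.read.parquet(...) is what lets Spark see the bucket information, since it is stored in the metastore, not in the Parquet files themselves.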