
Error using spark 'save' does not support bucketing right now

I have a DataFrame that I am trying to partition by a column, sort by that same column, and save in Parquet format using the following command:

df.write().format("parquet")
  .partitionBy("dynamic_col")
  .sortBy("dynamic_col")
  .save("test.parquet");

I get the following error:

reason: User class threw exception: org.apache.spark.sql.AnalysisException: 'save' does not support bucketing right now;

Is save(...) not allowed here? Is only saveAsTable(...) allowed, which saves the data to Hive?

Any suggestions are helpful.

asked Oct 14 '18 by Kans

1 Answer

The problem is that, as of Spark 2.3.1, sortBy is supported only together with bucketing, bucketing must be used in combination with saveAsTable, and the bucket sorting column must not be one of the partition columns.

So you have two options:

  1. Do not use sortBy:

    df.write
    .format("parquet")
    .partitionBy("dynamic_col")
    .option("path", output_path)
    .save()
    
  2. Use sortBy with bucketing and save it through the metastore using saveAsTable:

    df.write
    .format("parquet")
    .partitionBy("dynamic_col")
    .bucketBy(n, bucket_col)
    .sortBy(bucket_col)
    .option("path", output_path)
    .saveAsTable(table_name)
    
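A third option worth knowing about: if what you actually need is rows ordered inside each output file, sortWithinPartitions works with plain save(), because it is a DataFrame transformation rather than a bucketing option on the writer. A minimal sketch, assuming the column and path names from the question; sort_col is a hypothetical secondary column (sorting within partitions by dynamic_col itself would be a no-op, since partitionBy gives each output directory a single value of that column):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("sorted-parquet").getOrCreate()

// Assumed input; replace with your actual source.
val df = spark.read.parquet("input.parquet")

df.sortWithinPartitions("sort_col")   // orders rows within each task's partition
  .write
  .format("parquet")
  .partitionBy("dynamic_col")
  .save("test.parquet")               // no bucketing involved, so save() is fine
```

Note that this orders rows per in-memory partition, not globally; if you need each dynamic_col directory fully sorted, repartition by dynamic_col first.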
answered Oct 23 '22 by David Vrba