Spark df.write.partitionBy runs very slow

I have a data frame that takes ~11 GB when saved in Parquet format. Reading it into a dataframe and writing it out as JSON takes 5 minutes. When I add partitionBy("day"), it takes hours to finish. I understand that distributing the rows across partitions is the costly step. Is there a way to make it faster? Would sorting the files help?

Example:

Runs in 5 minutes:

df = spark.read.parquet(source_path)
df.write.json(output_path)

Runs for hours:

spark.read.parquet(source_path).createOrReplaceTempView("source_table")
sql="""
select cast(trunc(date,'yyyymmdd') as int) as day, a.*
from source_table a"""
spark.sql(sql).write.partitionBy("day").json(output_path)
asked Jul 23 '17 by Gluz

1 Answer

Try adding a repartition("day") before the write, like this:

spark
  .sql(sql)
  .repartition("day")
  .write
  .partitionBy("day")
  .json(output_path)

This should speed up the write considerably.
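
The usual reason this helps: without the repartition, each task in the final stage holds rows for many different days, so every task opens a separate writer for every day directory and produces a large number of small files. Repartitioning by "day" shuffles the data so that each day's rows are colocated before the write. As a rough sketch, reusing the sql string and output_path from the question (the 200 in the commented variant is just an illustrative shuffle-partition count, not something from the original answer):

df = spark.sql(sql)

# Shuffle so all rows with the same "day" value land in the same
# partitions, then write one output directory per day.
df.repartition("day").write.partitionBy("day").json(output_path)

# Alternatively, pass an explicit partition count to trade parallelism
# against the number of output files written per day:
# df.repartition(200, "day").write.partitionBy("day").json(output_path)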

answered Sep 20 '22 by Glennie Helles Sindholt