
How to control output files size in Spark Structured Streaming

We're considering using Spark Structured Streaming on a project. The input and output are Parquet files on an S3 bucket. Is it possible to control the size of the output files somehow? We're aiming for output files of 10-100 MB each. As I understand it, in the traditional batch approach we could determine the output file sizes by adjusting the number of partitions according to the size of the input dataset. Is something similar possible in Structured Streaming?
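For reference, this is a minimal sketch of the batch technique referred to above, where the partition count is derived from the input size so each task writes roughly one file; the bucket paths and the partition count are placeholder assumptions, not part of the original question:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("batch-resize").getOrCreate()

val df = spark.read.parquet("s3a://my-bucket/input/")

// Pick the partition count from the data size, e.g. totalInputBytes / targetFileBytes.
val numPartitions = 200
df.repartition(numPartitions)
  .write
  .parquet("s3a://my-bucket/output/")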

asked Oct 20 '25 by r.gl

1 Answer

In Spark 2.2 or later, the best option is to set spark.sql.files.maxRecordsPerFile:

spark.conf.set("spark.sql.files.maxRecordsPerFile", n)

where n is tuned to reflect the average serialized size of a row, so that n rows land in your 10-100 MB target per file.

See

  • SPARK-18775 - Limit the max number of records written per file.
  • apache/spark@354e936187708a404c0349e3d8815a47953123ec
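Putting this together, here is a minimal sketch (not from the original answer) of a streaming Parquet-to-Parquet job on S3 with the setting applied; the bucket paths, the schema, and the value 1000000 are illustrative assumptions you would tune for your own data:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().appName("parquet-stream").getOrCreate()

// Cap the number of records written into any single output file.
// Tune the value so that n * (average serialized row size) falls in the 10-100 MB range.
spark.conf.set("spark.sql.files.maxRecordsPerFile", 1000000L)

// Streaming file sources need an explicit schema.
val schema = new StructType()
  .add("id", LongType)
  .add("payload", StringType)

val input = spark.readStream
  .schema(schema)
  .parquet("s3a://my-bucket/input/")

val query = input.writeStream
  .format("parquet")
  .option("path", "s3a://my-bucket/output/")
  .option("checkpointLocation", "s3a://my-bucket/checkpoints/")
  .start()

query.awaitTermination()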
answered Oct 23 '25 by user10938362
