 

How do you control the size of the output file?

In Spark, what is the best way to control the file size of the output? For example, in log4j we can specify a max file size, after which the file rotates.

I am looking for a similar solution for Parquet files. Is there a max file size option available when writing a file?

I have a few workarounds, but none is good. If I want to limit files to 64 MB, one option is to repartition the data and write it to a temp location, then merge the files together based on their sizes in the temp location (a minimal sketch follows). But getting the correct file size is difficult.
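A minimal Scala sketch of that workaround, under stated assumptions: `estimatedBytes` is a hypothetical placeholder for your own guess of the on-disk size (which is exactly the hard part), and `tempPath` is a hypothetical staging directory:

    // Hypothetical sketch: choose a partition count targeting ~64 MB files.
    // `estimatedBytes` is a stand-in for your estimate of the on-disk size.
    val targetFileBytes = 64L * 1024 * 1024
    val numFiles = math.max(1, (estimatedBytes / targetFileBytes).toInt)
    df.repartition(numFiles).write.parquet(tempPath)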

asked Aug 28 '16 by user447359

People also ask

How do I check the size of a file in PySpark?

Similar to Python Pandas, you can get the size and shape of a PySpark (Spark with Python) DataFrame by running the count() action to get the number of rows and len(df.columns) to get the number of columns.

What is parquet block size?

The default value of the parquet block-size parameter is 268435456 (256 MB), the same size as file system chunk sizes. In previous versions of Drill, the default value was 536870912 (512 MB).
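Note that this setting bounds Parquet row groups within a file, not the total file size. If you want to tune it from Spark, here is a minimal sketch; it assumes your Spark/Parquet versions honor the standard Parquet-Hadoop key parquet.block.size (values are in bytes):

    // Sketch: set the Parquet row-group (block) size before writing.
    sc.hadoopConfiguration.setInt("parquet.block.size", 256 * 1024 * 1024)
    df.write.parquet(path)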


1 Answer

Spark cannot directly control the size of Parquet files, because the DataFrame in memory has to be encoded and compressed before it is written to disk. Until that process finishes, there is no way to estimate the actual file size on disk.

So my solution is:

  • Write the DataFrame to HDFS, df.write.parquet(path)
  • Get the directory size and calculate the number of files

    import org.apache.hadoop.fs.{FileSystem, Path}

    val fs = FileSystem.get(sc.hadoopConfiguration)
    val dirSize = fs.getContentSummary(new Path(path)).getLength
    // aim for roughly 512 MB per file; keep at least one partition
    val fileNum = math.max(1, (dirSize / (512L * 1024 * 1024)).toInt)
  • Read the directory and re-write to HDFS

    val df = sqlContext.read.parquet(path)
    df.coalesce(fileNum).write.parquet(another_path)

    Do NOT reuse the original df here, otherwise it will trigger your job twice.

  • Delete the old directory and rename the new directory back

    fs.delete(new Path(path), true)
    fs.rename(new Path(another_path), new Path(path))

This solution has the drawback that it writes the data twice, which doubles the disk IO, but for now there is no other way.
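Putting the steps together, a rough end-to-end sketch, assuming sc and sqlContext are in scope as above; resizeParquet and the "_resized" staging suffix are hypothetical names:

    import org.apache.hadoop.fs.{FileSystem, Path}

    // Hypothetical helper combining the steps above: rewrite a Parquet
    // directory so each output file is roughly `targetBytes` on disk.
    def resizeParquet(path: String, targetBytes: Long = 512L * 1024 * 1024): Unit = {
      val fs = FileSystem.get(sc.hadoopConfiguration)
      val dirSize = fs.getContentSummary(new Path(path)).getLength
      val fileNum = math.max(1, (dirSize / targetBytes).toInt)
      val tmp = path + "_resized"              // hypothetical staging directory
      sqlContext.read.parquet(path).coalesce(fileNum).write.parquet(tmp)
      fs.delete(new Path(path), true)          // drop the original files
      fs.rename(new Path(tmp), new Path(path)) // move resized files into place
    }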

answered Sep 23 '22 by soulmachine