When I'm using Spark, I sometimes run into one huge file in a Hive table, and other times I'm trying to process many smaller files in a Hive table.
I understand that when tuning Spark jobs, the right approach depends on whether or not the files are splittable. This page from Cloudera says we should be aware of whether or not the files are splittable:
...For example, if your data arrives in a few large unsplittable files...
How do I know if my file is splittable?
If the file is splittable, how do I know how many partitions to use?
Is it better to err on the side of more partitions if I'm trying to write code that will work on any Hive table, i.e. either of the two cases described above?
Since Spark accepts Hadoop input files, splittability is determined by the compression codec. Among the common codecs, only bzip2-compressed files are splittable; files compressed with zlib, gzip, LZO, LZ4, or Snappy are not (but see EDIT 2 below for a correction regarding LZO).
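One quick way to check in practice is to read the file and look at how many partitions Spark gives you: an unsplittable file always comes back as a single partition no matter how large it is. A minimal sketch, assuming the HDFS paths are hypothetical placeholders:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("splittability-check"));

// gzip is not splittable: the whole file always comes back as one partition.
JavaRDD<String> gzipped = sc.textFile("hdfs:///data/big_file.gz");
System.out.println("gzip partitions:  " + gzipped.getNumPartitions());

// bzip2 is splittable: Spark can create roughly one partition per HDFS block.
JavaRDD<String> bzipped = sc.textFile("hdfs:///data/big_file.bz2");
System.out.println("bzip2 partitions: " + bzipped.getNumPartitions());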
Regarding your question on partitions: Hive table partitioning does not depend on the file format you use. It depends on the contents of the file, i.e. the values of the partition column(s), such as a date column.
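To make that concrete, here is a hedged sketch of writing a Hive table partitioned by a date column; the table and column names (raw_events, event_date, events_by_date) are made up for illustration:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
        .appName("partitioned-write")
        .enableHiveSupport()
        .getOrCreate();

Dataset<Row> events = spark.table("raw_events");  // hypothetical source table

// Each distinct value of event_date becomes its own partition directory
// (e.g. .../events_by_date/event_date=2017-01-01/), regardless of file format.
events.write()
      .partitionBy("event_date")
      .saveAsTable("events_by_date");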
EDIT 1: Have a look at this SE question and this working code example of Spark reading a zip file; the snippet below counts the lines in each file read via wholeTextFiles.
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import scala.Tuple2;
import java.util.List;

// wholeTextFiles returns one record per file: (file path, full file contents).
JavaPairRDD<String, String> fileNameContentsRDD = javaSparkContext.wholeTextFiles(args[0]);
JavaRDD<String> lineCounts = fileNameContentsRDD.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> fileNameContent) throws Exception {
        String content = fileNameContent._2();
        // Count lines by splitting the whole file contents on line breaks.
        int numLines = content.split("[\r\n]+").length;
        return fileNameContent._1() + ": " + numLines;
    }
});
List<String> output = lineCounts.collect();
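Note that wholeTextFiles reads each file as a single record, so each file's entire contents must fit in memory on one executor; it suits many small files rather than one huge file.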
EDIT 2:
LZO files can in fact be splittable: an LZO file can be split as long as the splits occur on compressed block boundaries, which in practice means building an index of those boundaries first (e.g. with the hadoop-lzo library's LzoIndexer).
Refer to this article for more details.
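As a hedged sketch of reading an indexed LZO file in Spark: this assumes the third-party hadoop-lzo library is on the classpath (LzoTextInputFormat comes from that library), that the file has been indexed so a .index file sits next to it, and the path is a hypothetical placeholder:

import com.hadoop.mapreduce.LzoTextInputFormat;  // from the hadoop-lzo library
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.spark.api.java.JavaPairRDD;

// Reuses the javaSparkContext from the snippet above. With an index present,
// the LZO file can be split into multiple partitions instead of one.
JavaPairRDD<LongWritable, Text> lzoLines = javaSparkContext.newAPIHadoopFile(
        "hdfs:///data/big_file.lzo",  // hypothetical path
        LzoTextInputFormat.class,
        LongWritable.class,
        Text.class,
        javaSparkContext.hadoopConfiguration());
System.out.println("LZO partitions: " + lzoLines.getNumPartitions());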