 

Why can't hadoop split up a large text file and then compress the splits using gzip?

I've recently been looking into hadoop and HDFS. When you load a file into HDFS, it will normally split the file into 64MB chunks and distribute these chunks around your cluster. Except it can't do this with gzip'd files because a gzip'd file can't be split. I completely understand why this is the case (I don't need anyone explaining why a gzip'd file can't be split up). But why couldn't HDFS take a plain text file as input and split it like normal, then compress each split using gzip separately? When any split is accessed, it's just decompressed on the fly.

In my scenario, each split is compressed completely independently. There are no dependencies between splits, so you don't need the entire original file to decompress any one of the splits. Compare that to the approach taken by this patch: https://issues.apache.org/jira/browse/HADOOP-7076 (note that it is not what I'd want).
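To make that property concrete, here is a toy sketch using plain java.util.zip (nothing Hadoop-specific; the strings and class name are only illustrative): two "splits" are gzipped independently, and either one can be decompressed without touching the other or the original file.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    // Toy illustration: each split is its own self-contained gzip stream.
    public class IndependentSplits {
        static byte[] gzip(String s) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(s.getBytes(StandardCharsets.UTF_8));
            }
            return bos.toByteArray();
        }

        static String gunzip(byte[] b) throws IOException {
            try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(b))) {
                return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
            }
        }

        public static void main(String[] args) throws IOException {
            byte[] split0 = gzip("lines 1..N of the original file\n");
            byte[] split1 = gzip("lines N+1..M of the original file\n");
            // Decompressing split1 needs neither split0 nor the original file.
            System.out.println(gunzip(split1));
        }
    }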

This seems pretty basic... what am I missing? Why couldn't this be done? Or if it could be done, why haven't the hadoop developers gone down this route? It seems strange given how much discussion I've found from people wanting splittable gzip'd files in HDFS.

asked Jun 28 '11 by onlynone


People also ask

Can we compress a file in HDFS?

It's always good to use compression while storing data in HDFS. HDFS supports various types of compression algorithms such as LZO, BZip2, Snappy, GZIP, and so on. Every algorithm has its own pros and cons when you consider the time taken to compress and decompress and the space efficiency.
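For illustration, here is a minimal Java sketch of writing a file into HDFS through one of those codecs. The paths come from command-line arguments and the choice of GzipCodec is an assumption made for the example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    import java.io.InputStream;
    import java.io.OutputStream;

    // Minimal sketch: stream-copy an uncompressed file into HDFS through a codec.
    public class CompressIntoHdfs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

            Path src = new Path(args[0]);                                // uncompressed input
            Path dst = new Path(args[1] + codec.getDefaultExtension());  // e.g. appends ".gz"

            try (InputStream in = fs.open(src);
                 OutputStream out = codec.createOutputStream(fs.create(dst))) {
                IOUtils.copyBytes(in, out, 4096);  // copy through the compressing stream
            }
        }
    }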

How do I compress a file in Hadoop?

Using Pig, for example:

    SET output.compression.enabled true;
    SET output.compression.codec org.apache.hadoop.io.compress.BZip2Codec;
    inputFiles = LOAD '/input/directory/uncompressed' USING PigStorage();
    STORE inputFiles INTO '/output/directory/compressed/' USING PigStorage();

How does Hadoop split the data?

When Hadoop submits a job, it splits the input data logically (into input splits) and these are processed by the mappers. The number of mappers is equal to the number of input splits created. InputFormat.getSplits() is responsible for generating the input splits; each split then serves as the input for one mapper task.
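As a minimal sketch (assuming the newer org.apache.hadoop.mapreduce API; the class name is made up for the example): FileInputFormat.isSplitable() is the hook that decides whether getSplits() may cut a file into more than one split, and TextInputFormat already returns false there for files compressed with a non-splittable codec such as gzip. The class below simply forces one split per file.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    // Sketch: an input format that never splits a file, so each file is read
    // by exactly one mapper, regardless of its size.
    public class WholeFileTextInputFormat extends TextInputFormat {
        @Override
        protected boolean isSplitable(JobContext context, Path file) {
            return false; // one input split per file
        }
    }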

What is block compression in Hadoop?

If it is a text file then compression happens at the file level. But if it is a SequenceFile then compression can be at the record level or the block level. Note that 'block' here means a buffer used by the SequenceFile, not the HDFS block. With block compression, multiple records are compressed into a block at once.
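A minimal Java sketch of writing a block-compressed SequenceFile; the output path, the key/value types and the choice of GzipCodec are assumptions made for the example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class BlockCompressedSeqFile {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

            // BLOCK compression buffers many records and compresses them together,
            // which usually compresses better than per-record compression.
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(new Path(args[0])),
                    SequenceFile.Writer.keyClass(LongWritable.class),
                    SequenceFile.Writer.valueClass(Text.class),
                    SequenceFile.Writer.compression(
                        SequenceFile.CompressionType.BLOCK, codec))) {
                writer.append(new LongWritable(1), new Text("first record"));
                writer.append(new LongWritable(2), new Text("second record"));
            }
        }
    }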


2 Answers

The simple reason is the design principle of "separation of concerns".

If you do what you propose, then HDFS must know what the actual bits and bytes of the file mean, and it must also be able to reason about them (i.e. extract, decompress, etc.). In general you don't want that kind of mixing of responsibilities in software.

So the 'only' part that has to understand what the bits mean is the application that reads them, which is commonly written using the MapReduce part of Hadoop.

As stated in the Javadoc of HADOOP-7076 (I wrote that thing ;) ):

Always remember that there are alternative approaches:

  • Decompress the original gzipped file, split it into pieces and recompress the pieces before offering them to Hadoop.
    For example: Splitting gzipped logfiles without storing the ungzipped splits on disk (a sketch of this idea follows below this list).
  • Decompress the original gzipped file and compress it using a different, splittable codec. For example BZip2Codec, or don't compress it at all.
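A minimal sketch of the first alternative, using only the JDK (no Hadoop classes): stream the gzipped logfile, cut it into fixed-size chunks of lines, and re-gzip each chunk on the fly, so the uncompressed data is never stored on disk. The chunk size, the paths and the class name are illustrative assumptions, not taken from the original answer.

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class RegzipInChunks {
        static final int LINES_PER_CHUNK = 1_000_000; // illustrative, tune to taste

        public static void main(String[] args) throws IOException {
            String src = args[0];     // e.g. big.log.gz
            String prefix = args[1];  // e.g. /data/chunks/part-

            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    new GZIPInputStream(new FileInputStream(src)), StandardCharsets.UTF_8))) {
                String line;
                int part = 0, lines = 0;
                Writer out = null;
                while ((line = in.readLine()) != null) {
                    if (out == null) {
                        // Each chunk becomes an independent gzip file of its own.
                        out = new BufferedWriter(new OutputStreamWriter(
                                new GZIPOutputStream(new FileOutputStream(prefix + (part++) + ".gz")),
                                StandardCharsets.UTF_8));
                    }
                    out.write(line);
                    out.write('\n');
                    if (++lines == LINES_PER_CHUNK) { // close this chunk; the next line opens a new one
                        out.close();
                        out = null;
                        lines = 0;
                    }
                }
                if (out != null) {
                    out.close();
                }
            }
        }
    }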

HTH

answered Nov 03 '22 by Niels Basjes


HDFS has the limited scope of being only a distributed file-system service and doesn't perform heavy-lifting operations such as compressing data. The actual work of compressing data is delegated to distributed execution frameworks like MapReduce, Spark, Tez, etc. So compression of data/files is the concern of the execution framework and not of the file system.
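A minimal sketch of that division of labour (the job name and the paths are placeholders): a map-only MapReduce job that reads text input, writes each line back out keyed by its byte offset, and asks the framework, not HDFS, to gzip the output files.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CompressedCopyJob {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "compressed-copy");
            job.setJarByClass(CompressedCopyJob.class);
            job.setMapperClass(Mapper.class);   // identity mapper
            job.setNumReduceTasks(0);           // map-only job
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            // The execution framework, not HDFS, applies the codec to the output files.
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }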

Additionally, the presence of container file formats like SequenceFile, Parquet, etc. negates the need for HDFS to compress data blocks automatically as suggested in the question.

So, to summarize: for design-philosophy reasons, any compression of data must be done by the execution engine, not by the file-system service.

answered Nov 03 '22 by rogue-one