
Spark/Hadoop throws exception for large LZO files

I'm running an EMR Spark job on some LZO-compressed log files stored in S3. There are several log files stored in the same folder, e.g.:

...
s3://mylogfiles/2014-08-11-00111.lzo
s3://mylogfiles/2014-08-11-00112.lzo
...

In the spark-shell I'm running a job that counts the lines in the files. If I count the lines of each file individually, there is no problem, for example:

// Works fine
...
sc.textFile("s3://mylogfiles/2014-08-11-00111.lzo").count()
sc.textFile("s3://mylogfiles/2014-08-11-00112.lzo").count()
...

If I use a wildcard to load all the files with a one-liner, I get two kinds of exceptions.

// One-liner throws exceptions
sc.textFile("s3://mylogfiles/*.lzo").count()

The exceptions are:

java.lang.InternalError: lzo1x_decompress_safe returned: -6
    at com.hadoop.compression.lzo.LzoDecompressor.decompressBytesDirect(Native Method)

and

java.io.IOException: Compressed length 1362309683 exceeds max block size 67108864 (probably corrupt file)
    at com.hadoop.compression.lzo.LzopInputStream.getCompressedData(LzopInputStream.java:291)

It seems to me that the message in the last exception hints at the solution, but I don't know how to proceed. Is there a limit to how large LZO files are allowed to be, or what is the issue?

My question is: can I run Spark queries that load all the LZO-compressed files in an S3 folder without getting I/O-related exceptions?

There are 66 files of roughly 200 MB each.

EDIT: The exceptions only occur when running Spark with the Hadoop 2 core libs (AMI 3.1.0). When running with the Hadoop 1 core libs (AMI 2.4.5), things work fine. Both cases were tested with Spark 1.0.1.

Asked Aug 11 '14 by Pimin Konstantin Kefaloukos

2 Answers

jkgeyti's answer works fine, but:

LzoTextInputFormat introduces a performance hit, since it checks for an .index file for every LZO file. This can be especially painful with many LZO files on S3 (I've seen delays of up to several minutes, caused by thousands of requests to S3).

If you know up front that your LZO files are not splittable, a more performant solution is to create a custom, non-splittable input format:

import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.JobContext
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

class NonSplittableTextInputFormat extends TextInputFormat {
    // Never split input files: each LZO file is read whole by a single task.
    override def isSplitable(context: JobContext, file: Path): Boolean = false
}

and read the files like this:

// `context` is the SparkContext (sc in the spark-shell)
context.newAPIHadoopFile("s3://mylogfiles/*.lzo",
  classOf[NonSplittableTextInputFormat],
  classOf[org.apache.hadoop.io.LongWritable],
  classOf[org.apache.hadoop.io.Text])
  .map(_._2.toString)
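
As a quick sanity check, something like the following should work in the spark-shell (a sketch; it assumes the NonSplittableTextInputFormat above is defined and that sc is the shell's SparkContext). With a non-splittable format each LZO file becomes exactly one partition, so the partition count should equal the number of input files (66 in the question):

val logs = sc.newAPIHadoopFile("s3://mylogfiles/*.lzo",
    classOf[NonSplittableTextInputFormat],
    classOf[org.apache.hadoop.io.LongWritable],
    classOf[org.apache.hadoop.io.Text])
  .map(_._2.toString)

// One partition per (unsplit) LZO file, so this should equal the file count.
println(logs.partitions.length)

// The line count the question was after.
println(logs.count())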
Answered by Eric Eijkelenboom

I haven't run into this specific issue myself, but it looks like .textFile expects files to be splittable, much like Cedrik's problem of Hive insisting on using CombineFileInputFormat.

You could either index your lzo files (a sketch of the indexing step follows the snippet below), or try using the LzoTextInputFormat - I'd be interested to hear if that works better on EMR:

sc.newAPIHadoopFile("s3://mylogfiles/*.lzo",
    classOf[com.hadoop.mapreduce.LzoTextInputFormat],
    classOf[org.apache.hadoop.io.LongWritable],
    classOf[org.apache.hadoop.io.Text])
  .map(_._2.toString) // if you just want an RDD[String] without writing a new InputFormat
  .count
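
If you go the indexing route instead, the hadoop-lzo distribution ships an indexer that writes a .index file next to each .lzo file, which is what makes the files splittable for LzoTextInputFormat. A rough sketch of the invocation (the jar path is an assumption and depends on where hadoop-lzo is installed on your cluster):

# Runs a MapReduce job that writes a .index file next to every .lzo file under the given path.
# The jar location below is a placeholder; adjust it for your installation.
hadoop jar /path/to/hadoop-lzo.jar com.hadoop.compression.lzo.DistributedLzoIndexer s3://mylogfiles/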
Answered by jkgeyti