Spark read.json throwing java.io.IOException: Too many bytes before newline

I am getting the following error when reading a large 6 GB single-line JSON file:

Job aborted due to stage failure: Task 5 in stage 0.0 failed 1 times, most recent failure: Lost task 5.0 in stage 0.0 (TID 5, localhost): java.io.IOException: Too many bytes before newline: 2147483648

Spark does not read JSON files that are split across lines, so the entire 6 GB JSON file is on a single line:

jf = sqlContext.read.json("jlrn2.json")

Configuration:

spark.driver.memory 20g
asked by stackit

1 Answer

Yep, you have more than Integer.MAX_VALUE bytes on a single line; the 2147483648 in the error is exactly Integer.MAX_VALUE + 1. You need to split it up.

Keep in mind that Spark expects each line to be a valid JSON document, not the file as a whole. Below is the relevant line from the Spark SQL Programming Guide:

Note that the file that is offered as a json file is not a typical JSON file. Each line must contain a separate, self-contained valid JSON object. As a consequence, a regular multi-line JSON file will most often fail.

So if your JSON document is in the form...

[
  { [record] },
  { [record] }
]

You'll want to change it to:

{ [record] }
{ [record] }
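
If each record fits in memory on its own, you can do the conversion as a streaming rewrite rather than loading the whole 6 GB file at once. Below is a minimal sketch assuming the top-level structure is a single JSON array; it uses the third-party ijson library (pip install ijson), and the jlrn2.jsonl output name is just an illustration:

import json
import ijson  # third-party streaming JSON parser

with open("jlrn2.json", "rb") as src, open("jlrn2.jsonl", "w") as dst:
    # 'item' yields each element of the top-level array one at a
    # time, so the full 6 GB file is never held in memory at once.
    for record in ijson.items(src, "item"):
        # ijson parses numbers as Decimal; default=float lets
        # json.dumps serialize them (at the cost of float precision).
        dst.write(json.dumps(record, default=float) + "\n")

After the rewrite each line is a self-contained JSON document, so sqlContext.read.json("jlrn2.jsonl") should load one row per record.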
answered by Mike Park