 

Spark Parquet read error : java.io.EOFException: Reached the end of stream with XXXXX bytes left to read

While reading Parquet files in Spark, you may face the following error:


App > Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 44, 10.23.5.196, executor 2): java.io.EOFException: Reached the end of stream with 193212 bytes left to read
App >   at org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
App >   at org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
App >   at org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
App >   at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
App >   at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
App >   at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:301)
App >   at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:256)
App >   at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:159)
App >   at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
App >   at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:124)
App >   at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:215)


This occurs for the following Spark commands:

val df = spark.read.parquet("s3a://.../file.parquet")
df.show(5, false)
Amresh Jha asked Oct 30 '19 06:10


1 Answer

For me the commonly suggested fixes didn't do the trick, but the following did:

--conf spark.hadoop.fs.s3a.experimental.input.fadvise=sequential

Not sure why, but the hint came from a related issue report and the documentation of the S3A input policy options.
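If you prefer to set the option in code rather than on spark-submit, a minimal sketch would look like the following (this assumes a Scala application that builds its own SparkSession and has the hadoop-aws S3A connector on the classpath; the app name is just a placeholder):

import org.apache.spark.sql.SparkSession

// Set the S3A input policy before the first S3A filesystem is created,
// so Parquet reads use the sequential fadvise policy.
val spark = SparkSession.builder()
  .appName("parquet-eof-workaround")  // hypothetical app name
  .config("spark.hadoop.fs.s3a.experimental.input.fadvise", "sequential")
  .getOrCreate()

val df = spark.read.parquet("s3a://.../file.parquet")
df.show(5, false)

Spark forwards spark.hadoop.* keys into the underlying Hadoop Configuration, which is why setting the option on the session builder has the same effect as passing --conf on the command line.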

sspaeti answered Dec 15 '22 10:12