
Spark in AWS: "S3AbortableInputStream: Not all bytes were read from the S3ObjectInputStream"

I'm running a PySpark application in:

  • emr-5.8.0
  • Hadoop distribution: Amazon 2.7.3
  • Spark 2.2.0

I'm running on a very large cluster. The application reads a few input files from S3. One of these is loaded into memory and broadcast to all the nodes. The other is distributed to the disks of each node in the cluster using the SparkFiles functionality. The application works, but performance is slower than expected for larger jobs. Looking at the log files, I see the following warning repeated almost constantly:
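Roughly, the relevant part of the setup looks like this (a simplified sketch; the bucket, file names, and parsing are just placeholders):

    from pyspark import SparkContext, SparkFiles

    sc = SparkContext(appName="example-app")

    # Small lookup file: read on the driver, then broadcast to every executor.
    lookup_lines = sc.textFile("s3://my-bucket/lookup.tsv").collect()
    lookup = dict(line.split("\t", 1) for line in lookup_lines)
    bc_lookup = sc.broadcast(lookup)

    # Larger reference file: shipped to each node's local disk via SparkFiles.
    sc.addFile("s3://my-bucket/reference.dat")

    def process(record):
        ref_path = SparkFiles.get("reference.dat")  # local path on the executor
        return (record, bc_lookup.value.get(record), ref_path)

    results = sc.textFile("s3://my-bucket/input/").map(process)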

WARN S3AbortableInputStream: Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.

It tends to happen after a message about accessing the file that was loaded into memory and broadcast. Is this warning something to worry about? How can I avoid it?

Google searching brings up several people dealing with this warning in native Hadoop applications, but I've found nothing about it for Spark or PySpark, and I can't figure out how those solutions would apply in my case.

Thanks!

asked Jan 10 '18 by Danny

2 Answers

Ignore it. The more recent versions of the AWS SDK always tell you off when you call abort() on the input stream, even when it's what you need to do when moving around a many-GB file. For small files, yes, reading to the EOF is the right thing to do, but with big files, no.

See: SDK repeatedly complaining "Not all bytes were read from the S3ObjectInputStream"

If you see this a lot, and you are working with columnar data formats such as ORC and Parquet, switch the input streams over to random IO instead of sequential by setting the property fs.s3a.experimental.fadvise to random. This stops it from ever trying to read the whole file; instead it reads only small blocks. Very bad for full-file reads (including .gz files), but it transforms columnar IO.
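In a PySpark job, that property can be passed through like any other Hadoop option via a spark.hadoop.-prefixed config (a minimal sketch; the app name, bucket, and path are placeholders, and whether the setting takes effect depends on the S3A version actually on the cluster):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("s3a-random-io")
        # "spark.hadoop." options are passed straight through to the Hadoop/S3A config.
        .config("spark.hadoop.fs.s3a.experimental.fadvise", "random")
        .getOrCreate()
    )

    # Columnar reads now issue small ranged GETs instead of streaming whole objects.
    df = spark.read.parquet("s3a://my-bucket/path/to/table/")
    df.show(5)

The same thing can be set at submit time with spark-submit --conf spark.hadoop.fs.s3a.experimental.fadvise=random.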

Note: there's a small fix in S3A for Hadoop 3.x on the final close (HADOOP-14596). It's up to the EMR team whether or not to backport it.

I'll add some text to the S3A troubleshooting docs. The ASF have never shipped a Hadoop release with this problem, but if people are playing mix-and-match with the AWS SDK (very brittle), then this may surface.

answered by stevel

Note: This only applies to non-EMR installations, as AWS doesn't offer s3a on EMR.


Before choosing to ignore the warnings or to alter your input streams via settings as in Steve Loughran's answer, make absolutely sure you're not using s3://bucket/path notation.

Starting with Spark 2, you should use the s3a protocol via s3a://bucket/path, which will likely address the warnings you're seeing (it did for us) and substantially boost the speed of S3 interactions. See this answer for a breakdown of the differences.
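For example, on a non-EMR cluster with the hadoop-aws and matching AWS SDK jars on the classpath, switching a read to the s3a:// scheme looks roughly like this (the bucket, path, and credentials-provider choice are placeholders for illustration):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("s3a-read")
        # Let the AWS SDK resolve credentials from the environment, an instance
        # profile, etc.; you can also set fs.s3a.access.key / fs.s3a.secret.key.
        .config("spark.hadoop.fs.s3a.aws.credentials.provider",
                "com.amazonaws.auth.DefaultAWSCredentialsProviderChain")
        .getOrCreate()
    )

    # Same data as before, but addressed through the S3A connector.
    df = spark.read.csv("s3a://my-bucket/path/input.csv", header=True)
    print(df.count())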

answered by bsplosion