Writing Spark dataframe as parquet to S3 without creating a _temporary folder

Using PySpark, I'm reading a DataFrame from Parquet files on Amazon S3 like this:

dataS3 = sql.read.parquet("s3a://" + s3_bucket_in)

This works without problems. But when I try to write the data

dataS3.write.parquet("s3a://" + s3_bucket_out)

I get the following exception:

py4j.protocol.Py4JJavaError: An error occurred while calling o39.parquet.
: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
Relative path in absolute URI: s3a://<s3_bucket_out>_temporary

It seems to me that Spark is trying to create a _temporary folder before writing into the given bucket. Can this be prevented somehow, so that Spark writes directly to the given output bucket?

asked by asmaier

2 Answers

You can't eliminate the _temporary folder, as it's used to keep the intermediate work of a query hidden until the query is complete.

But that's OK, as this isn't the problem here. The problem is that the output committer gets confused trying to write to the root of a bucket (it can't delete the root directory, you see).

You need to write to a subdirectory under the bucket, with a full prefix, e.g. s3a://mybucket/work/out.
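
For instance, adapting the snippet from the question (the bucket name and prefix here are placeholders, not from the original post):

# Write under a key prefix instead of the bucket root;
# "mybucket" and "work/out" are placeholder names.
dataS3.write.parquet("s3a://mybucket/work/out")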

I should add that committing data to S3A is not reliable, precisely because of the way it mimics rename() with something like ls -rlf src | xargs -p8 -I% "cp % dst/% && rm %". Because ls has delayed consistency on S3, it can miss newly created files and therefore fail to copy them.

See: Improving Apache Spark for the details.

Right now, you can only reliably commit to s3a by writing to HDFS and then copying. EMR's S3 connector works around this by using DynamoDB to offer a consistent listing.
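
A minimal sketch of that workaround, assuming the DataFrame from the question and placeholder paths (the copy step here uses Hadoop's distcp tool, one way to do the copy):

# Stage the output on HDFS first, where rename-based commits are safe.
# The staging path is a placeholder.
dataS3.write.parquet("hdfs:///staging/out")

Then copy the completed output to S3 in a second step, for example:

hadoop distcp hdfs:///staging/out s3a://mybucket/work/out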

answered by stevel

I had the same issue when writing to the root of an S3 bucket:

df.save("s3://bucketname")

I resolved it by adding a / after the bucket name:

df.save("s3://bucketname/")

answered by Lawrence Chan