Is anyone using S3 in Frankfurt with Hadoop/Spark 1.6.0?
I am trying to store the result of a job on S3. My dependencies are declared as follows:
"org.apache.spark" %% "spark-core" % "1.6.0" exclude("org.apache.hadoop", "hadoop-client"),
"org.apache.spark" %% "spark-sql" % "1.6.0",
"org.apache.hadoop" % "hadoop-client" % "2.7.2",
"org.apache.hadoop" % "hadoop-aws" % "2.7.2"
I have set the following configuration:
System.setProperty("com.amazonaws.services.s3.enableV4", "true")
sc.hadoopConfiguration.set("fs.s3a.endpoint", ""s3.eu-central-1.amazonaws.com")
When calling saveAsTextFile on my RDD, it starts fine and saves everything to S3. However, after some time, while it is moving the files from _temporary to the final output location, it yields the error:
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: XXXXXXXXXXXXXXXX, AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:1507)
at com.amazonaws.services.s3.transfer.internal.CopyCallable.copyInOneChunk(CopyCallable.java:143)
at com.amazonaws.services.s3.transfer.internal.CopyCallable.call(CopyCallable.java:131)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.copy(CopyMonitor.java:189)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:134)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
If I use the hadoop-client bundled with the Spark package, the transfer does not even start. The error occurs randomly: sometimes the job works and sometimes it doesn't.
In case you are using pyspark, the following worked for me:
aws_profile = "your_profile"
aws_region = "eu-central-1"
s3_bucket = "your_bucket"
# see https://github.com/jupyter/docker-stacks/issues/127#issuecomment-214594895
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages=org.apache.hadoop:hadoop-aws:2.7.3 pyspark-shell"
# If this doesn't work you might have to delete your ~/.ivy2 directory to reset your package cache.
# (see https://github.com/databricks/spark-redshift/issues/244#issuecomment-239950148)
import pyspark
sc = pyspark.SparkContext()
# see https://github.com/databricks/spark-redshift/issues/298#issuecomment-271834485
sc.setSystemProperty("com.amazonaws.services.s3.enableV4", "true")
# see https://stackoverflow.com/questions/28844631/how-to-set-hadoop-configuration-values-from-pyspark
hadoop_conf = sc._jsc.hadoopConfiguration()
# see https://stackoverflow.com/questions/43454117/how-do-you-use-s3a-with-spark-2-1-0-on-aws-us-east-2
hadoop_conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hadoop_conf.set("com.amazonaws.services.s3.enableV4", "true")
hadoop_conf.set("fs.s3a.access.key", access_id)
hadoop_conf.set("fs.s3a.secret.key", access_key)
# see https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
hadoop_conf.set("fs.s3a.endpoint", "s3." + aws_region + ".amazonaws.com")
sql = pyspark.sql.SparkSession(sc)
path = s3_bucket + "/your_file_on_s3"
dataS3 = sql.read.parquet("s3a://" + path)
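One caveat worth noting: sc.setSystemProperty (like System.setProperty) only sets the flag in the driver JVM. On a real cluster the executors sign S3 requests too, so it may help to pass the property to every JVM via Spark's standard extraJavaOptions settings. A minimal sketch, assuming a plain pyspark setup (the option names are standard Spark configuration keys, not something specific to this answer):

import pyspark

# enableV4 must be visible to every JVM that talks to S3:
# setSystemProperty covers the driver, extraJavaOptions covers the executors.
conf = pyspark.SparkConf().set(
    "spark.executor.extraJavaOptions",
    "-Dcom.amazonaws.services.s3.enableV4=true",
)
sc = pyspark.SparkContext(conf=conf)
sc.setSystemProperty("com.amazonaws.services.s3.enableV4", "true")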
Inspired by the other answers, running the following directly in the pyspark shell produced the desired output for me:
sc.setSystemProperty("com.amazonaws.services.s3.enableV4", "true")  # fails without this
hc = sc._jsc.hadoopConfiguration()
hc.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hc.set("com.amazonaws.services.s3.enableV4", "true")
hc.set("fs.s3a.endpoint", end_point)
hc.set("fs.s3a.access.key", access_key)
hc.set("fs.s3a.secret.key", secret_key)
data = sc.textFile("s3a://bucket/file")
data.take(3)
Choose your endpoint from the AWS list of S3 endpoints (https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region). I was able to fetch data from Asia Pacific (Mumbai) (ap-south-1), which is a Signature Version 4-only region.
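For example, for the Mumbai region the variables above might look like this (placeholder values; the endpoint string comes from the AWS list linked above):

end_point = "s3.ap-south-1.amazonaws.com"  # Asia Pacific (Mumbai), a V4-only region
access_key = "your_access_key_id"
secret_key = "your_secret_access_key"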
Please try to set the values below:
System.setProperty("com.amazonaws.services.s3.enableV4", "true")
val region = "eu-central-1"  // the region where your bucket is located
val hadoopConf = sc.hadoopConfiguration
hadoopConf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hadoopConf.set("com.amazonaws.services.s3.enableV4", "true")
hadoopConf.set("fs.s3a.endpoint", "s3." + region + ".amazonaws.com")
Set region to the region where the bucket is located; in my case it was eu-central-1.
Then add the dependency, via Gradle or by some other means:
dependencies {
compile 'org.apache.hadoop:hadoop-aws:2.7.2'
}
Hope it helps.