I'm trying to use distcp to copy some files from HDFS to Amazon S3. My Hadoop cluster connects to the internet through an HTTP proxy, but I can't figure out how to specify this when connecting to S3. I'm currently getting this error:
httpclient.HttpMethodDirector: I/O exception (org.apache.commons.httpclient.ConnectTimeoutException) caught when processing request: The host did not accept the connection within timeout of 60000 ms
This indicates that it's trying to connect directly to Amazon. How do I get distcp to use the proxy host?
I'm posting another answer here because this is the first SO question that comes up on Google when searching for an HDFS-to-S3 proxy setup, and in my opinion the existing answer is not the best approach.
Configuring S3 for HDFS is best done in the hdfs-site.xml file on each node. That way it works not only for distcp (copying from HDFS to S3 and vice versa), but also for Impala and potentially other Hadoop components that can use S3.
So, add the following properties to your hdfs-site.xml:
<property>
  <name>fs.s3a.access.key</name>
  <value>your_access_key</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>your_secret_key</value>
</property>
<property>
  <name>fs.s3a.proxy.host</name>
  <value>your_proxy_host</value>
</property>
<property>
  <name>fs.s3a.proxy.port</name>
  <value>your_proxy_port</value>
</property>