I created my Dockerrun.aws.json file and uploaded it while creating my Elastic Beanstalk (Docker) environment. I also uploaded the .dockercfg file created by the "docker login" command to the S3 bucket specified in the Dockerrun.aws.json configuration.
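For reference, the Authentication section of my Dockerrun.aws.json points at that bucket, roughly like this (the image name and port are placeholders):

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myorg/myapp",
    "Update": "true"
  },
  "Authentication": {
    "Bucket": "my-s3-bucket",
    "Key": ".dockercfg"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}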
However, when I attempt to start up the environment, I receive the error (bottom of post) stating that the EC2 instance doesn't have access to the .dockercfg file in the bucket. How do I make sure the Beanstalk application can access the config JSON file in the provided S3 bucket?
Thanks! (error below)
Instance i-64c62de7 — health: Severe (1 day)
Application deployment failed at 2016-02-27T04:30:54Z with exit status 1 and error: Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed.
Traceback (most recent call last):
  File "/opt/elasticbeanstalk/containerfiles/support/download_auth.py", line 18, in <module>
    download_auth(argv[1], argv[2], get_instance_identity()['document']['region'])
  File "/opt/elasticbeanstalk/containerfiles/support/download_auth.py", line 15, in download_auth
    key.get_contents_to_filename('/root/.dockercfg')
  File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 1712, in get_contents_to_filename
    response_headers=response_headers)
  File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 1650, in get_contents_to_file
    response_headers=response_headers)
  File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 1482, in get_file
    query_args=None)
  File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 1514, in _get_file_internal
    override_num_retries=override_num_retries)
  File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 343, in open
    override_num_retries=override_num_retries)
  File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 303, in open_read
    self.resp.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>910AD275D3E3110A</RequestId><HostId>682j0cjMsfurjyy/PGT3W9wRxI+4sh+rrESuw2WpInERcn4p4f9XGwBFdpBmDYQc</HostId></Error>
Failed to download authentication credentials dockercfg from my-s3-bucket.
You have to make sure the IAM role you are using has access to your bucket and key. Something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket"
      ]
    },
    {
      "Sid": "S3ObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
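If you want to attach this as an inline policy from the command line, something along these lines should work (the role and policy names here are only examples; use the instance profile role your environment actually runs with, such as the default aws-elasticbeanstalk-ec2-role):

aws iam put-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name dockercfg-bucket-access \
  --policy-document file://dockercfg-bucket-policy.json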
If you are not already doing this, you should point to an IAM instance profile from your .ebextensions rather than letting EB create its own, so that you can control this:
- namespace: aws:autoscaling:launchconfiguration
  option_name: IamInstanceProfile
  value: arn:aws:iam::xxxxxxxxx:instance-profile/yourRole
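In context, that option goes under option_settings in a config file inside your .ebextensions directory, for example a (hypothetical) .ebextensions/instance-profile.config:

option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: IamInstanceProfile
    value: arn:aws:iam::xxxxxxxxx:instance-profile/yourRole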