I'm running a Django web app on AWS Elastic Beanstalk that needs specific files available at runtime (specifically, an nltk corpus of stopwords). Since instances come and go, I copied the needed folder to the S3 bucket that Elastic Beanstalk created for my environment, and planned to add a copy command, using the AWS CLI, to my Elastic Beanstalk configuration file. But I can't get it to work.
Instances launched by my Beanstalk environment should have read access to the S3 bucket, because it is the bucket Beanstalk created automatically. Beanstalk also created an IAM role, aws-elasticbeanstalk-ec2-role, which is used as the instance profile attached to every instance it launches. This role includes the AWSElasticBeanstalkWebTier policy, which seems to grant both read and write access to the bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketAccess",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::elasticbeanstalk-*",
                "arn:aws:s3:::elasticbeanstalk-*/*"
            ]
        }
    ]
}
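As a sanity check, the role can be confirmed from an instance itself; a minimal sketch, where <my_bucket> stands in for the generated bucket name:

    # Run on an instance launched with aws-elasticbeanstalk-ec2-role;
    # a successful listing confirms the s3:List* permission applies.
    aws s3 ls s3://<my_bucket>/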
I tried adding the following command to .ebextensions/my_app.config:

commands:
  01_copy_nltk_data:
    command: aws s3 cp s3://<my_bucket>/nltk_data /usr/local/share/
But I get the following error when I try to deploy, even though I can see the folder in my S3 console:

Command failed on instance. Return code: 1 Output: An error occurred (404) when calling the HeadObject operation: Key "nltk_data" does not exist
Any ideas?
Thanks!
EC2 instances can be granted access to S3 through an IAM role: create a role with a policy that allows the S3 actions you need and attach it to the instance as its instance profile.

You can then use cp to copy files from an S3 bucket to the local filesystem, for example:

    $ aws s3 cp s3://bucket/folder/file.txt .
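For a whole directory tree rather than a single object, sync mirrors an S3 prefix to a local path; a minimal sketch, with placeholder bucket and paths:

    # Copy everything under the prefix, preserving the folder structure;
    # s3://bucket/folder and /usr/local/share/nltk_data are placeholders.
    aws s3 sync s3://bucket/folder /usr/local/share/nltk_data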
AWS Support had the answer: my nltk_data folder had subfolders and files inside it, so the aws s3 cp command needed the --recursive option.
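Putting it together, the working .ebextensions command would look something like this (the bucket name is a placeholder, and the destination is spelled out as /usr/local/share/nltk_data, since with --recursive the command copies the contents of the prefix into the destination directory):

commands:
  01_copy_nltk_data:
    command: aws s3 cp s3://<my_bucket>/nltk_data /usr/local/share/nltk_data --recursive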