EC2 User Data to fetch S3 Object

I have an EC2 instance in a public subnet.
The EC2 instance has an S3 Full Access role attached to it.
I have a .jar file in an S3 folder location.
The S3 bucket has the default permissions that are applied when a bucket is created.

What I want to do is implement a User Data script
that downloads the .jar to the EC2 instance from the S3 location.
I am able to wget the .jar only when I make it public.

If I do not make the .jar public, I am not able to wget it from the EC2 instance,
and I do not want to make the .jar public, for security reasons.

What should I do to get the .jar from S3 without making it public?

Here's my User Data script -

#!/bin/bash

sudo timedatectl set-timezone Asia/Kuala_Lumpur

# Remove artifacts from any previous run
sudo rm -f /home/ubuntu/jarName.jar
sudo rm -f /home/ubuntu/nohup.out

# Download the jar from S3
/usr/local/bin/aws s3api get-object --bucket myDemoBucket --key folderNameDemo/jarName.jar /home/ubuntu/jarName.jar

cd /home/ubuntu/

# Start the application in the background
sudo nohup java -Dspring.profiles.active=uat -jar jarName.jar > nohup.out &

exit
Asked Oct 19 '25 by Ani

1 Answer

Consult the documentation for a better understanding of how EC2 Instance IAM Roles work.

S3 and the other AWS APIs cannot directly identify the instance; requests must still be signed. What IAM instance roles provide is an easily-accessible, always-fresh set of relatively short-lived temporary credentials -- the familiar access key ID and secret access key, plus a session token that must also be used in signing and submitting your request.
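As a rough sketch, you can inspect these temporary credentials yourself from the instance by querying the instance metadata service (the IMDSv1 path is shown here; IMDSv2 additionally requires a session token, and the fallback message is just for illustration):

```shell
# Only works from an EC2 instance with a role attached; elsewhere the
# link-local metadata address is unreachable, so we fall back to a message.
ROLE_URL="http://169.254.169.254/latest/meta-data/iam/security-credentials/"
ROLE_NAME=$(curl -s --max-time 2 "$ROLE_URL" || true)
if [ -n "$ROLE_NAME" ]; then
  # Returns JSON containing AccessKeyId, SecretAccessKey, Token, Expiration
  curl -s --max-time 2 "${ROLE_URL}${ROLE_NAME}"
else
  echo "metadata service not reachable (not running on EC2?)"
fi
```

The aws-cli performs this lookup automatically, which is why no explicit credential configuration is needed on the instance.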

  • With these three elements, you can generate a signed URL at runtime and pass it to wget, or
  • you can fetch the file with aws-cli, which has built-in support for fetching and using the instance role credentials, or
  • if it makes sense in your environment, you can configure the bucket policy to trust the IP address the instance will use when connecting to S3, and allow any request from that address to succeed. This is straightforward if you are using a NAT instance or NAT gateway, or if you are using Elastic IP addresses on the instances. It is not supported if you configure a VPC endpoint for S3.
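The first option can be sketched with the CLI's own presigning support (`aws s3 presign` is a standard aws-cli command; the bucket and key names below are placeholders):

```shell
# Placeholder bucket/key names. Presigning happens locally, but valid
# instance-role credentials are still needed for the resulting URL to work.
BUCKET=myDemoBucket
KEY=folderNameDemo/jarName.jar
URL=$(aws s3 presign "s3://${BUCKET}/${KEY}" --expires-in 900 2>/dev/null) || URL=""
# wget can then fetch the object without the bucket being public, e.g.:
#   wget -O /home/ubuntu/jarName.jar "$URL"
echo "presigned URL: ${URL}"
```

The `--expires-in` value (seconds) bounds how long the URL remains usable, so a short window limits exposure if the URL leaks.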

Perhaps the simplest of the above choices is using the aws-cli.

/usr/local/bin/aws s3api get-object --bucket example-bucket --key path/in/s3/to/my_object.txt /tmp/local-file-name.txt

Note that in scripts, it's usually a good idea to use fully-qualified paths for commands and files, since the current environment's path and working directory may not be known or as expected. In this case, I assume the aws CLI executable has been installed in /usr/local/bin/ and that you want to write the downloaded file to /tmp/. Customize these as appropriate for your environment.
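Since user data runs early in boot, the role's credentials or the network may not be ready on the very first try; one hedged sketch is to wrap the download in a short retry loop (paths and names are examples, as above):

```shell
AWS=/usr/local/bin/aws   # adjust to wherever the CLI is installed on your AMI
for attempt in 1 2 3; do
  if "$AWS" s3api get-object --bucket example-bucket \
       --key path/in/s3/to/my_object.txt /tmp/local-file-name.txt 2>/dev/null; then
    echo "download succeeded on attempt ${attempt}"
    break
  fi
  echo "attempt ${attempt} failed; retrying"
  sleep 2
done
```

Logging each attempt also makes boot-time failures easier to diagnose in the cloud-init output.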

See also http://docs.aws.amazon.com/cli/latest/userguide/installing.html for installation and http://docs.aws.amazon.com/cli/latest/reference/s3api/get-object.html for usage of aws-cli for this application.

Answered Oct 21 '25 by Michael - sqlbot


