How can I access my S3 bucket from a container without having my AWS credentials in the code?
My code also auto-deploys, so having the credentials as an environment variable is no good either (the deployment script lives in the repository, and the credentials shouldn't be there either).
I tried to look into IAM roles, but couldn't wrap my head around how they would help my use case.
You can access an S3 bucket privately, without putting credentials in your code, when you access the bucket from an Amazon Virtual Private Cloud (Amazon VPC). However, make sure the VPC has a VPC endpoint for Amazon S3 and that the bucket policy only allows requests arriving through that endpoint.
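As an illustrative sketch only: this applies the common "deny anything not coming through my VPC endpoint" bucket-policy pattern with boto3. The bucket name and the vpce- endpoint ID are placeholders you would replace with your own.

```python
import json
import boto3

BUCKET = "my-bucket"                      # placeholder bucket name
VPC_ENDPOINT_ID = "vpce-0123456789abcdef0"  # placeholder S3 VPC endpoint ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny any request to this bucket that does not arrive through
            # the given VPC endpoint. Note: this also blocks access from
            # outside the VPC (including the console), so apply with care.
            "Sid": "AccessViaVpcEndpointOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": VPC_ENDPOINT_ID}},
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```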
If the objects live in another AWS account, cross-account IAM roles simplify provisioning access to S3 objects stored across multiple buckets: you don't need to manage a separate policy on every bucket, and the role grants access to objects owned or uploaded by the other AWS account or by AWS services.
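A rough sketch of that cross-account flow with boto3, assuming the other account has created a role (here hypothetically named S3CrossAccountAccess) whose trust policy allows your role to assume it:

```python
import boto3

# Placeholder ARN of the role in the account that owns the bucket.
ROLE_ARN = "arn:aws:iam::111122223333:role/S3CrossAccountAccess"

# Assume the cross-account role; the caller's own identity comes from
# its instance profile or task role, so no keys are hardcoded here.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="s3-cross-account",
)["Credentials"]

# Use the temporary credentials to talk to the other account's bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
resp = s3.list_objects_v2(Bucket="bucket-in-other-account")  # placeholder bucket
print(resp.get("KeyCount", 0))
```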
If you are running containers on an EC2 instance directly (without the ECS service), you need to create an IAM role and attach an appropriate policy to it, such as AmazonS3FullAccess if you need full access to S3, or AmazonS3ReadOnlyAccess if you only need to read the contents of your buckets. After you have created this role, attach it to the EC2 instance where you are running your container (via an instance profile).
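A minimal sketch of that setup with boto3; the role name, instance-profile name and instance ID are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")
ROLE_NAME = "container-s3-role"          # placeholder role name
INSTANCE_PROFILE = "container-s3-profile"  # placeholder profile name

# Trust policy that lets EC2 instances assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName=ROLE_NAME, AssumeRolePolicyDocument=json.dumps(trust))

# Swap in AmazonS3ReadOnlyAccess if read-only access is enough.
iam.attach_role_policy(
    RoleName=ROLE_NAME,
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

# EC2 instances receive a role through an instance profile.
iam.create_instance_profile(InstanceProfileName=INSTANCE_PROFILE)
iam.add_role_to_instance_profile(
    InstanceProfileName=INSTANCE_PROFILE, RoleName=ROLE_NAME
)

# Attach the profile to the running instance (ID is a placeholder).
ec2 = boto3.client("ec2")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": INSTANCE_PROFILE},
    InstanceId="i-0123456789abcdef0",
)
```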
If you are using the ECS service, attach the role to the task definition in which you define your containers (a task role). It is still possible to attach a role to the underlying EC2 container instance for the container to assume - but only with the EC2 launch type, not Fargate - yet it is preferable to be as granular as possible, with individual tasks having their own roles.
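For example, a hedged sketch of registering a Fargate task definition with a task role via boto3; the family, image and role ARN are placeholders (note that taskRoleArn is what your application code assumes, while the separate executionRoleArn is used by the ECS agent itself, e.g. for pulling private images):

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="my-app",                                   # placeholder family
    taskRoleArn="arn:aws:iam::123456789012:role/container-s3-role",  # placeholder
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "app",
        "image": "public.ecr.aws/docker/library/python:3.12",  # placeholder image
        "essential": True,
    }],
)
```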
You should never add AWS credentials to your code or store them on an EC2 instance or in a container; that is exactly what roles are for.
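With a role in place, the code inside the container creates a client with no keys at all; boto3's default credential chain picks up the temporary, auto-rotated credentials exposed by the instance profile (EC2) or the task role (ECS). The bucket and key below are placeholders:

```python
import boto3

# No access keys anywhere in code or environment: credentials come from
# the instance profile / task role via the credentials endpoint.
s3 = boto3.client("s3")

# Placeholder bucket, key and local path.
s3.download_file("my-bucket", "config/app.json", "/tmp/app.json")
```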