During a GitLab CI run I got: "fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied"
My bucket policy:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::BUCKET-NAME/*"
    }
  ]
}
In the GitLab CI settings I set the variables used below (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, S3_BUCKET_NAME, DISTRIBUTION_ID).
My .gitlab-ci.yml:
image: docker:latest

stages:
  - build
  - deploy

build:
  stage: build
  image: node:8.11.3
  script:
    - export API_URL="d144iew37xsh40.cloudfront.net"
    - npm install
    - npm run build
    - echo "BUILD SUCCESSFULLY"
  artifacts:
    paths:
      - public/
    expire_in: 20 mins
  environment:
    name: production
  only:
    - master

deploy:
  stage: deploy
  image: python:3.5
  dependencies:
    - build
  script:
    - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
    - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
    - export S3_BUCKET_NAME=$S3_BUCKET_NAME
    - export DISTRIBUTION_ID=$DISTRIBUTION_ID
    - pip install awscli --upgrade --user
    - export PATH=~/.local/bin:$PATH
    - aws s3 sync --acl public-read --delete public $S3_BUCKET_NAME
    - aws cloudfront create-invalidation --distribution-id $DISTRIBUTION_ID --paths '/*'
    - echo "DEPLOYED SUCCESSFULLY"
  environment:
    name: production
  only:
    - master
I'm not sure the accepted answer is actually acceptable, as it simply allows all operations on the bucket. Also the Sid is misleading... ;-)
This AWS article mentions the required permissions for aws s3 sync.
This is what a corresponding policy looks like. Note that s3:ListBucket must be granted on the bucket ARN itself (arn:aws:s3:::BUCKET-NAME, without the /*), which is exactly what the policy in the question is missing and why the ListObjectsV2 call is denied:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketSync",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET-NAME",
        "arn:aws:s3:::BUCKET-NAME/*"
      ]
    }
  ]
}
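If the CI job authenticates as a dedicated IAM user, one way to apply this is to attach it as an inline user policy from the CLI. A minimal sketch, assuming the JSON above is saved as sync-policy.json and the CI user is named gitlab-ci-deployer (both names are illustrative):

# Attach the policy above to the CI user (user and file names are placeholders)
aws iam put-user-policy --user-name gitlab-ci-deployer --policy-name AllowBucketSync --policy-document file://sync-policy.json

# Sanity check: listing the bucket should no longer be denied
aws s3 ls s3://BUCKET-NAME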
I had this problem recently. No matter what I did, no matter what permissions I provided, I kept getting "An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied" when running aws s3 ls <bucket>.
I had forgotten that I have multiple AWS CLI profiles configured in my environment. The aws command was using the default profile, which has a different set of access keys, so I had to pass the --profile flag to the command:
aws s3 ls <bucket> --profile <correct profile>
That worked. It's a niche situation, but maybe it'll help someone out.
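If you suspect the same thing, here is a quick sketch for checking which credentials the CLI is actually picking up (the profile name is just an example):

# Show where each credential setting comes from (env var, config file, etc.)
aws configure list --profile correct-profile

# Show the account and IAM identity behind those credentials
aws sts get-caller-identity --profile correct-profile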