How do you make an AWS S3 public folder private again?
I was testing out some staging data, so I made the entire folder public within a bucket. I'd like to restrict its access again. So how do I make the folder private again?
Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/. In the Bucket name list, choose the name of the bucket you want. Choose Permissions, then choose Edit to change the public access settings for the bucket.
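If you'd rather do this from code than click through the console, here's a minimal boto3 sketch that turns on all four Block Public Access settings for a bucket (the bucket name is a placeholder; substitute your own):

#!/usr/bin/env python
# Sketch: enable all four Block Public Access settings on one bucket.
# "my-staging-bucket" is a placeholder name.
import boto3

s3client = boto3.client("s3")
s3client.put_public_access_block(
    Bucket="my-staging-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject requests that set public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject new public bucket policies
        "RestrictPublicBuckets": True,  # restrict access granted by public policies
    },
)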
By default, all S3 buckets are private and can be accessed only by users who are explicitly granted access. Restrict access to your S3 buckets or objects by doing the following: Writing IAM user policies that specify the users that can access specific buckets and objects.
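As a sketch of that IAM-policy approach, the following boto3 snippet attaches an inline policy to a user that allows read access to a single bucket only. The user name, policy name, and bucket name are all hypothetical:

#!/usr/bin/env python
# Sketch: grant one IAM user read-only access to a single bucket.
# "staging-user", "staging-s3-read", and "my-staging-bucket" are hypothetical names.
import json

import boto3

iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-staging-bucket",
            "arn:aws:s3:::my-staging-bucket/*",
        ],
    }],
}
iam.put_user_policy(
    UserName="staging-user",
    PolicyName="staging-s3-read",
    PolicyDocument=json.dumps(policy),
)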
Thanks! S3 isn't a Linux file system. It won't retain any Linux permissions because they don't apply to S3.
Block Public Access acts as an additional layer of protection to prevent Amazon S3 buckets from being made public accidentally. By default, all content in Amazon S3 is private. You can then make content accessible in several different ways: At the bucket-level, by creating a Bucket Policy on the desired bucket.
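Conversely, if the folder was exposed through a bucket policy rather than through object ACLs, deleting that policy is one way to make it private again. A minimal sketch, with a placeholder bucket name:

#!/usr/bin/env python
# Sketch: remove a bucket policy that may be granting public access.
# "my-staging-bucket" is a placeholder name.
import boto3

s3client = boto3.client("s3")
# Deleting the bucket policy removes any public grants it contained;
# object ACLs set via "Make public" are unaffected and must be reset separately.
s3client.delete_bucket_policy(Bucket="my-staging-bucket")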
The accepted answer works well - it seems to set ACLs recursively on a given S3 path too. However, this can also be done more easily with a third-party tool called s3cmd - we use it heavily at my company, and it seems to be fairly popular within the AWS community.
For example, suppose you had this kind of S3 bucket and dir structure: s3://mybucket.com/topleveldir/scripts/bootstrap/tmp/. Now suppose you had marked the entire scripts "directory" as public using the Amazon S3 console.
To make the entire scripts "directory-tree" recursively (i.e. including subdirectories and their files) private again:
s3cmd setacl --acl-private --recursive s3://mybucket.com/topleveldir/scripts/
It's also easy to make the scripts "directory-tree" recursively public again if you want:
s3cmd setacl --acl-public --recursive s3://mybucket.com/topleveldir/scripts/
You can also choose to set the permission/ACL only on a given S3 "directory" (i.e. non-recursively) by simply omitting --recursive in the above commands.
For s3cmd to work, you first have to provide your AWS access and secret keys to s3cmd via s3cmd --configure (see http://s3tools.org/s3cmd for more details).
From what I understand, the 'Make public' option in the management console recursively adds a public grant for every object 'in' the directory. You can see this by right-clicking on one file, then clicking on 'Properties'. You then need to click on 'Permissions', and there should be a line:
Grantee: Everyone [x] open/download [] view permissions [] edit permission.
If you upload a new file within this directory, it won't have this public access set and will therefore be private.
You need to remove public read permission one by one, either manually if you only have a few keys or by using a script.
I wrote a small script in Python with the 'boto3' module to recursively remove the 'public read' attribute of all keys in an S3 folder:
#!/usr/bin/env python
# Remove the public-read grant from every key under a given prefix.
# usage: remove_public.py bucketName folderName
import sys

import boto3

BUCKET = sys.argv[1]
PATH = sys.argv[2]

s3client = boto3.client("s3")
paginator = s3client.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket=BUCKET, Prefix=PATH)
for page in page_iterator:
    # a prefix with no matching objects returns pages without a 'Contents' key
    for k in page.get('Contents', []):
        s3client.put_object_acl(ACL='private', Bucket=BUCKET, Key=k['Key'])
I tested it in a folder with (only) 2 objects and it worked. If you have lots of keys, it may take some time to complete, and a parallel approach might be necessary.
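If you do need that parallel approach, here's a minimal sketch using a thread pool, with the same bucketName/folderName arguments as the script above (the worker count is an arbitrary choice):

#!/usr/bin/env python
# Sketch: parallel version of the ACL reset, using a thread pool.
# usage: remove_public_parallel.py bucketName folderName
import sys
from concurrent.futures import ThreadPoolExecutor

import boto3

BUCKET = sys.argv[1]
PATH = sys.argv[2]

s3client = boto3.client("s3")

def make_private(key):
    # put_object_acl is an independent per-object call, so it parallelizes cleanly
    s3client.put_object_acl(ACL='private', Bucket=BUCKET, Key=key)

paginator = s3client.get_paginator('list_objects_v2')
with ThreadPoolExecutor(max_workers=10) as pool:
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PATH):
        for obj in page.get('Contents', []):
            pool.submit(make_private, obj['Key'])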