I want to secure files located in S3 buckets, and ensure that no sensitive files are being shared.
I am aware of various ways (in the console and using scripts) to view which buckets have public permissions.
However, objects can be granted anonymous read permissions even when they are in a private bucket.
It can be hard to track/audit files/objects which are publicly readable because I cannot see any way to find them other than browsing through every single path in the AWS console.
Is there a way to list all the objects which have anonymous (read) permissions in a bucket? Any method would be fine, including the AWS CLI, Boto, REST, etc.
I considered using an anonymous AWS CLI profile, but that would not allow listing bucket contents, so it could only be used to test objects individually.
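For example, I can spot-check a single object anonymously with the CLI's --no-sign-request flag (the bucket and key here are placeholders):

# Succeeds only if the object is publicly readable; a private object returns 403.
aws s3api head-object --bucket my-bucket-name --key path/to/file.txt --no-sign-request

But that requires already knowing every key, which is exactly the problem.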
I suppose I could create a script using Boto (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#objectacl). Is that the only way, or is there an official method such as an AWS CLI command?
What you're talking about is object ACLs. In the spirit of job-to-be-done, I want to point out that you can configure a bucket to deny public object ACLs; this is probably among the best enterprise practices for prevention. One of the best enterprise practices for continuous auditing and verification is described here.
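For example (a sketch, assuming the newer S3 Block Public Access feature is available to you; the bucket name is a placeholder), you can tell S3 to both reject new public ACLs and ignore any existing ones:

# Reject new public ACLs/policies on this bucket and ignore any existing ones.
aws s3api put-public-access-block \
    --bucket my-bucket-name \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

With BlockPublicAcls and IgnorePublicAcls set, a stray public-read object ACL can no longer make an object public.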
Update: If you're interested in monitoring & auditing bucket-level ACLs, take a look at this managed AWS Config solution.
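As a rough sketch of that route with the CLI (assuming AWS Config is already recording your S3 resources, and using the AWS-managed rule identifier S3_BUCKET_PUBLIC_READ_PROHIBITED):

# Continuously flag any bucket that allows public reads.
aws configservice put-config-rule --config-rule '{
    "ConfigRuleName": "s3-bucket-public-read-prohibited",
    "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"}
}'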
However, if you're looking for a bash script/tool using the aws-cli (which is the tag for this question), this will do the trick:
bucket="my-bucket-name"
search_term="AllUsers"
for key in $(aws s3 ls --recursive s3://$bucket/ | awk '{$1=$2=$3=""; print $0}' | sed 's/^[ \t]*//'); do
acl=$(aws s3api get-object-acl --bucket $bucket --key $key) &&
result_found=$(echo $acl | grep -q $search_term) &&
if $result_found; then
echo $key;
echo $acl;
fi;
done
Here's what it does: I generalized the problem to "echo all keys and their ACLs within a bucket iff that ACL contains a given $search_term", so if others are stumbling across a similar but subtly different problem, this solution will still be helpful, insofar as they change $search_term to something that suits their problem.
Ideally (assuming you want no public objects), running this should print nothing.
Keep in mind, this solution won't scale well for massive buckets with tons of objects, since it makes one sequential get-object-acl call per key.
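If you want to stay in bash but need more throughput, here's a rough sketch of the same loop parallelized with xargs (assuming GNU xargs; 8 workers, the same $bucket variable as above, and AllUsers hard-coded as the search term):

# Run up to 8 get-object-acl calls in parallel; print only keys granted to AllUsers.
aws s3 ls --recursive "s3://$bucket/" \
    | awk '{$1=$2=$3=""; sub(/^ +/, ""); print}' \
    | xargs -P 8 -I {} sh -c \
        'aws s3api get-object-acl --bucket "$1" --key "$2" | grep -q AllUsers && echo "$2"' _ "$bucket" {}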
Here's a multi-threaded solution in Ruby:
# Gemfile
source 'https://rubygems.org' do
  gem 'aws-sdk'
  gem 'thread'
end
# find_public_s3_objects.rb
require 'aws-sdk-s3' # v2: require 'aws-sdk'
require 'thread/pool'

BUCKET = ARGV[0] or raise("expected bucket")

s3 = Aws::S3::Resource.new(region: 'us-east-1')
count = 0
pool = Thread.pool 8
mutex = Mutex.new

s3.bucket(BUCKET).objects.each do |object|
  pool.process do
    grants = object.acl.grants
    # Progress counter is shared across threads, so guard it with the mutex.
    mutex.synchronize do
      count += 1
      $stderr.write "#{count}..." if count % 100 == 0
    end
    # A grantee URI containing AllUsers means the object is publicly readable.
    if grants.map { |x| x.grantee.uri }.any? { |x| x =~ /AllUsers/ }
      mutex.synchronize { puts object.key }
    end
  end
end

pool.shutdown
Then you run it like this:
bundle exec ruby find_public_s3_objects.rb my-bucket-name
It's much faster than the Bash-based solution provided above.
Originally from Faraday's blog.