I've created a hierarchy in S3 via the AWS S3 Management Console. If I run the following code to list the bucket:
    AmazonS3 s3 = new AmazonS3Client(CRED);
    ListObjectsRequest lor = new ListObjectsRequest()
            .withBucketName("myBucket")
            .withPrefix("code/");
    ObjectListing objectListing = s3.listObjects(lor);
    for (S3ObjectSummary summary : objectListing.getObjectSummaries()) {
        System.out.println(summary.getKey());
    }
I get:
    code/
    code/03000000-0001-0000-0000-000000000000/
    code/03000000-0001-0000-0000-000000000000/special.js
    code/03000000-0001-0000-0000-000000000000/test.js
    code/03000000-0002-0000-0000-000000000000/
Which is exactly what I would expect. If I add a delimiter, though, so that I only list the content directly under "code/", I no longer get any sub-"directories" back.

Changing the line above (adding withDelimiter() on the end) to:
    ListObjectsRequest lor = new ListObjectsRequest()
            .withBucketName("myBucket")
            .withPrefix("code/")
            .withDelimiter("/");
And I now only get:
    code/
I know that S3 doesn't have "directories", just delimited keys, but this behaviour seems odd. How would I list only what is immediately below "code/"?
Directories don't actually exist within S3 buckets. The entire file structure is just one flat, single-level container of objects. The illusion of directories is created by giving the objects keys like dirA/dirB/file.
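As a minimal sketch of that idea (assuming a bucket called myBucket and the same s3 client as in the question), writing an object whose key contains slashes is all it takes to make the console show "folders":

    // No mkdir needed: the key itself carries the apparent hierarchy.
    // The console will display dirA/ and dirA/dirB/ as folders.
    s3.putObject("myBucket", "dirA/dirB/file", "hello");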
The missing "directories" are still being returned, just not as object summaries. When a delimiter is specified, S3 condenses the keys under it into "common prefixes", which you retrieve separately via getCommonPrefixes():
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/ObjectListing.html#getCommonPrefixes%28%29
    public List<String> getCommonPrefixes()
Gets the common prefixes included in this object listing. Common prefixes are only present if a delimiter was specified in the original request.
Each common prefix represents a set of keys in the S3 bucket that have been condensed and omitted from the object summary results. This allows applications to organize and browse their keys hierarchically, similar to how a file system organizes files into directories.
For example, consider a bucket that contains the following keys:
"foo/bar/baz"
"foo/bar/bash"
"foo/bar/bang"
"foo/boo"If calling listObjects with the prefix="foo/" and the delimiter="/" on this bucket, the returned S3ObjectListing will contain one entry in the common prefixes list ("foo/bar/") and none of the keys beginning with that common prefix will be included in the object summaries list.
Returns: The list of common prefixes included in this object listing, which might be an empty list if no common prefixes were found.
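So to list what is immediately below "code/", keep the delimiter and read the common prefixes as well as the object summaries. A minimal sketch against the same bucket and client as in the question:

    ListObjectsRequest lor = new ListObjectsRequest()
            .withBucketName("myBucket")
            .withPrefix("code/")
            .withDelimiter("/");
    ObjectListing listing = s3.listObjects(lor);

    // The immediate "sub-directories" come back as common prefixes...
    for (String prefix : listing.getCommonPrefixes()) {
        System.out.println(prefix);
    }

    // ...and any objects stored directly under code/ remain in the summaries.
    for (S3ObjectSummary summary : listing.getObjectSummaries()) {
        System.out.println(summary.getKey());
    }

With the keys shown in the question, the common prefixes loop should print code/03000000-0001-0000-0000-000000000000/ and code/03000000-0002-0000-0000-000000000000/, while the summaries loop prints only code/ (the zero-byte placeholder object the console created for the folder), which is exactly what you saw in your delimiter run.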