
Does Amazon S3 have a limit to MaxKeys when calling ListObjects?

Tags:

amazon-s3

I always thought there was a 1,000-key limit when calling ListObjects in Amazon S3. However, I just made a call and it's pulling 1,080 keys, even though their docs say the limit is 1,000.

I tried setting MaxKeys to 1,000, but it still pulls 1,080 results. My code:

$iterator = $s3->getIterator('ListObjects', array(
    'Bucket' => 'BUCKETNAME',
    'MaxKeys' => 1000
));

It is, however, also pulling folders as keys, but I certainly don't have 80 of them.

Two questions:

  1. Is my code wrong?
  2. Has Amazon lifted the 1000 key restriction? Is there a new limit?

Thanks in advance!

Asked by Ben Sinclair, Aug 29 '13

People also ask

Does AWS S3 have a limit?

The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. The largest object that can be uploaded in a single PUT is 5 GB.

How many requests can S3 handle?

Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned prefix. There are no limits to the number of prefixes in a bucket.

Which Amazon S3 bucket policy can limit access to a specific object?

You can use the NotPrincipal element of an IAM or S3 bucket policy to limit resource access to a specific set of users.

What is the maximum throughput for S3 put Post copy delete operations on a per prefix basis?

S3 can achieve at least 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket.


2 Answers

The S3 API limit hasn't changed: a single ListObjects response is still capped at a maximum of 1,000 keys.

With v1 of the PHP SDK, a single request returned up to 1,000 keys, and to get the rest you had to issue a follow-up request with the Marker option.

The newer PHP SDK (v2) has a concept of Iterators, which abstracts away these multiple consecutive requests. Your getIterator call transparently pages through the bucket until every key has been returned, which is why you see 1,080 results even with MaxKeys set to 1,000: MaxKeys only caps the page size of each underlying request, not the total. This makes getting ALL of your objects much easier.
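The marker-based pagination the iterator hides can be sketched as follows. This is a minimal in-memory simulation, not the real SDK: the names `FakeS3`, `list_objects`, and `list_all_objects` are illustrative stand-ins for what the service and iterator do.

```python
# Sketch of marker-based ListObjects pagination, simulated with an
# in-memory stub. FakeS3/list_objects are illustrative names, not the
# real SDK API.

class FakeS3:
    """Serves at most max_keys keys per request, like the S3 API."""
    def __init__(self, keys, max_keys=1000):
        self.keys = sorted(keys)          # S3 returns keys in lexical order
        self.max_keys = max_keys

    def list_objects(self, marker=""):
        # Return keys strictly after `marker`, up to the page limit.
        remaining = [k for k in self.keys if k > marker]
        page = remaining[:self.max_keys]
        return {"Contents": page, "IsTruncated": len(remaining) > len(page)}

def list_all_objects(client):
    """What an SDK iterator does: follow markers until not truncated."""
    all_keys, marker = [], ""
    while True:
        resp = client.list_objects(marker=marker)
        all_keys.extend(resp["Contents"])
        if not resp["IsTruncated"]:
            return all_keys
        marker = resp["Contents"][-1]     # last key becomes the next marker

# 1,080 keys come back in two requests (1,000 + 80), matching the
# behaviour the asker observed.
keys = [f"key-{i:04d}" for i in range(1080)]
print(len(list_all_objects(FakeS3(keys))))  # 1080
```

Each individual request still respects the 1,000-key cap; the iterator simply stitches the pages together.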

Answered by dcro, Oct 21 '22


By default the API returns up to 1,000 key names per response; a response may contain fewer keys but will never contain more. A better approach is to use the newer ListObjectsV2 API:

    // Collect every object under a prefix, following continuation tokens
    List<S3ObjectSummary> docList = new ArrayList<>();
    ListObjectsV2Request req = new ListObjectsV2Request()
            .withBucketName(bucketName)
            .withPrefix(folderFullPath);
    ListObjectsV2Result listing;
    do {
        listing = this.getAmazonS3Client().listObjectsV2(req);
        docList.addAll(listing.getObjectSummaries());
        String token = listing.getNextContinuationToken();
        req.setContinuationToken(token);
        LOG.info("Next continuation token for listing documents is: " + token);
    } while (listing.isTruncated());
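The continuation-token flow in the Java snippet above can be simulated in a few lines. Again this is a sketch with a hypothetical stub (`FakeS3V2` is not a real class); the point is that ListObjectsV2 hands back an opaque cursor instead of making you track the last key yourself.

```python
# Sketch of the ListObjectsV2 continuation-token loop from the Java
# snippet above, simulated in memory. FakeS3V2 is illustrative only.

class FakeS3V2:
    def __init__(self, keys, max_keys=1000):
        self.keys = sorted(keys)
        self.max_keys = max_keys

    def list_objects_v2(self, continuation_token=None):
        # The real token is opaque; here it is simply a start index.
        start = int(continuation_token or 0)
        page = self.keys[start:start + self.max_keys]
        truncated = start + len(page) < len(self.keys)
        return {
            "Contents": page,
            "IsTruncated": truncated,
            "NextContinuationToken": str(start + len(page)) if truncated else None,
        }

def list_all_v2(client):
    keys, token = [], None
    while True:  # same do/while shape as the Java code above
        resp = client.list_objects_v2(continuation_token=token)
        keys.extend(resp["Contents"])
        if not resp["IsTruncated"]:
            return keys
        token = resp["NextContinuationToken"]

print(len(list_all_v2(FakeS3V2([f"doc-{i}" for i in range(2500)]))))  # 2500
```

With 2,500 keys the loop makes three requests (1,000 + 1,000 + 500), stopping when IsTruncated is false, exactly as the Java do/while does.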
Answered by atul jha, Oct 21 '22