
S3 per-object expiry

Tags:

amazon-s3

I know how to expire objects in an S3 bucket using object expiration rules for a given prefix; however, for my purposes, I would like to set the expiry date programmatically on a per-object basis.

The Java SDK seems to indicate that this is possible, as it has a setExpirationTime method. However, whenever I set an expiration date using this method, nothing seems to happen and the object never expires. Additionally, looking at the object's properties in the AWS console, no expiry appears to be set.

Is per-file expiration not supported? Are there any extra steps I need to take to get it to work? If per-file expiration is not supported, is it possible to exclude a file that matches an expiration prefix from being expired?

Thanks in advance!

asked Aug 29 '12 by Mac Adada

People also ask

What is expired object in S3?

S3's new Object Expiration function allows you to define rules to schedule the removal of your objects after a pre-defined time period. The rules are specified in the Lifecycle Configuration policy that you apply to a bucket.

What is the life cycle of S3 bucket?

An S3 Lifecycle configuration is an XML file that consists of a set of rules with predefined actions that you want Amazon S3 to perform on objects during their lifetime. You can also configure the lifecycle by using the Amazon S3 console, REST API, AWS SDKs, and the AWS Command Line Interface (AWS CLI).

How often can you expect to lose data if you store 10000000 objects in S3?

Amazon S3 is designed for 99.999999999% (11 9s) of data durability. At that level of durability, if you store 10,000,000 objects in Amazon S3, you can expect to lose a single object only once every 10,000 years!

What is the storage limit in S3 for individual objects?

Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. The largest object that can be uploaded in a single PUT is 5 GB. For objects larger than 100 MB, customers should consider using the Multipart Upload capability.


2 Answers

It doesn't look like per-object expiration is supported; as you have found, expiration is driven by a per-bucket lifecycle configuration, with up to 100 rules per configuration.

A bucket has one lifecycle configuration. A lifecycle configuration can have up to 100 rules.

The lifetime value must be a nonzero positive integer. Amazon S3 calculates expiration time by adding the expiration period specified in the rule to the object creation time and rounding the resulting time to the next day midnight UTC.
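For reference, a minimal prefix-based expiration rule in a bucket's lifecycle configuration looks something like the following sketch (the `logs/` prefix, rule ID, and 30-day period are illustrative placeholders, and the element layout follows the current lifecycle configuration schema):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>expire-logs-after-30-days</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>30</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```

Note that the rule applies to every object under the prefix; there is no per-object element in this schema, which is why setExpirationTime on an individual object has no effect.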

If per-file expiration is not supported, is it possible to exclude a file that matches an expiration prefix from being expired?

It doesn't look like you can overlap rules, either.

Take care to ensure the rules don't overlap. For example, the following lifecycle configuration has a rule that sets objects with the prefix "documents" to expire after 30 days. The configuration also has another rule that sets objects with the prefix "documents/2011" to expire after 365 days. In this case, Amazon S3 returns an error message.

answered Sep 23 '22 by GalacticJello


An alternative way to accomplish this task is to make use of object tags.

Set a unique tag on each object whose lifetime you want to control, then create a lifecycle configuration rule for each such object and reference the object-specific tag in the rule's filter element.

The tag value can simply be the object's key. This approach also lets you write a rule for a subset of objects that don't share a common prefix.

You can find more info about lifecycle configuration here: http://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html
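As a sketch of this approach, the tag-filtered rule can be assembled as a plain data structure and then applied with an SDK. The bucket name, tag key, object key, and 7-day period below are all assumptions for illustration; with boto3, the resulting dict would be passed to `put_bucket_lifecycle_configuration`:

```python
# Sketch: build one lifecycle rule per object, filtering on a unique tag.
# The tag key "expiry-group", the object key, and the 7-day period are
# illustrative placeholders, not values from the question.

def tag_expiry_rule(object_key, days):
    """Return a lifecycle rule that expires objects carrying a tag
    whose value is the object's own key."""
    return {
        "ID": f"expire-{object_key}",
        "Filter": {"Tag": {"Key": "expiry-group", "Value": object_key}},
        "Status": "Enabled",
        "Expiration": {"Days": days},
    }

# One rule per object; the list of rules forms the bucket's configuration.
config = {"Rules": [tag_expiry_rule("reports/2012/summary.csv", 7)]}

# With boto3 this would be applied roughly as follows (not run here):
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=config)
```

Remember to tag each object (at upload time or afterwards) with the same key/value the rule filters on, and keep in mind that a lifecycle configuration has a limited number of rules, so this does not scale to very large numbers of individually-expiring objects.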

answered Sep 24 '22 by demon36