I have a bucket with a short lifecycle rule: everything older than 7 days gets deleted. The files that are added have dynamically generated names.
There is one file in the bucket that I would like to exclude from this rule. Is there a way to exclude it so that it is never deleted?
There is no way to exclude individual objects from a rule that matches them. Most likely, you will need to rearrange your objects using prefixes that meet your needs.
There is a hack... which would involve copying the file onto itself frequently enough that it never ages into the rule's window, but that is obviously delicate. The S3 PUT+Copy operation does allow an object to be copied on top of itself non-destructively, entirely server-side, without downloading and re-uploading, as long as the request changes something (such as replacing the metadata), and this resets the object's creation date and therefore the expiration timer.
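A minimal sketch of that self-copy hack using boto3; the bucket and key names are hypothetical, and this assumes your credentials allow GetObject and PutObject on that key:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"          # hypothetical bucket name
key = "keep-forever.dat"      # the one object you want to keep alive

# S3 rejects a copy of an object onto itself unless something changes,
# so we tell it to replace the metadata (with the same values).
# The copy happens entirely server-side; nothing is downloaded.
head = s3.head_object(Bucket=bucket, Key=key)
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    Metadata=head.get("Metadata", {}),
    MetadataDirective="REPLACE",
)
```

You would have to run this on a schedule comfortably shorter than 7 days (say, daily), and a single missed run means the object expires, which is why this approach is delicate.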
But most likely a better solution is to prefix your random filenames with a few static characters. The S3 partition-splitting implementation (the way S3 handles bucket capacity scaling) can apparently work just as well with a static prefix (e.g. images/) followed by random characters as it can with entirely random keys.
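With that layout, you can scope the expiration rule to the prefix so anything outside it is never touched. A sketch of the rule, again with hypothetical names, assuming the dynamically named files all land under uploads/ and the protected file sits outside that prefix:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-uploads-after-7-days",
                # Only keys beginning with "uploads/" match this rule;
                # the protected file at the bucket root is unaffected.
                "Filter": {"Prefix": "uploads/"},
                "Status": "Enabled",
                "Expiration": {"Days": 7},
            }
        ]
    },
)
```

Note that put_bucket_lifecycle_configuration replaces the bucket's entire lifecycle configuration, so include any other rules you want to keep in the same call.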
If the file is small enough that paying for both Glacier and S3 storage doesn't matter, you could also initiate a restore and set Days to a very high number, which keeps the restored copy in S3 for that long.
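A sketch of that restore request with boto3, assuming the object has already transitioned to a Glacier storage class (names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")
s3.restore_object(
    Bucket="my-bucket",
    Key="keep-forever.dat",
    RestoreRequest={
        # Lifetime of the temporary restored copy in S3, in days;
        # a very large value keeps it around effectively forever.
        "Days": 36500,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```

You pay for both the archived copy and the restored copy for the duration, which is why this only makes sense for a small file.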