If I upload a file to S3 with a filename identical to that of an object already in the bucket, it overwrites it. What options exist to avoid overwriting files with identical filenames? I enabled versioning on my bucket thinking it would solve the problem, but objects still appear to be overwritten.
By default, when you upload a file with the same key, it overwrites the existing object. If you want to keep the previous file available, you need to enable versioning on the bucket.
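Versioning can be turned on with a single API call. Here is a minimal sketch using boto3; the bucket name is hypothetical:

```python
import boto3

# Hypothetical bucket name for illustration.
BUCKET = "my-example-bucket"

s3 = boto3.client("s3")

# Turn on versioning so that uploads with the same key create new
# versions instead of replacing the object outright.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)
```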
With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model. Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
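As a rough sketch (the bucket name, retention mode, and retention period are assumptions, and a LocationConstraint may be needed outside us-east-1), Object Lock must be enabled when the bucket is created, after which a default retention rule can be applied:

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled at bucket creation; this also
# turns on versioning for the bucket automatically.
s3.create_bucket(
    Bucket="my-worm-bucket",  # hypothetical name
    ObjectLockEnabledForBucket=True,
)

# Default retention rule: in COMPLIANCE mode, protected object
# versions cannot be deleted by any user (including root) for 30 days.
s3.put_object_lock_configuration(
    Bucket="my-worm-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```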
Amazon S3 has a global namespace (i.e., no two S3 buckets can have the same name). It's similar to how DNS works, where each domain name must be unique. Therefore, you need to use a unique bucket name when creating S3 buckets.
S3 bucket creation prerequisites: bucket names must be globally unique, so it is impossible to create buckets with the same name, even across different accounts.
My comment from above doesn't work. I thought the WRITE ACL would apply to objects as well, but it only works on buckets.
Since you enabled versioning, your objects aren't actually overwritten; each upload with the same key creates a new version. But if you don't specify a version in your GET request or URL, the latest version is returned. This means that when you put an object into S3, you need to save the VersionId returned in the response in order to retrieve that earlier object later.
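A minimal boto3 sketch of that workflow, with a hypothetical bucket and key, might look like this:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # hypothetical
KEY = "report.csv"            # hypothetical

# First upload: keep the VersionId that PutObject returns.
first = s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"original contents")
first_version = first["VersionId"]

# Later upload with the same key becomes the new "latest" version.
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"newer contents")

# A plain GET returns the latest version ...
latest = s3.get_object(Bucket=BUCKET, Key=KEY)

# ... but passing the saved VersionId retrieves the original object.
original = s3.get_object(Bucket=BUCKET, Key=KEY, VersionId=first_version)
print(original["Body"].read())  # b"original contents"
```

If you didn't save the VersionId at upload time, you can still recover it later with list_object_versions on the bucket.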
See Amazon S3 ACL for read-only and write-once access for more.
You can also configure an IAM user with limited permissions (a sketch of such a policy appears below). Writes are still writes (i.e., updates), but using an IAM user is a best practice anyway.
The owner (i.e., your "long-term access key and secret key") always has full control unless you go completely out of your way to disable it.
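As a sketch of the limited-permissions idea (the user name, policy name, and bucket ARN are placeholders), an inline policy can grant upload and read access while withholding every delete action, which pairs well with versioning:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical user and bucket names for illustration.
USER = "uploader"
BUCKET_ARN = "arn:aws:s3:::my-example-bucket"

# Allow the user to upload and read objects, but grant no Delete*
# permissions. Note that s3:PutObject still covers overwrites (a write
# is a write), so this is most useful combined with versioning: earlier
# versions stay recoverable and this user cannot remove them.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": f"{BUCKET_ARN}/*",
        }
    ],
}

iam.put_user_policy(
    UserName=USER,
    PolicyName="s3-upload-and-read-only",
    PolicyDocument=json.dumps(policy),
)
```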