We store files in Amazon S3 and want to keep references to those files in a Document table in Postgres; I am looking for best practices. We use Python/Django, and currently we store the URL that comes back from boto3.s3.key.Key().generate_url(...), but there are a number of issues with that approach.
So, I'm considering storing the Bucket, Key, and Version in three separate fields, and creating the Key as a combination of the DB primary key plus a safely encoded filename. Is there a better approach?
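A minimal sketch of what I have in mind (the model and field names are illustrative, and it assumes boto3's generate_presigned_url rather than the old Key.generate_url):

```python
import boto3
from django.db import models


class Document(models.Model):
    """Stores a reference to an S3 object instead of a full URL."""
    # S3 coordinates stored separately so URLs can be regenerated at any time.
    s3_bucket = models.CharField(max_length=255)
    s3_key = models.CharField(max_length=1024)
    s3_version_id = models.CharField(max_length=255, blank=True, default="")
    original_filename = models.CharField(max_length=255)

    def presigned_url(self, expires_in=3600):
        """Generate a short-lived download URL on demand."""
        s3 = boto3.client("s3")
        params = {"Bucket": self.s3_bucket, "Key": self.s3_key}
        if self.s3_version_id:
            params["VersionId"] = self.s3_version_id
        return s3.generate_presigned_url(
            "get_object", Params=params, ExpiresIn=expires_in
        )
```

Storing the pieces separately means an expired or stale URL is never a problem, since a fresh one can be generated whenever it is needed.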
Consider splitting read, write, and delete access. Allow write access only to users or services that generate and write data to S3 but don't need to read or delete objects. Define an S3 lifecycle policy to remove objects on a schedule instead of through manual intervention; see Managing your storage lifecycle in the S3 documentation.
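As a hedged example of the lifecycle part, a rule like the one below could transition and then expire objects automatically (the bucket name, prefix, and retention periods are placeholders to adjust to your own requirements):

```python
import boto3

s3 = boto3.client("s3")

# Illustrative rule: move objects under "documents/" to Glacier after 90 days
# and delete them after roughly 7 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-document-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-documents",
                "Filter": {"Prefix": "documents/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```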
Data is stored as objects within resources called “buckets”, and a single object can be up to 5 terabytes in size.
One Zone-IA, Glacier and Glacier Deep Archive are the most appropriate Amazon S3 storage classes for long-term archival. The Glacier tiers are the best for information that must be retained for years due to tax laws and regulatory guidelines.
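For example, a new upload can be written directly into an archival storage class at upload time rather than transitioned later (the local path, bucket, and key below are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Upload straight into Glacier Deep Archive instead of S3 Standard.
s3.upload_file(
    "local/archive.pdf",           # placeholder local path
    "my-document-bucket",          # placeholder bucket
    "archive/2024/archive.pdf",    # placeholder key
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)
```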
Not sure if it's the best-est approach, but we store a UUID (plus the type). That way you can at least:
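A rough sketch of that idea, assuming a Django model where the UUID doubles as the basis of the S3 key (the field names and key prefix are illustrative):

```python
import uuid

from django.db import models


class StoredFile(models.Model):
    """Reference an S3 object by a UUID that is independent of the filename."""
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    content_type = models.CharField(max_length=255)
    original_filename = models.CharField(max_length=255)

    @property
    def s3_key(self):
        # The object key is derived from the UUID, so renaming the file
        # in the application never requires touching S3.
        return f"uploads/{self.id}"
```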