I'm trying to change the CacheControl attribute of a file that is already in S3. As far as I can tell, my best option is to copy the object onto the same path while changing its metadata. The code is pretty simple:
import boto3

s3_resource = boto3.resource('s3')

file_key = 'index.html'
s3_object = s3_resource.Object(bucket_name, file_key)
s3_object.copy_from(
    CopySource={'Bucket': bucket_name, 'Key': file_key},
    CacheControl='no-cache',
    MetadataDirective='REPLACE',
)
This code doesn't work without MetadataDirective='REPLACE', but with it the file loses all of its other metadata. I could set all the metadata manually, but that could cause issues in the future.
Is there a way to change one metadata value while keeping all the others?
I ran into this as well and was able to piece together a solution from the documentation and other people's answers. The key to doing this without losing the existing metadata is to explicitly pass the existing object's metadata back in:
bucket_name = "xxxxx"
key = "yyyyy"
s3 = boto3.resource("s3",
aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
region_name=AWS_REGION,
)
s3_object = s3.Object(bucket_name, key)
s3_object.copy_from(
CopySource={"Bucket": bucket_name, "Key": key},
CacheControl="max-age=86400",
Metadata=s3_object.metadata, # This copies existing metadata
MetadataDirective="REPLACE",
)
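One caveat worth checking for your own objects (this is an assumption about how the CopyObject REPLACE directive behaves, not something from the original answer): system metadata such as Content-Type can also be replaced by the copy, so you may want to pass it through explicitly too. A minimal sketch, reusing bucket_name, key, and s3_object from above, followed by a reload to confirm the change:

s3_object.copy_from(
    CopySource={"Bucket": bucket_name, "Key": key},
    CacheControl="max-age=86400",
    ContentType=s3_object.content_type,  # carry over the Content-Type as well
    Metadata=s3_object.metadata,         # carry over user-defined metadata
    MetadataDirective="REPLACE",
)

# Reload the object's attributes and confirm the result.
s3_object.reload()
print(s3_object.cache_control)  # "max-age=86400"
print(s3_object.metadata)       # original user-defined metadata, unchanged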