Files uploaded to Amazon S3 that are smaller than 5GB have an ETag that is simply the MD5 hash of the file, which makes it easy to check if your local files are the same as what you put on S3.
But if your file is larger than 5GB, then Amazon computes the ETag differently.
For example, I did a multipart upload of a 5,970,150,664 byte file in 380 parts. S3 now shows it to have an ETag of 6bcf86bed8807b8e78f0fc6e0a53079d-380, while my local file has an md5 hash of 702242d3703818ddefe6bf7da2bed757. I think the number after the dash is the number of parts in the multipart upload, and I suspect that the new ETag (the part before the dash) is still an MD5 hash, but with some metadata from the multipart upload mixed in somehow.
Does anyone know how to compute the ETag using the same algorithm as Amazon S3?
Calculating the S3 ETag for a local file

1. Read the file in chunks matching the part size that was used for the upload. (For the 380-part example above, that works out to 15 MB parts: 5,970,150,664 bytes divided into 15,728,640-byte chunks rounds up to 380.)
2. Calculate the MD5 checksum for each chunk and store it for later use.
3. Calculate the MD5 hexdigest of the concatenated checksums.
4. Append a hyphen and the number of parts.
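Those steps are mechanical enough to script. Here's a minimal sketch, assuming Bash on macOS and an 8 MB part size (pass a different size as the second argument to match whatever your uploader used; the script name s3etag.sh is made up):

#!/bin/bash
# s3etag.sh <file> [part_mb] — compute the multipart ETag S3 would assign.
# Assumes macOS tools: stat -f%z, md5, and dd's lowercase "1m" block size.
# On Linux, use stat -c%s, md5sum (strip the filename), and bs=1M instead.
file="$1"
part_mb=${2:-8}                      # part size in MB; must match the upload
size=$(stat -f%z "$file")
parts=$(( (size + part_mb*1024*1024 - 1) / (part_mb*1024*1024) ))
checksums=""
for ((i = 0; i < parts; i++)); do
  # hex MD5 of each part, exactly like the manual dd | md5 steps further down
  checksums+=$(dd bs=1m count="$part_mb" skip=$((i * part_mb)) if="$file" 2>/dev/null | md5)
done
# MD5 of the binary-decoded concatenation, then "-<number of parts>"
echo "$(printf '%s' "$checksums" | xxd -r -p | md5)-$parts"

For the 14MB example further down, ./s3etag.sh someFile 5 should print the same ETag as the manual commands.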
Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. The largest object that can be uploaded in a single PUT is 5 GB. For objects larger than 100 MB, customers should consider using the Multipart Upload capability.
Each file on S3 gets an ETag, which is essentially the md5 checksum of that file. Comparing md5 hashes is really simple but Amazon calculates the checksum differently if you've used the multipart upload feature.
When you upload large files to Amazon S3, it's a best practice to leverage multipart uploads. If you're using the AWS Command Line Interface (AWS CLI), then all high-level aws s3 commands automatically perform a multipart upload when the object is large. These high-level commands include aws s3 cp and aws s3 sync.
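Those defaults are configurable, which matters here because the ETag math only works if you know the part size. As a sketch, assuming the standard ~/.aws/config location, you could pin the values explicitly:

# ~/.aws/config — multipart settings used by the high-level "aws s3" commands
[default]
s3 =
  multipart_threshold = 8MB
  multipart_chunksize = 8MB

With the chunk size pinned, every multipart ETag on that machine is computed over the same 8MB parts.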
Say you uploaded a 14MB file to a bucket without server-side encryption, and your part size is 5MB. Calculate 3 MD5 checksums corresponding to each part, i.e. the checksum of the first 5MB, the second 5MB, and the last 4MB. Then take the checksum of their concatenation. MD5 checksums are often printed as hex representations of binary data, so make sure you take the MD5 of the decoded binary concatenation, not of the ASCII or UTF-8 encoded concatenation. When that's done, add a hyphen and the number of parts to get the ETag.
Here are the commands to do it on Mac OS X from the console:
$ dd bs=1m count=5 skip=0 if=someFile | md5 >>checksums.txt
5+0 records in
5+0 records out
5242880 bytes transferred in 0.019611 secs (267345449 bytes/sec)
$ dd bs=1m count=5 skip=5 if=someFile | md5 >>checksums.txt
5+0 records in
5+0 records out
5242880 bytes transferred in 0.019182 secs (273323380 bytes/sec)
$ dd bs=1m count=5 skip=10 if=someFile | md5 >>checksums.txt
2+1 records in
2+1 records out
2599812 bytes transferred in 0.011112 secs (233964895 bytes/sec)
At this point all the checksums are in checksums.txt. To concatenate them, decode the hex, and get the MD5 checksum of the lot, just use:
$ xxd -r -p checksums.txt | md5
And now append "-3" to get the ETag, since there were 3 parts.
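To check your result against what S3 actually stored, you can ask for the object's ETag directly (my-bucket and someFile are placeholders):

$ aws s3api head-object --bucket my-bucket --key someFile --query ETag --output text

Note that S3 returns the ETag wrapped in literal double quotes, so strip those before comparing.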
Notes

- If you uploaded with aws s3 cp, then you most likely have an 8MB chunk size. According to the docs, that is the default.
- Alternatively, you can send a Content-MD5 header with your upload and S3 will compare it for you.
- md5 on macOS just writes out the checksum, but md5sum on Linux/brew also outputs the filename. You'll need to strip that, but I'm sure there's some option to only output the checksums; one way is sketched after these notes.
- You don't need to worry about whitespace in checksums.txt because xxd will ignore it.
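For the Linux flavor, here's a sketch of stripping the filename with awk (GNU dd spells the block size 1M, and md5sum prints the input name, here -, after the digest):

$ dd bs=1M count=5 skip=0 if=someFile 2>/dev/null | md5sum | awk '{print $1}' >> checksums.txt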