I know Amazon S3 added multipart upload for huge files. That's great. What I also need is similar functionality on the client side, for customers who get part way through downloading a gigabyte-plus file and then hit errors.
I realize browsers have some level of retry and resume built in, but when you're talking about huge files I'd like to be able to pick up where they left off regardless of the type of error.
Any ideas?
Thanks, Brian
With S3 Browser you can download large files from Amazon S3 at the maximum speed possible, using your full bandwidth. This is made possible by a feature called Multipart Downloads: S3 Browser breaks large files into smaller parts and downloads them in parallel, achieving significantly higher download speeds.
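If you would rather do the same thing in your own code than use a GUI tool, here is a rough sketch of the idea in Python with boto3: split the object into byte ranges and fetch them in parallel. The bucket name, key, part size, and worker count are placeholders, not anything specific to S3 Browser.

```python
import concurrent.futures
import boto3

# Placeholder bucket/key -- substitute your own.
BUCKET = "my-bucket"
KEY = "big-file.bin"
PART_SIZE = 64 * 1024 * 1024  # 64 MB per part (tunable)

s3 = boto3.client("s3")

def fetch_range(start, end):
    """Download one byte range of the object."""
    resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range=f"bytes={start}-{end}")
    return start, resp["Body"].read()

def parallel_download(dest_path, max_workers=8):
    size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
    ranges = [(off, min(off + PART_SIZE, size) - 1)
              for off in range(0, size, PART_SIZE)]
    with open(dest_path, "wb") as f:
        f.truncate(size)  # pre-allocate so parts can be written at their offsets
        with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
            for start, data in pool.map(lambda r: fetch_range(*r), ranges):
                f.seek(start)
                f.write(data)

parallel_download("big-file.bin")
```

In practice you may not even need to write this yourself: boto3's transfer layer (`s3.download_file`) already performs parallel ranged GETs for large objects.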
Note: If you use the Amazon S3 console, the maximum file size for uploads is 160 GB. To upload a file that is larger than 160 GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API.
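For the SDK route, a minimal sketch of a multipart upload with Python/boto3 might look like the following; the file name, bucket, and tuning values are illustrative, and boto3 splits the file into parts automatically once it crosses the configured threshold.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# 100 MB parts, up to 10 uploaded concurrently (values are illustrative).
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=100 * 1024 * 1024,
                        max_concurrency=10)

# upload_file performs a multipart upload automatically for large files.
s3.upload_file("huge-file.bin", "my-bucket", "huge-file.bin", Config=config)
```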
If you have Visual Studio with the AWS Explorer extension installed, you can also browse to Amazon S3 (step 1), select your bucket (step 2), select all the files you want to download (step 3), and right-click to download them all (step 4).
S3 supports the standard HTTP "Range" header if you want to build your own solution.
See the Amazon S3 documentation: Getting Objects.
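As a rough illustration of a resumable download built on Range requests (a sketch in Python with boto3; the bucket, key, and file path are placeholders), you can check how many bytes are already on disk and request only the remainder:

```python
import os
import boto3

s3 = boto3.client("s3")

def resume_download(bucket, key, dest_path):
    """Resume a partially downloaded object using an HTTP Range request."""
    total = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
    have = os.path.getsize(dest_path) if os.path.exists(dest_path) else 0
    if have >= total:
        return  # already complete
    # Request only the bytes we don't have yet.
    resp = s3.get_object(Bucket=bucket, Key=key, Range=f"bytes={have}-")
    with open(dest_path, "ab") as f:
        for chunk in resp["Body"].iter_chunks(chunk_size=1024 * 1024):
            f.write(chunk)

resume_download("my-bucket", "big-file.bin", "big-file.bin")
```

The same pattern works from any HTTP client (including presigned URLs handed to a customer), since the Range header is part of the standard GET request.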