I'm trying to get a huge list of files from an AWS S3 bucket with this command:
aws s3 ls --human-readable --recursive my-directory
This directory contains tens of thousands of files, so sometimes, after a long pause, I get this error:
('The read operation timed out',)
I've tried the --page-size parameter with different values, but it didn't help. How can I fix this error?
You can write a script that keeps looping aws s3 sync until the transfer is done. The script would look like this:
while :
do
    # Retry the sync until it succeeds; ./local-dir is a placeholder destination.
    aws s3 sync s3://bucket/path-to-files ./local-dir && break
done
This retries whenever the transfer fails. Because aws s3 sync only downloads files that are missing or incomplete locally, each pass effectively resumes where the previous one left off, and the loop exits once a sync pass completes successfully.
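If you would rather not loop forever when the failure is persistent, a minimal variant with a capped number of retries could look like the sketch below; the bucket path and local destination are the same placeholders as above, so adjust both to your setup.

for attempt in 1 2 3 4 5
do
    # Stop as soon as a sync pass succeeds; otherwise report the failure and retry.
    aws s3 sync s3://bucket/path-to-files ./local-dir && break
    echo "Sync attempt $attempt failed, retrying..." >&2
done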