I have a directory of M4V files (each around 1 GB) on my machine that I want to upload to my S3 bucket. I decided to try the AWS CLI so I can execute a command and let my computer do the rest, but it doesn’t seem to work.
The command I’m issuing is:
aws s3 cp . s3://yourfightsite-vod/videos/output/m4v --recursive --acl private
But running this command returns output like the following:
upload failed: ./54cffd1ad106d.m4v to s3://yourfightsite-vod/videos/output/m4v/54cffd1ad106d.m4v HTTPSConnectionPool(host='yourfightsite-vod.s3.amazonaws.com', port=443): Max retries exceeded with url: /videos/output/m4v/54cffd1ad106d.m4v?partNumber=4&uploadId=oG.0CBqIpsRcxO.ZqLIgOOBi8g9JFOKD8wQrmrNFa6Cx9LvGY9_PXiqaaVm6X3fIzXbCor8QSMEeqCfovtivHNFVyea8UNoxrVTpTEvM3ibGBxF30HGPkrxWuA83k6gj (Caused by : Errno 32 Broken pipe)
What does this mean? What is a “broken pipe” and how can I rectify this so my uploads are successful?
The s3 cp command copies objects to or from a bucket or a local directory.
If you're using the AWS Command Line Interface (AWS CLI), then all high-level aws s3 commands automatically perform a multipart upload when the object is large. These high-level commands include aws s3 cp and aws s3 sync.
What is a “broken pipe” and how can I rectify this so my uploads are successful?
"Broken pipe" means you've lost your connection. It could be a problem on Amazon's side, it could be a problem on your side... who knows. The point is that you were communicating, and now you are not.
The best resolution is to use multipart uploads. In its own documentation, Amazon recommends multipart uploads for files larger than 100 MB. It looks like the CLI tool might be doing this already.
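The points at which the CLI switches to multipart uploads are configurable if you want smaller parts, so a dropped connection costs you less rework. A minimal sketch; the sizes below are illustrative, not recommendations:

```shell
# Start multipart uploads for files over 64 MB (the CLI default is 8 MB)
aws configure set default.s3.multipart_threshold 64MB

# Upload in 16 MB parts so a broken connection loses at most one part
aws configure set default.s3.multipart_chunksize 16MB
```

These settings are written to your ~/.aws/config and apply to all subsequent high-level s3 commands like cp and sync.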
The second half of the resolution is for your code to catch and handle errors like this gracefully (i.e., retry a few times, then raise the alarm).
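That retry-then-alarm pattern can be sketched as a small shell wrapper. This is a minimal sketch, not a definitive implementation; the retry function and MAX_TRIES are made up for illustration:

```shell
#!/bin/sh
# Run a command, retrying up to MAX_TRIES times with a short pause,
# and fail loudly if it never succeeds.
MAX_TRIES=3

retry() {
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$MAX_TRIES" ]; then
      echo "command failed after $MAX_TRIES attempts: $*" >&2
      return 1
    fi
    sleep 1  # a real script might back off exponentially here
  done
}

# Hypothetical usage with the upload from the question:
# retry aws s3 cp . s3://yourfightsite-vod/videos/output/m4v --recursive --acl private
```

Note that recent versions of the AWS CLI also retry transient failures on their own, so a wrapper like this is a second line of defense, not a replacement.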
Just sharing my experience! I know it's a bit late, but yesterday I faced the same problem, even after a whole year of running the same configuration without any issue. Some files would upload while others would not, with no apparent pattern. Using the --debug option, I saw a warning about the wrong region for the files that failed. I changed the region in my s3cmd configuration and the problem was fixed!
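If you suspect a region mismatch with the AWS CLI, you can ask S3 where the bucket actually lives and point the CLI at that region. A sketch (requires valid credentials; the bucket name is from the question and the region value is illustrative):

```shell
# Ask S3 which region the bucket was created in
# (a null/empty LocationConstraint means us-east-1)
aws s3api get-bucket-location --bucket yourfightsite-vod

# Set the CLI's default region to match (us-west-2 here is just an example)
aws configure set region us-west-2
```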
In my case it was a newline character (\n) in the AWS bucket URL.
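A stray newline like that is easy to strip defensively before handing the URL to the CLI. A minimal sketch; the variable names are made up, and the trailing newline is simulated as if it had been read from a config file:

```shell
#!/bin/sh
# Simulate a bucket URL that picked up a trailing newline
BUCKET_URL='s3://yourfightsite-vod/videos/output/m4v
'

# Remove any newline characters before using the URL
CLEAN_URL=$(printf '%s' "$BUCKET_URL" | tr -d '\n')

# Hypothetical usage:
# aws s3 cp . "$CLEAN_URL" --recursive --acl private
```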