I have a Jenkins job which uploads a pretty small bash file (less than 1 MB) to an S3 bucket. It works most of the time but fails once in a while with the following error:
upload failed: build/xxxxxxx/test.sh The read operation timed out
The error above comes directly from the aws cli operation. I am thinking it could either be a network issue, or maybe the disk read operation is not available at the time. How do I set an option to retry when this happens? Also, is there a timeout I can increase? I searched the CLI documentation, googled, and checked out 'aws s3api', but don't see any such option.
If such an option does not exist, how do folks get around this? Wrap the command to check the error code and reattempt?
The following cp command uploads a local file stream from standard input to a specified bucket and key: aws s3 cp - s3://mybucket/stream.txt. The same command can also upload a local file stream that is larger than 50GB, for example a 51GB stream from standard input, to a specified bucket and key.
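For illustration, a minimal sketch of both cases; the bucket name, keys, and piped data are placeholder assumptions. For streams above the 50GB mark, the cp command's --expected-size option tells the CLI the total size it cannot infer from stdin:

    # Stream a small payload from standard input to an assumed bucket/key
    echo "hello world" | aws s3 cp - s3://mybucket/stream.txt

    # For a stream larger than 50GB, pass the expected size in bytes
    # (here ~51GB) so the multipart parts can be sized correctly
    tar -czf - /path/to/data | aws s3 cp - s3://mybucket/stream.tar.gz --expected-size 54760833024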
If you send the file to an existing key, it will overwrite that file once the upload is complete.
It's a best practice to use aws s3 commands (such as aws s3 cp) for multipart uploads and downloads, because these commands automatically perform multipart uploading and downloading based on the file size; that behavior can also be tuned, as sketched below.
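As a sketch, the size thresholds that trigger multipart transfers can be adjusted through the CLI's s3 configuration; the 64MB/16MB values below are arbitrary assumptions, not recommendations:

    # Files above this size are transferred in multiple parts (the default threshold is 8MB)
    aws configure set default.s3.multipart_threshold 64MB

    # Size of each part once a multipart transfer kicks in
    aws configure set default.s3.multipart_chunksize 16MB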
I ended up writing a wrapper around the s3 command to retry, and also to get a debug trace on the last attempt. It might help folks:
# Purpose: Allow retry while uploading files to an s3 bucket
# Params:
#   $1 : local file to copy to s3
#   $2 : s3 bucket path
#   $3 : AWS bucket region
#
function upload_to_s3 {
    local n=0
    until [ "$n" -gt 2 ]
    do
        if [ "$n" -eq 2 ]; then
            # Last attempt: rerun with --debug so a failure leaves a full trace
            aws s3 cp --debug "$1" "$2" --region "$3"
            return $?
        else
            # Earlier attempts: break out of the retry loop on success
            aws s3 cp "$1" "$2" --region "$3" && break
        fi
        n=$((n+1))
        sleep 30
    done
}
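Example call, with placeholder values for the file, bucket path, and region:

    upload_to_s3 build/test.sh s3://mybucket/scripts/ us-east-1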