I am trying to sync a few large buckets on Amazon S3.
When I run my s3cmd sync --recursive command I get a response saying "Killed".
Does anyone know what this may refer to? Is there a limit on the number of files that can be synced in S3?
Thanks for your help
Instead of using the Amazon S3 console, try uploading the file using the AWS Command Line Interface (AWS CLI) or an AWS SDK. Note: If you use the Amazon S3 console, the maximum file size for uploads is 160 GB. To upload a file that is larger than 160 GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API.
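For example, a rough sketch with the AWS CLI (the bucket and path names here are placeholders): the CLI switches to multipart upload automatically for large files, so a plain cp or sync is usually enough.

    # upload one large file
    aws s3 cp ./backup.tar.gz s3://my-bucket/backups/backup.tar.gz
    # or sync a whole local directory
    aws s3 sync ./local-dir s3://my-bucket/local-dir/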
Amazon S3 provides a faster, easier, and more flexible way to upload large files, known as multipart upload. This feature lets you break a large object into smaller chunks and upload a number of chunks in parallel. If any chunk fails to upload, you can retry just that chunk.
An object in S3 can range from 0 bytes up to 5 terabytes, but a single PUT is limited to 5 gigabytes. So if you are looking to upload an object larger than 5 GB, you need to either use multipart upload or split the file into logical chunks of up to 5 GB and upload them manually as regular uploads.
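If you want to stay with s3cmd, recent versions support multipart uploads as well; a sketch, assuming placeholder bucket and file names, with the chunk size raised from the default:

    # files larger than the chunk size are uploaded as multipart automatically
    s3cmd put --multipart-chunk-size-mb=100 ./bigfile.bin s3://my-bucket/bigfile.bin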
Upload an object in a single operation using the AWS SDKs, REST API, or AWS CLI: with a single PUT operation, you can upload a single object up to 5 GB in size. Upload a single object using the Amazon S3 console: with the console, you can upload a single object up to 160 GB in size.
After reading around, it looks like s3cmd has known memory consumption issues. In particular, this can cause the OOM killer (out-of-memory killer) to take down the process to keep the system from getting bogged down. A quick look at dmesg after the process is killed will generally show whether this is the case.
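For example (the exact wording of the log line varies by kernel, so treat these patterns as a starting point):

    # kernel ring buffer
    dmesg | grep -i -E "out of memory|killed process"
    # or, on systemd machines, kernel messages via the journal
    journalctl -k | grep -i "killed process"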
With that in mind, I would make sure you're on the latest s3cmd release; its release notes mention fixes for memory consumption issues.