I have a zip file loaded into memory (it is not persisted on disk). The zip contains jpg images. I am trying to upload each jpg to S3 but am getting an error.
# already have an opened zipfile stored in zip_file
# already connected to s3
files = zip_file.namelist()
for f in files:
    im = io.BytesIO(zip_file.read(f))
    s3_key.key = f
    s3_key.set_contents_from_stream(im)
I get the following error:
BotoClientError: BotoClientError: s3 does not support chunked transfer
What am I doing wrong?
Here is the solution. I was overthinking the problem.
files = zip_file.namelist()
for f in files:
    data = zip_file.read(f)
    s3_key.key = f
    s3_key.set_contents_from_string(data)
That's all it took.
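For anyone who wants to reproduce the pattern end to end, here is a minimal, self-contained sketch: build a zip in memory, iterate its entries, and collect each entry's bytes exactly as they would be handed to set_contents_from_string. The `fake_bucket` dict is a stand-in for the real S3 key object, since credentials and a live connection are outside the scope of this snippet.

```python
import io
import zipfile

# Build an in-memory zip with a couple of fake jpg entries
# (stands in for the zip file already loaded in memory).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("photo1.jpg", b"\xff\xd8\xff fake jpeg bytes 1")
    zf.writestr("photo2.jpg", b"\xff\xd8\xff fake jpeg bytes 2")
buf.seek(0)

zip_file = zipfile.ZipFile(buf)

# Stand-in for S3: maps key name -> uploaded bytes.
fake_bucket = {}

for name in zip_file.namelist():
    data = zip_file.read(name)   # the whole entry as a bytes string
    fake_bucket[name] = data     # real code: s3_key.key = name; s3_key.set_contents_from_string(data)

print(sorted(fake_bucket))
```

The key point is that `zip_file.read(name)` returns the complete decompressed bytes, so there is nothing left to stream: each upload has a known length up front.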
Boto supports other storage services besides S3, such as Google Cloud Storage. The set_contents_from_stream method only works for services that support chunked transfer encoding (see https://codereview.appspot.com/4515170); S3 does not (see the Technical FAQs at http://aws.amazon.com/articles/1109).
It's unfortunate, but you can't upload from a stream to S3.
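If what you have is a file-like stream rather than the bytes themselves, the practical workaround with boto's S3 API is to drain the stream into memory first and pass the resulting string to set_contents_from_string. A sketch, using a BytesIO as a stand-in for whatever upstream stream you actually have:

```python
import io

# Stand-in for a stream coming from somewhere upstream.
stream = io.BytesIO(b"jpeg bytes from some upstream source")

# Drain the stream fully. S3 requires the Content-Length up front,
# which is why boto's chunked set_contents_from_stream is rejected.
data = stream.read()

# real code: s3_key.set_contents_from_string(data)
print(len(data))
```

This obviously costs memory proportional to the object size, so it only works when each object fits comfortably in RAM, as the jpgs here do.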