I am using AmazonS3Client to upload files to the Amazon S3 file store. When I try to upload multiple files at a time (the same file from multiple threads), I get exceptions. I have tried the following client configuration:
1. connectionTimeout = 50000 (ms)
2. maxConnections = 500
3. socketTimeout = 50000 (ms)
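(For reference, that configuration would look roughly like this with the v1 Java SDK, assuming a credentials object already exists:)
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3Client;

ClientConfiguration config = new ClientConfiguration();
config.setConnectionTimeout(50000); // in ms
config.setMaxConnections(500);
config.setSocketTimeout(50000);     // in ms
AmazonS3Client s3Client = new AmazonS3Client(credentials, config); // credentials assumed to be built elsewhere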
Exception stacktrace:
com.amazonaws.AmazonClientException: Data read has a different length than the expected: dataLength=8192; expectedLength=79352; includeSkipped=false; in.getClass()=class com.amazonaws.internal.ResettableInputStream; markedSupported=true; marked=0; resetSinceLastMarked=false; markCount=1; resetCount=0
at com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:150)
at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:110)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:73)
at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:151)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:73)
at org.apache.http.entity.InputStreamEntity.writeTo(InputStreamEntity.java:98)
at com.amazonaws.http.RepeatableInputStreamRequestEntity.writeTo(RepeatableInputStreamRequestEntity.java:153)
at org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:98)
at org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
at org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:122)
at org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:271)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:197)
at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:257)
at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doSendRequest(SdkHttpRequestExecutor.java:47)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:713)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:518)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:647)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:441)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:292)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3655)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1424)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:135)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:127)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:129)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
This answer was written by Hanson from AWS:
Is it possible that the input stream that is specified in the request has already been fully read?
If the input stream is a file stream, have you tried specifying the original file in the request instead of the input stream of the file?
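For illustration, a minimal sketch of that suggestion, assuming an existing s3Client and reusing the bucket/key names from the answer below: passing the File itself lets the SDK determine the content length and re-open the file on retries, instead of reusing a stream that may already have been consumed.
import java.io.File;
import com.amazonaws.services.s3.model.PutObjectRequest;

// The File-based constructor lets the SDK compute the length and reset the source on retry.
File source = new File("D://Test.mp4");
s3Client.putObject(new PutObjectRequest("YashFiles", "local/mp4/Test.mp4", source)); // s3Client assumed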
Improving on @iucasddaniel's answer with sample code.
AmazonS3Client putObject warning: "No content length specified for stream data. Stream contents will be buffered in memory and could result in out of memory errors."
Solution: specify the content length on the ObjectMetadata.
import java.io.File;
import java.io.FileInputStream;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;
import org.apache.commons.io.IOUtils;

File tempFile = new File("D://Test.mp4");
String bucketName = "YashFiles", filePath = "local/mp4/";

// Read the file once to determine its length for the metadata.
FileInputStream sampleStream = new FileInputStream(tempFile);
byte[] byteArray = IOUtils.toByteArray(sampleStream);
Long contentLength = Long.valueOf(byteArray.length);
sampleStream.close();

ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(contentLength);

TransferManager tm = new TransferManager(credentials); // credentials built beforehand
FileInputStream stream = new FileInputStream(tempFile);
PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, filePath, stream, objectMetadata);
Upload myUpload = tm.upload(putObjectRequest);

if (!myUpload.isDone()) {
    System.out.println("Transfer: " + myUpload.getDescription());
    System.out.println(" - State: " + myUpload.getState());
    System.out.println(" - Progress: " + myUpload.getProgress().getBytesTransferred());
}
myUpload.waitForCompletion(); // throws InterruptedException
tm.shutdownNow();
stream.close();
org.apache.commons.io.FileUtils.forceDelete(tempFile);
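As a side note, since the source is already a local file, buffering it with IOUtils.toByteArray just to learn its size can be skipped: File.length() gives the same value without reading the file into memory.
objectMetadata.setContentLength(tempFile.length()); // size in bytes, no buffering required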
See also: Amazon S3: Checking Key Exists and generating PresignedUrl
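A rough sketch of those two operations, assuming a v1 s3Client and hypothetical bucketName/keyName variables (doesObjectExist requires a reasonably recent 1.x SDK):
import java.net.URL;
import java.util.Date;

if (s3Client.doesObjectExist(bucketName, keyName)) {
    // Pre-signed GET URL valid for one hour.
    Date expiration = new Date(System.currentTimeMillis() + 3600 * 1000);
    URL url = s3Client.generatePresignedUrl(bucketName, keyName, expiration);
    System.out.println("Presigned URL: " + url);
}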
I saw that error message when I was trying to do an S3.putObject(MyObject);
I had to update objectMetadata.setContentLength( [length of your content] );
For example:
// Requires java.io.ByteArrayInputStream and the S3 model classes.
String dataset = "Some value you want to add to S3 Bucket";
byte[] contentBytes = dataset.getBytes("UTF-8");
InputStream content = new ByteArrayInputStream(contentBytes);
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(contentBytes.length); // length in bytes, not characters
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
s3Client.putObject(new PutObjectRequest(bucketName, key, content, objectMetadata)); // s3Client, bucketName, key assumed
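Setting the content length up front lets the SDK send the Content-Length header immediately instead of buffering the whole stream to compute it. Note that InputStream.available() is only a reliable way to get that length for fully in-memory streams such as ByteArrayInputStream; using the byte array's length, as above, avoids that pitfall.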