EntityTooSmall in CompleteMultipartUploadResponse

Using .NET SDK v1.5.21.0.

I'm trying to upload a large file (63 MB), following the example at:

http://docs.aws.amazon.com/AmazonS3/latest/dev/LLuploadFileDotNet.html

But I'm using a helper instead of the whole code, together with jQuery File Upload:

https://github.com/blueimp/jQuery-File-Upload/blob/master/basic-plus.html

What I have is:

string bucket = "mybucket";

long totalSize = long.Parse(context.Request.Headers["X-File-Size"]),
        maxChunkSize = long.Parse(context.Request.Headers["X-File-MaxChunkSize"]),
        uploadedBytes = long.Parse(context.Request.Headers["X-File-UloadedBytes"]),
        partNumber = uploadedBytes / maxChunkSize + 1,
        fileSize = partNumber * inputStream.Length;

bool lastPart = inputStream.Length < maxChunkSize;

// http://docs.aws.amazon.com/AmazonS3/latest/dev/LLuploadFileDotNet.html
if (partNumber == 1) // initialize upload
{
    iView.Utilities.Amazon_S3.S3MultipartUpload.InitializePartToCloud(fileName, bucket);
}

try
{
    // upload part
    iView.Utilities.Amazon_S3.S3MultipartUpload.UploadPartToCloud(inputStream, fileName, bucket, (int)partNumber, uploadedBytes, maxChunkSize);

    if (lastPart)
        // wrap it up and go home
        iView.Utilities.Amazon_S3.S3MultipartUpload.CompletePartToCloud(fileName, bucket);

}
catch (System.Exception ex)
{
    // Houston, we have a problem!
    //Console.WriteLine("Exception occurred: {0}", ex.Message);
    iView.Utilities.Amazon_S3.S3MultipartUpload.AbortPartToCloud(fileName, bucket);
}

and

public static class S3MultipartUpload
{
    private static string accessKey = System.Configuration.ConfigurationManager.AppSettings["AWSAccessKey"];
    private static string secretAccessKey = System.Configuration.ConfigurationManager.AppSettings["AWSSecretKey"];
    private static AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(accessKey, secretAccessKey);
    public static InitiateMultipartUploadResponse initResponse;
    public static List<UploadPartResponse> uploadResponses;

    public static void InitializePartToCloud(string destinationFilename, string destinationBucket)
    {
        // 1. Initialize.
        uploadResponses = new List<UploadPartResponse>();

        InitiateMultipartUploadRequest initRequest =
            new InitiateMultipartUploadRequest()
            .WithBucketName(destinationBucket)
            .WithKey(destinationFilename.TrimStart('/'));

        initResponse = client.InitiateMultipartUpload(initRequest);
    }
    public static void UploadPartToCloud(Stream fileStream, string destinationFilename, string destinationBucket, int partNumber, long uploadedBytes, long maxChunkedBytes)
    {
        // 2. Upload Parts.
        UploadPartRequest request = new UploadPartRequest()
            .WithBucketName(destinationBucket)
            .WithKey(destinationFilename.TrimStart('/'))
            .WithUploadId(initResponse.UploadId)
            .WithPartNumber(partNumber)
            .WithPartSize(maxChunkedBytes)
            .WithFilePosition(uploadedBytes)
            .WithInputStream(fileStream) as UploadPartRequest;

        uploadResponses.Add(client.UploadPart(request));
    }
    public static void CompletePartToCloud(string destinationFilename, string destinationBucket)
    {
        // Step 3: complete.
        CompleteMultipartUploadRequest compRequest =
            new CompleteMultipartUploadRequest()
            .WithBucketName(destinationBucket)
            .WithKey(destinationFilename.TrimStart('/'))
            .WithUploadId(initResponse.UploadId)
            .WithPartETags(uploadResponses);

        CompleteMultipartUploadResponse completeUploadResponse =
            client.CompleteMultipartUpload(compRequest);
    }
    public static void AbortPartToCloud(string destinationFilename, string destinationBucket)
    {
        // abort.
        client.AbortMultipartUpload(new AbortMultipartUploadRequest()
                .WithBucketName(destinationBucket)
                .WithKey(destinationFilename.TrimStart('/'))
                .WithUploadId(initResponse.UploadId));
    }
}

My maxChunkSize is 6 MB (6 * 1024 * 1024), as I have read that the minimum part size is 5 MB...

Why am I getting the "Your proposed upload is smaller than the minimum allowed size" exception? What am I doing wrong?

The error is:

<Error>
  <Code>EntityTooSmall</Code>
  <Message>Your proposed upload is smaller than the minimum allowed size</Message>
  <ETag>d41d8cd98f00b204e9800998ecf8427e</ETag>
  <MinSizeAllowed>5242880</MinSizeAllowed>
  <ProposedSize>0</ProposedSize>
  <RequestId>C70E7A23C87CE5FC</RequestId>
  <HostId>pmhuMXdRBSaCDxsQTHzucV5eUNcDORvKY0L4ZLMRBz7Ch1DeMh7BtQ6mmfBCLPM2</HostId>
  <PartNumber>1</PartNumber>
</Error>

How can the ProposedSize be 0 if I'm passing the stream and the stream length?

asked May 30 '13 by balexandre



1 Answer

Here is a working solution for the latest Amazon SDK (as of today: v1.5.37.0).

Amazon S3 multipart upload works like this:

  1. Initialize the request using client.InitiateMultipartUpload(initRequest)
  2. Send chunks of the file (loop until the end) using client.UploadPart(request)
  3. Complete the request using client.CompleteMultipartUpload(compRequest)
  4. If anything goes wrong, remember to dispose of the client and the request, as well as fire the abort command using client.AbortMultipartUpload(abortMultipartUploadRequest). A condensed sketch of the whole flow follows this list.
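
Condensed into one method, and leaving the web handler out of it, the four steps look roughly like this against the v1-era SDK. It follows the FilePath/FilePosition style of the AWS sample linked in the question; the method name, bucket, key, and 6 MB part size are illustrative assumptions, not code from this post:

public static void UploadFileInParts(AmazonS3 client, string filePath, string bucket, string key)
{
    long partSize = 6 * 1024 * 1024; // every part except the last must be >= 5 MB
    long fileLength = new System.IO.FileInfo(filePath).Length;

    // 1. Initialize
    InitiateMultipartUploadResponse init = client.InitiateMultipartUpload(
        new InitiateMultipartUploadRequest { BucketName = bucket, Key = key });

    List<PartETag> partETags = new List<PartETag>();
    try
    {
        // 2. Upload parts, looping until the end of the file
        int partNumber = 1;
        for (long position = 0; position < fileLength; position += partSize, partNumber++)
        {
            UploadPartResponse up = client.UploadPart(new UploadPartRequest
            {
                BucketName = bucket,
                Key = key,
                UploadId = init.UploadId,
                PartNumber = partNumber,
                PartSize = Math.Min(partSize, fileLength - position), // last part may be smaller
                FilePosition = position,
                FilePath = filePath
            });
            partETags.Add(new PartETag { ETag = up.ETag, PartNumber = partNumber });
        }

        // 3. Complete, handing back the collected part numbers and ETags
        client.CompleteMultipartUpload(new CompleteMultipartUploadRequest
        {
            BucketName = bucket,
            Key = key,
            UploadId = init.UploadId,
            PartETags = partETags
        });
    }
    catch
    {
        // 4. Abort, so S3 does not keep the already-uploaded parts around
        client.AbortMultipartUpload(new AbortMultipartUploadRequest
        {
            BucketName = bucket,
            Key = key,
            UploadId = init.UploadId
        });
        throw;
    }
}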

I keep the client in Session, as we need it for each chunk upload, and I also keep hold of the ETags, which are later used to complete the process.


You can see an example and a simple way of doing this in the Amazon docs themselves; I ended up writing a class to do everything, and I have integrated it with the lovely jQuery File Upload plugin (handler code below as well).

The S3MultipartUpload class is as follows:

public class S3MultipartUpload : IDisposable
{
    string accessKey = System.Configuration.ConfigurationManager.AppSettings.Get("AWSAccessKey");
    string secretAccessKey = System.Configuration.ConfigurationManager.AppSettings.Get("AWSSecretKey");

    AmazonS3 client;
    public string OriginalFilename { get; set; }
    public string DestinationFilename { get; set; }
    public string DestinationBucket { get; set; }

    public InitiateMultipartUploadResponse initResponse;
    public List<PartETag> uploadPartETags;
    public string UploadId { get; private set; }

    public S3MultipartUpload(string destinationFilename, string destinationBucket)
    {
        if (client == null)
        {
            System.Net.WebRequest.DefaultWebProxy = null; // disable proxy to make upload quicker

            client = Amazon.AWSClientFactory.CreateAmazonS3Client(accessKey, secretAccessKey, new AmazonS3Config()
            {
                RegionEndpoint = Amazon.RegionEndpoint.EUWest1,
                CommunicationProtocol = Protocol.HTTP
            });

            this.OriginalFilename = destinationFilename.TrimStart('/');
            this.DestinationFilename = string.Format("{0:yyyy}{0:MM}{0:dd}{0:HH}{0:mm}{0:ss}{0:fffff}_{1}", DateTime.UtcNow, this.OriginalFilename);
            this.DestinationBucket = destinationBucket;

            this.InitializePartToCloud();
        }
    }

    private void InitializePartToCloud()
    {
        // 1. Initialize.
        uploadPartETags = new List<PartETag>();

        InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest();
        initRequest.BucketName = this.DestinationBucket;
        initRequest.Key = this.DestinationFilename;

        // make it public
        initRequest.AddHeader("x-amz-acl", "public-read");

        initResponse = client.InitiateMultipartUpload(initRequest);
    }
    public void UploadPartToCloud(Stream fileStream, long uploadedBytes, long maxChunkedBytes)
    {
        int partNumber = uploadPartETags.Count() + 1; // current part

        // 2. Upload Parts.
        UploadPartRequest request = new UploadPartRequest();
        request.BucketName = this.DestinationBucket;
        request.Key = this.DestinationFilename;
        request.UploadId = initResponse.UploadId;
        request.PartNumber = partNumber;
        request.PartSize = fileStream.Length;
        // FilePosition only applies when uploading directly from a file path
        // (FilePath); with a per-chunk stream it must be left unset, otherwise
        // the SDK can end up sending 0-byte parts
        request.InputStream = fileStream;

        var up = client.UploadPart(request);
        uploadPartETags.Add(new PartETag() { ETag = up.ETag, PartNumber = partNumber });
    }
    public string CompletePartToCloud()
    {
        // Step 3: complete.
        CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest();
        compRequest.BucketName = this.DestinationBucket;
        compRequest.Key = this.DestinationFilename;
        compRequest.UploadId = initResponse.UploadId;
        compRequest.PartETags = uploadPartETags;

        string r = "Something went badly wrong";

        using (CompleteMultipartUploadResponse completeUploadResponse = client.CompleteMultipartUpload(compRequest))
            r = completeUploadResponse.ResponseXml;

        return r;
    }
    public void AbortPartToCloud()
    {
        // abort.
        client.AbortMultipartUpload(new AbortMultipartUploadRequest()
        {
            BucketName = this.DestinationBucket,
            Key = this.DestinationFilename,
            UploadId = initResponse.UploadId
        });
    }

    public void Dispose()
    {
        if (client != null) client.Dispose();
        if (initResponse != null) initResponse.Dispose();
    }
}

I use DestinationFilename as the destination file so I can avoid name collisions, but I keep the OriginalFilename as I need it later.
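
Outside the web handler shown next, the same class can also be driven over a local file. Here is a minimal sketch (the file path, bucket name, and 6 MB chunk size are assumptions); each chunk is wrapped in its own MemoryStream, because UploadPartToCloud uses fileStream.Length as the part size:

S3MultipartUpload s3Upload = new S3MultipartUpload("/video.mp4", "my-bucket");
try
{
    using (var file = System.IO.File.OpenRead(@"C:\temp\video.mp4"))
    {
        byte[] buffer = new byte[6 * 1024 * 1024];
        long uploadedBytes = 0;
        int read;
        while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
        {
            // only the bytes actually read belong to this part
            using (var chunk = new System.IO.MemoryStream(buffer, 0, read))
                s3Upload.UploadPartToCloud(chunk, uploadedBytes, buffer.Length);
            uploadedBytes += read;
        }
    }
    s3Upload.CompletePartToCloud();
}
catch
{
    s3Upload.AbortPartToCloud();
    throw;
}
finally
{
    s3Upload.Dispose();
}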

Using the jQuery File Upload plugin, everything works inside a Generic Handler, and the process is something like this:

// Upload partial file
private void UploadPartialFile(string fileName, HttpContext context, List<FilesStatus> statuses)
{
    if (context.Request.Files.Count != 1)
        throw new HttpRequestValidationException("Attempt to upload chunked file containing more than one fragment per request");

    var inputStream = context.Request.Files[0].InputStream;
    string contentRange = context.Request.Headers["Content-Range"]; // "bytes 0-6291455/14130271"

    int fileSize = int.Parse(contentRange.Split('/')[1]),
        maxChunkSize = int.Parse(context.Request.Headers["X-Max-Chunk-Size"]),
        uploadedBytes = int.Parse(contentRange.Replace("bytes ", "").Split('-')[0]);

    iView.Utilities.AWS.S3MultipartUpload s3Upload = null;

    try
    {

        // ######################################################################################
        // 1. Initialize Amazon S3 Client
        if (uploadedBytes == 0)
        {
            HttpContext.Current.Session["s3-upload"] = new iView.Utilities.AWS.S3MultipartUpload(fileName, awsBucket);

            s3Upload = (iView.Utilities.AWS.S3MultipartUpload)HttpContext.Current.Session["s3-upload"];
            string msg = System.String.Format("Upload started: {0} ({1:N0} MB)", s3Upload.DestinationFilename, (fileSize / 1024 / 1024));
            this.Log(msg);
        }

        // cast current session object
        if (s3Upload == null)
            s3Upload = (iView.Utilities.AWS.S3MultipartUpload)HttpContext.Current.Session["s3-upload"];

        // ######################################################################################
        // 2. Send Chunks
        s3Upload.UploadPartToCloud(inputStream, uploadedBytes, maxChunkSize);

        // ######################################################################################
        // 3. Complete Upload
        if (uploadedBytes + maxChunkSize > fileSize)
        {
            string completeRequest = s3Upload.CompletePartToCloud();
            this.Log(completeRequest); // log S3 response

            s3Upload.Dispose(); // dispose all objects
            HttpContext.Current.Session["s3-upload"] = null; // we don't need this anymore
        }

    }
    catch (System.Exception ex)
    {
        // unwrap to the innermost exception for a meaningful message
        while (ex.InnerException != null)
            ex = ex.InnerException;

        this.Log(string.Format("{0}\n\n{1}", ex.Message, ex.StackTrace)); // log error

        if (s3Upload != null) // may still be null if parsing the headers failed
        {
            s3Upload.AbortPartToCloud(); // abort current upload
            s3Upload.Dispose(); // dispose all objects
        }

        statuses.Add(new FilesStatus(ex.Message));
        return;
    }

    statuses.Add(new FilesStatus(s3Upload.DestinationFilename, fileSize, ""));
}

Keep in mind that to have a Session object inside a Generic Handler, you need to implement IRequiresSessionState so your handler will look like:

public class UploadHandlerSimple : IHttpHandler, IRequiresSessionState
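
A minimal skeleton of such a handler might look like this (the routing and the JSON reply are assumptions for illustration; only the UploadPartialFile method above appears in this post):

public class UploadHandlerSimple : IHttpHandler, IRequiresSessionState
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        var statuses = new List<FilesStatus>();

        // hypothetical routing: each POST carries exactly one chunk of one file
        if (context.Request.HttpMethod == "POST" && context.Request.Files.Count == 1)
            UploadPartialFile(context.Request.Files[0].FileName, context, statuses);

        // reply to the jQuery File Upload plugin with the upload status
        context.Response.ContentType = "application/json";
        context.Response.Write(
            new System.Web.Script.Serialization.JavaScriptSerializer().Serialize(statuses));
    }

    // UploadPartialFile(fileName, context, statuses) as shown above
}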

Inside fileupload.js (under _initXHRData) I have added an extra header called X-Max-Chunk-Size, so the handler can work out whether the current chunk is the last part of the uploaded file.


Feel free to comment and make smart edits for everyone to use.

answered Sep 28 '22 by balexandre