
ERR_CONNECTION_RESET when PUTting to S3

I am PUTting files to S3 via AJAX requests, and about 50% of the time I get ERR_CONNECTION_RESET errors.

I know the requests are signed correctly -- any ideas what may be causing this? Again, this is an intermittent problem that I see from multiple locations and machines.

Here is the relevant CoffeeScript code I am using to PUT my files to S3. It is derived from Micah Roberson's and Rok Krulec's work at http://micahroberson.com/upload-files-directly-to-s3-w-backbone-on-heroku/ and http://codeartists.com/post/36892733572/how-to-directly-upload-files-to-amazon-s3-from-your.

  createCORSRequest: (method, url) ->
    xhr = new XMLHttpRequest()

    # Browsers with CORS support expose withCredentials on XMLHttpRequest
    if xhr.withCredentials?
      xhr.open method, url, true
    # IE8/9 only support cross-origin requests through XDomainRequest
    else if typeof XDomainRequest != "undefined"
      xhr = new XDomainRequest()
      xhr.open method, url
    else
      xhr = null

    xhr

  uploadToS3: (file, signature) ->
    this_s3upload = this
    this_s3upload.signature = signature
    url = signature.signed_request

    xhr = @createCORSRequest 'PUT', decodeURIComponent(signature.signed_request)

    if !xhr
      @onError 'CORS not supported'
    else
      xhr.onload = () ->
        if xhr.status == 200
          this_s3upload.onProgress 100, 'Upload completed.'
          this_s3upload.onFinishS3Put file, this_s3upload.signature
        else
          this_s3upload.onError file, 'Upload error: ' + xhr.status

      xhr.onerror = () ->
        this_s3upload.onError file, 'XHR error.', this_s3upload.signature

      xhr.upload.onprogress = (e) ->
        if e.lengthComputable
          percentLoaded = Math.round (e.loaded / e.total) * 100

          if percentLoaded == 100
            message = "Finalizing"
          else
            message = "Uploading"

          this_s3upload.onProgress xhr, file, percentLoaded, message, this_s3upload.signature

      xhr.onabort = ->
        this_s3upload.onAbort file, "XHR cancelled by user.", this_s3upload.signature

      # Only reached when we have a usable XHR (previously these ran even
      # when xhr was null); the headers must match what was signed.
      xhr.setRequestHeader 'Content-Type', file.type
      xhr.setRequestHeader 'x-amz-acl', 'public-read'
      xhr.send file
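For reference, cross-origin PUTs like this only work if the bucket has a CORS rule allowing them; a minimal sketch of setting one with the Node AWS SDK (the bucket name and origin below are placeholders, not my actual configuration):

const aws = require('aws-sdk');

const s3 = new aws.S3();

// Allow the browser at our origin to PUT objects, including the headers
// the upload code above sets. Bucket name and origin are placeholders.
s3.putBucketCors({
  Bucket: 'my-upload-bucket',
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['https://app.example.com'],
      AllowedMethods: ['PUT'],
      AllowedHeaders: ['Content-Type', 'x-amz-acl'],
      MaxAgeSeconds: 3000
    }]
  }
}, (err) => {
  if (err) console.log(err);
});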

Update

I've been getting very attentive support from Amazon on this issue. Per their suggestion, I created an EC2 Windows instance, loaded the Chrome browser on it, and attempted to upload 5 files 10 times with my code. I did not see the error once. I did see occasional SignatureDoesNotMatch errors, but not a single ERR_CONNECTION_RESET error. I am still seeing ERR_CONNECTION_RESET errors, though, on every non-EC2 client/network location I use.

Update 2

Still no solution. I have moved from a self-rolled signing algorithm to one provided by boto, with no impact on the ERR_CONNECTION_RESET issue.

asked Apr 01 '14 by Erik


1 Answer

I ran into this issue when uploading larger files (long-running requests) with pre-signed URLs, following Heroku's example (Node AWS SDK):

// Assumed setup from Heroku's example: Express app, AWS SDK v2, bucket via env.
const express = require('express');
const aws = require('aws-sdk');

const app = express();
const S3_BUCKET = process.env.S3_BUCKET;

app.get('/sign-s3', (req, res) => {
  const s3 = new aws.S3();
  const fileName = req.query['file-name'];
  const fileType = req.query['file-type'];
  const s3Params = {
    Bucket: S3_BUCKET,
    Key: fileName,
    Expires: 60,
    ContentType: fileType,
    ACL: 'public-read'
  };

  s3.getSignedUrl('putObject', s3Params, (err, data) => {
    if(err){
      console.log(err);
      return res.end();
    }
    const returnData = {
      signedRequest: data,
      url: `https://${S3_BUCKET}.s3.amazonaws.com/${fileName}`
    };
    res.write(JSON.stringify(returnData));
    res.end();
  });
});
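On the client side, the signedRequest value this endpoint returns is the URL the file is PUT to. A minimal sketch of the consuming side (an assumption for illustration: the endpoint is served from the same origin as the page, and the headers sent must match what was signed):

async function uploadViaSignedUrl(file) {
  // Ask the server above to sign a PUT for this file.
  const qs = `file-name=${encodeURIComponent(file.name)}` +
             `&file-type=${encodeURIComponent(file.type)}`;
  const { signedRequest, url } = await (await fetch(`/sign-s3?${qs}`)).json();

  // PUT the raw file body to S3. Content-Type and ACL were included in the
  // signature, so the matching headers have to be sent with the request.
  const res = await fetch(signedRequest, {
    method: 'PUT',
    headers: { 'Content-Type': file.type, 'x-amz-acl': 'public-read' },
    body: file
  });
  if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);
  return url; // public URL of the uploaded object
}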

The "Expires" parameter makes the signed URL valid for 60 seconds.

I figured out that the upload fails when the signed URL expires in the middle of the transfer, even though it was valid when the upload started.

It doesn't fail exactly at the 60-second mark, but randomly between 60 and 120 seconds. Most of the time the client logs ERR_CONNECTION_RESET; other times it logs 403 FORBIDDEN.

After cranking Expires up to 3600 seconds, I had no more issues.
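The change itself is just the one parameter in the signing params; a sketch, with everything else as in the example above:

const s3Params = {
  Bucket: S3_BUCKET,
  Key: fileName,
  Expires: 3600, // one hour: enough headroom for slow uplinks to finish the PUT
  ContentType: fileType,
  ACL: 'public-read'
};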

I suspect the issue didn't happen on EC2 because instances have very fast upload speeds, so the PUT finishes well before the URL expires.

answered Sep 17 '22 by Chatouille