
Amazon S3 CORS PUT fails

I'm trying to upload a large file (1.5 GB) to Amazon S3 using the REST API and HTML5 file slicing. Here's what the upload code looks like (stripped down for readability):

File.prototype.slice = File.prototype.webkitSlice || File.prototype.mozSlice || File.prototype.slice;

var length = u.settings.chunk_size; // 6MB
var start = chunk * length;
var end = Math.min(start + length, u.file.size);

var xhr = new XMLHttpRequest();
var path = "/" + u.settings.key;

path += "?partNumber=" + chunk + "&uploadId=" + u.upload_id;

var method = "PUT";
var authorization = "AWS " + u.settings.access_key + ":" + signature;
var blob = u.file.slice(start, end);

xhr.upload.addEventListener("progress", progress_handler, true);
xhr.addEventListener("readystatechange", handler, true);
xhr.addEventListener("error", error_handler, true);
xhr.addEventListener("timeout", error_handler, true);

xhr.open(method, u.settings.host + path, true);

xhr.setRequestHeader("x-amz-date", date);
xhr.setRequestHeader("Authorization", authorization);
xhr.setRequestHeader("Content-Type", u.settings.content_type);
xhr.setRequestHeader("Content-Disposition", "attachment; filename=" + u.file.name);

xhr.send(blob);

chunk_size is 6 MB. After a chunk finishes uploading, the next one follows, and so on. But sometimes (every 80 chunks or so), the PUT request fails with e.type == "error", e.target.status == 0 (which surprises me), and e.target.responseText == "". After a chunk fails, the code re-attempts the upload and gets the exact same error. When I refresh the page and continue the upload (with the same chunk!), it works like a charm, for another 80 chunks or so, until it gets stuck again. Here's how the request looks in Chrome dev tools:

[screenshot: the failed PUT request in Chrome dev tools]

Any ideas why this might happen, or how to debug something like this?

EDIT: Here is the OPTIONS response:

[screenshot: the OPTIONS response headers]

asked Sep 27 '12 by Gabi Purcaru

1 Answer

I finally found the cause by sniffing packets. There are two issues:

  1. For PUT requests that get a 4xx response (I didn't test other non-2xx responses), the xhr request returns as aborted (status = 0). I still haven't found an explanation for that; see Why does a PUT 403 show up as Aborted?

  2. Amazon S3 responded with a 403 RequestTimeTooSkewed error, because my signatures were generated once, when the upload started. After 15 minutes (the window that triggers the RequestTimeTooSkewed error) they expire, the request fails, and the signatures have to be regenerated. That 403 is never seen in the dev tools console or by the JS code because of the first problem.

After regenerating the signatures, everything works like a charm.
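A minimal sketch of the fix described above, in plain JavaScript: check the signature's age before each chunk PUT and re-sign when it approaches S3's 15-minute skew window. The helper and field names (signatureExpired, u.signed_at, regenerateSignature) are hypothetical, not from the original code.

```javascript
// S3 rejects requests whose x-amz-date is skewed more than 15 minutes.
var MAX_SIGNATURE_AGE_MS = 15 * 60 * 1000;

// Returns true when a signature created at signedAtMs should be regenerated.
// A one-minute safety margin keeps the request from expiring mid-flight.
function signatureExpired(signedAtMs, nowMs, maxAgeMs) {
  maxAgeMs = maxAgeMs || MAX_SIGNATURE_AGE_MS;
  var margin = 60 * 1000;
  return nowMs - signedAtMs >= maxAgeMs - margin;
}

// Hypothetical per-chunk flow, before building the PUT request:
//
// if (signatureExpired(u.signed_at, Date.now())) {
//   signature = regenerateSignature(u); // e.g. fetch a fresh one from your server
//   u.signed_at = Date.now();
// }
// xhr.setRequestHeader("Authorization",
//     "AWS " + u.settings.access_key + ":" + signature);
```

With this check in place, a long multipart upload never sends a part with a stale signature, so the RequestTimeTooSkewed 403 (masked as an aborted request by the first issue) never occurs.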

answered Sep 19 '22 by Gabi Purcaru