
PUT to S3 with presigned URL gives 403 error

I'm using Node to get a presigned URL for S3 in order to PUT an image to an S3 bucket.

var aws = require('aws-sdk');
// Request presigned URL from S3
exports.S3presignedURL = function (req, res) {
  var s3 = new aws.S3();
  var params = {
    Bucket: process.env.S3_BUCKET, 
    Key: '123456', //TODO: create unique S3 key
    //ACL:'public-read',
    ContentType: req.body['Content-Type'], //'image/jpg'
  };
  s3.getSignedUrl('putObject', params, function(err, url) {
      if(err) console.log(err);
      res.json({url: url});
  });
};

This successfully retrieves a presigned URL of the form:

https://[my-bucket-name].s3.amazonaws.com/1233456?AWSAccessKeyId=[My-ID]&Expires=1517063526&Signature=95eA00KkJnJAgxdzNNafGJ6GRLc%3D (Do I have to include an expires header?)
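
As far as I can tell, the expiry is set at signing time rather than with a request header: the aws-sdk's getSignedUrl accepts an Expires value in seconds (it defaults to 900) inside the params object. A minimal sketch, reusing the params from above:

var params = {
  Bucket: process.env.S3_BUCKET,
  Key: '123456',
  ContentType: req.body['Content-Type'],
  Expires: 60 * 5 // URL stays valid for 5 minutes; no Expires header is needed on the PUT itself
};
s3.getSignedUrl('putObject', params, function (err, url) { /* as above */ });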

Back on the client side (web app) I use Angular to generate an HTTP request. I have used both $http and ngFileUpload, with a similar lack of success. Here is my ngFileUpload code.

Upload.upload({
    url: response.data.url, //S3 upload URL, including bucket name
    method: 'PUT',
    'Content-Type': file.type, //I have tried putting the Content-Type header all over
    headers: { 
        //'x-amz-acl':'public-read',
        'Content-Type': file.type, 
    }, 
    data: { 
        file: file,
        headers:{'Content-Type': file.type,}
    },                         
})

However, seemingly regardless of how I format my headers, I always get a 403 error. The XML of the error says:

<Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>

I don't think CORS is the issue. Originally I was getting some CORS errors, but they looked different, and I made them go away with some changes to the S3 bucket's CORS settings. I've done a lot of trial-and-error setting of the headers, both on the request for the presigned URL and on the PUT request to S3, but I can't seem to find the right combination.

I did notice that when I console.log the 403 response error, it contains the field

config.headers:{Content-Type: undefined, __setXHR_: ƒ, Accept: "application/json, text/plain, */*"}

Is this saying that the Content-Type header isn't set? How can that be, when I've set that header everywhere I can think of? Anyway, I've been banging my head against this wall for a bit...
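
One thing I can still try, to isolate the problem: bypass ngFileUpload entirely and PUT the raw file with the browser's fetch, using the exact Content-Type that was signed. (A debugging sketch; presignedUrl here is assumed to hold the URL returned by my Node endpoint.) If this succeeds, the signature is fine and the way the Angular request is built is the culprit.

// Debugging sketch: PUT the raw file body, no wrapper object, no FormData.
fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type }, // must match the signed ContentType
    body: file                              // the raw File object itself
}).then(function (res) {
    console.log('S3 responded with', res.status); // 200 means the signature matched
});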


EDIT: As requested, my current CORS configuration. (I threw everything in to get rid of the CORS warnings I had earlier; I will pare it down to the essentials once my uploads are working.)

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedOrigin>http://localhost:9500</AllowedOrigin>
    <AllowedOrigin>https://localhost:9500</AllowedOrigin>
    <AllowedOrigin>http://www.example.com</AllowedOrigin>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedOrigin>http://lvh.me:9500</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>ETag</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
    <AllowedHeader>Content-Type</AllowedHeader>
    <AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>
asked Jan 27 '18 by honkskillet


1 Answer

I faced the same issue. It turned out that the content type I used to create the pre-signed URL did not match the Content-Type of the object I was sending to S3. I would suggest you add an expiration when creating the pre-signed URL (I did too), and check in the browser console exactly what Content-Type is being sent when you do the PUT to S3. Also, the data just needs to be the file itself, not the struct you've created around it.
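
A minimal sketch of what that correction might look like with ng-file-upload (assuming response.data.url holds the presigned URL from your Node endpoint): use Upload.http, which sends the request body as-is, rather than Upload.upload, which builds a multipart/form-data payload that won't match the signed PUT.

// Sketch: raw PUT of the file body, so the Content-Type matches what was signed.
Upload.http({
    url: response.data.url,        // presigned S3 PUT URL
    method: 'PUT',
    headers: {
        'Content-Type': file.type  // must equal the ContentType used when signing
    },
    data: file                     // just the file, not a wrapper object
}).then(function () {
    console.log('upload succeeded');
}, function (err) {
    console.log('upload failed', err.status);
});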

answered Sep 22 '22 by Adheer Araokar