
AWS S3 large file reverse proxying with golang's http.ResponseWriter

I have a request handler named Download in which I want to fetch a large file from Amazon S3 and stream it to the user's browser. My goals are:

  • To record some request information before granting the user access to the file
  • To avoid buffering too much of the file in memory, since files may be very large.

Here is what I've explored so far:

func Download(w http.ResponseWriter, r *http.Request) {

    sess := session.New(&aws.Config{
        Region:             aws.String("eu-west-1"),
        Endpoint:           aws.String("s3-eu-west-1.amazonaws.com"),
        S3ForcePathStyle:   aws.Bool(true),
        Credentials:        cred,
    })

    downloader := s3manager.NewDownloader(sess)
    // I can't pass the ResponseWriter directly: Download needs an io.WriterAt,
    // because it fetches chunks concurrently and writes them at arbitrary offsets.
    // Besides, it doesn't seem like the right thing to do.
    _, err := downloader.Download(w, &s3.GetObjectInput{
        Bucket: aws.String(BUCKET),
        Key:    aws.String(filename),
    })
    if err != nil {
        log.Error(4, err.Error())
        return
    }

}

I'm wondering whether there's a better approach, given the goals I'm trying to achieve.

Any suggestions are welcome. Thank you in advance :-)

asked Dec 03 '22 by Sthe

2 Answers

If you do want to stream the file through your service (rather than have the client download it directly from S3, as the accepted answer recommends):

import (
    "fmt"
    "io"
    "log"
    "net/http"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func StreamDownloadHandler(w http.ResponseWriter, r *http.Request) {

    sess, awsSessErr := session.NewSession(&aws.Config{
        Region:      aws.String("eu-west-1"),
        Credentials: credentials.NewStaticCredentials("my-aws-id", "my-aws-secret", ""),
    })
    if awsSessErr != nil {
        http.Error(w, fmt.Sprintf("Error creating aws session %s", awsSessErr.Error()), http.StatusInternalServerError)
        return
    }

    result, err := s3.New(sess).GetObject(&s3.GetObjectInput{
        Bucket: aws.String("my-bucket"),
        Key:    aws.String("my-file-id"),
    })
    if err != nil {
        http.Error(w, fmt.Sprintf("Error getting file from s3 %s", err.Error()), http.StatusInternalServerError)
        return
    }
    // Close the object's body when we're done, or the connection leaks.
    defer result.Body.Close()

    w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", "my-file.csv"))
    w.Header().Set("Cache-Control", "no-store")

    // io.Copy streams in small fixed-size chunks, so the file is never fully buffered in memory.
    bytesWritten, copyErr := io.Copy(w, result.Body)
    if copyErr != nil {
        // By this point the status code, headers, and possibly part of the body
        // have already been sent, so http.Error can't help - just log the failure.
        log.Printf("Error copying file to the http response: %s", copyErr.Error())
        return
    }
    log.Printf("Download of %q complete. Wrote %d bytes.", "my-file.csv", bytesWritten)
}
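
If you'd rather keep using the s3manager Downloader from the question, a known workaround is to wrap the ResponseWriter in an adapter that satisfies io.WriterAt and force the parts to arrive in order. A minimal sketch, assuming the same session as above (sequentialWriterAt is an illustrative name, not part of the SDK):

    // sequentialWriterAt adapts an io.Writer to io.WriterAt by ignoring the
    // offset. That is only safe when parts are written strictly in order.
    type sequentialWriterAt struct {
        w io.Writer
    }

    func (s sequentialWriterAt) WriteAt(p []byte, off int64) (int, error) {
        return s.w.Write(p) // offset ignored - requires sequential parts
    }

    func streamWithDownloader(w http.ResponseWriter, sess *session.Session) error {
        // Concurrency 1 makes the downloader fetch and write parts in order,
        // which is what makes the offset-ignoring adapter above valid.
        downloader := s3manager.NewDownloader(sess, func(d *s3manager.Downloader) {
            d.Concurrency = 1
        })
        _, err := downloader.Download(sequentialWriterAt{w}, &s3.GetObjectInput{
            Bucket: aws.String("my-bucket"),
            Key:    aws.String("my-file-id"),
        })
        return err
    }

Note that Concurrency = 1 gives up the parallel part downloads that are the Downloader's main benefit, so the plain GetObject + io.Copy above is usually the simpler choice for proxying.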
answered Dec 05 '22 by Aidan Ewen

If the file is potentially large, you don't want it to pass through your own server. The best approach (in my opinion) is to have the user download it directly from S3.

You can do this by generating a presigned URL:

func Download(w http.ResponseWriter, r *http.Request) {

    ...

    sess := session.New(&aws.Config{
        Region:             aws.String("eu-west-1"),
        Endpoint:           aws.String("s3-eu-west-1.amazonaws.com"),
        S3ForcePathStyle:   aws.Bool(true),
        Credentials:        cred,
    })

    s3svc := s3.New(sess)
    // GetObjectRequest only builds the request; nothing is sent to S3 here.
    req, _ := s3svc.GetObjectRequest(&s3.GetObjectInput{
        Bucket: aws.String(BUCKET),
        Key:    aws.String(filename),
    })

    // Presign signs the request locally and returns the URL without calling S3.
    url, err := req.Presign(5 * time.Minute)
    if err != nil {
        http.Error(w, "could not presign request: "+err.Error(), http.StatusInternalServerError)
        return
    }

    http.Redirect(w, r, url, http.StatusTemporaryRedirect)
}

The presigned URL is only valid for a limited time (5 minutes in this example; adjust to your needs) and takes the user directly to S3, so the file never passes through your server. No need to worry about handling downloads yourself anymore!
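
A related tip: if you want the browser to save the file under a specific name, GetObjectInput has a ResponseContentDisposition field that S3 honours for presigned URLs. A small sketch (the filename is illustrative):

    req, _ := s3svc.GetObjectRequest(&s3.GetObjectInput{
        Bucket: aws.String(BUCKET),
        Key:    aws.String(filename),
        // S3 sends this back as the Content-Disposition header of the download.
        ResponseContentDisposition: aws.String(`attachment; filename="report.csv"`),
    })

The rest of the handler (Presign and Redirect) stays exactly the same.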

answered Dec 05 '22 by fl0cke