I want to stream a (large) multipart/form-data file upload directly to AWS S3 with the smallest possible memory and disk footprint. How can I achieve this? Resources online only explain how to upload a file and store it locally on the server.
If you want to use S3 as mounted storage for personal computers, one option is to run an EC2 instance as an NFS server backed by S3. AWS provides a managed appliance for this, AWS Storage Gateway, but you will of course have to pay for running it 24/7.
You can stream a file from Amazon S3 to the client through your server without downloading it to the server first: open a stream to the Amazon S3 object, then read from it and write to the client stream, buffer by buffer. In Go, io.Copy does exactly this.
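For illustration (not part of the original answer), here is a minimal sketch of that approach with aws-sdk-go; the region, bucket name, key, and port are placeholder assumptions:

package main

import (
	"io"
	"log"
	"net/http"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := s3.New(sess)

	http.HandleFunc("/download", func(w http.ResponseWriter, r *http.Request) {
		// GetObject returns the object body as an io.ReadCloser; nothing
		// is written to local disk.
		out, err := svc.GetObject(&s3.GetObjectInput{
			Bucket: aws.String("myBucket"),      // assumed bucket name
			Key:    aws.String("file_name.zip"), // assumed key
		})
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer out.Body.Close()
		// Copy buffer by buffer; only io.Copy's small internal buffer
		// is ever held in memory.
		if _, err := io.Copy(w, out.Body); err != nil {
			log.Println("copy aborted:", err)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}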
The Amazon S3 service is used for file storage, where you can upload or remove files. You can trigger AWS Lambda on S3 whenever there are file uploads in an S3 bucket. An AWS Lambda function has a handler that acts as its entry point; the handler receives the details of the event.
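As a sketch of what such a handler looks like in Go (my own addition, using the official github.com/aws/aws-lambda-go library):

package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler is the Lambda entry point; it receives the S3 event details
// for each object uploaded to the bucket the trigger is configured on.
func handler(ctx context.Context, e events.S3Event) error {
	for _, record := range e.Records {
		fmt.Printf("new object: s3://%s/%s\n",
			record.S3.Bucket.Name, record.S3.Object.Key)
	}
	return nil
}

func main() {
	lambda.Start(handler)
}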
You can use the upload manager (s3manager) to stream the file and upload it; you can read the comments in its source code for details. You can also configure parameters such as the part size, concurrency, and max upload parts. Below is a sample for reference.
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

var filename = "file_name.zip"
var myBucket = "myBucket"
var myKey = "file_name.zip"
var accessKey = ""
var accessSecret = ""

func main() {
	var awsConfig *aws.Config
	if accessKey == "" || accessSecret == "" {
		// load credentials from the default chain (env vars, shared config, IAM role)
		awsConfig = &aws.Config{
			Region: aws.String("us-west-2"),
		}
	} else {
		awsConfig = &aws.Config{
			Region:      aws.String("us-west-2"),
			Credentials: credentials.NewStaticCredentials(accessKey, accessSecret, ""),
		}
	}

	// The session the S3 Uploader will use
	sess := session.Must(session.NewSession(awsConfig))

	// Create an uploader with the session and default options:
	// uploader := s3manager.NewUploader(sess)

	// Create an uploader with the session and custom options
	uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.PartSize = 5 * 1024 * 1024 // 5MB is both the default and the minimum allowed part size
		u.Concurrency = 2            // default is 5
	})

	// open the file
	f, err := os.Open(filename)
	if err != nil {
		fmt.Printf("failed to open file %q, %v\n", filename, err)
		return
	}
	defer f.Close()

	// Upload the file to S3. The uploader reads the file part by part,
	// so the whole file is never held in memory at once.
	result, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String(myBucket),
		Key:    aws.String(myKey),
		Body:   f,
	})
	// in case it fails to upload
	if err != nil {
		fmt.Printf("failed to upload file, %v\n", err)
		return
	}
	fmt.Printf("file uploaded to, %s\n", result.Location)
}
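Because UploadInput.Body only needs an io.Reader, you can address the original question directly by handing the multipart part from the incoming request straight to the uploader, so neither the whole file nor a temp file ever lands on the server. A minimal sketch (my own addition, reusing the same s3manager API; the bucket name, route, and port are assumptions):

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	uploader := s3manager.NewUploader(sess)

	http.HandleFunc("/upload", func(w http.ResponseWriter, r *http.Request) {
		// MultipartReader streams the request body, unlike
		// ParseMultipartForm, which buffers it in memory and temp files.
		mr, err := r.MultipartReader()
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		for {
			part, err := mr.NextPart()
			if err != nil { // io.EOF once all parts are consumed
				break
			}
			if part.FileName() == "" {
				continue // skip non-file form fields
			}
			// part is an io.Reader, so the uploader consumes it in
			// 5MB chunks without ever buffering the whole file.
			result, err := uploader.Upload(&s3manager.UploadInput{
				Bucket: aws.String("myBucket"), // assumed bucket name
				Key:    aws.String(part.FileName()),
				Body:   part,
			})
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			fmt.Fprintf(w, "uploaded to %s\n", result.Location)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}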
You can also do this using minio-go:

n, err := s3Client.PutObject("bucket-name", "objectName", object, size, "application/octet-stream")

PutObject() automatically does a multipart upload internally for large objects.
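The exact PutObject signature varies between minio-go versions; here is a hedged sketch against minio-go v6, where passing -1 as the size streams the reader via multipart upload (the endpoint, credentials, bucket, and file name are placeholders):

package main

import (
	"log"
	"os"

	minio "github.com/minio/minio-go/v6"
)

func main() {
	client, err := minio.New("s3.amazonaws.com", "ACCESS_KEY", "SECRET_KEY", true)
	if err != nil {
		log.Fatal(err)
	}
	f, err := os.Open("file_name.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// A size of -1 tells minio-go the length is unknown, so it streams
	// the reader with a multipart upload instead of buffering it.
	n, err := client.PutObject("bucket-name", "objectName", f, -1,
		minio.PutObjectOptions{ContentType: "application/octet-stream"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("uploaded %d bytes", n)
}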