
How to pipe an archive (zip) to an S3 bucket

I'm a bit confused about how to proceed. I am using Archiver (a Node.js module) to write data to a zip file. Currently, my code works when I write to a local file:

var fs = require('fs');
var archiver = require('archiver');

var output = fs.createWriteStream(__dirname + '/example.zip');
var archive = archiver('zip', {
    zlib: { level: 9 } // compression level
});

archive.pipe(output);
archive.append(mybuffer, {name: 'msg001.txt'});
archive.finalize(); // complete the archive

I'd like to modify the code so that the archive's target is an object in an AWS S3 bucket. Looking at the AWS SDK examples, I can specify the bucket name and key (and body) either in the upload parameters or when I create the S3 object, as in:

var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myMsgArchive.zip', Body: myStream};
s3.upload(params, function(err, data) {
    // …
});

Or 

var s3 = new AWS.S3({ params: {Bucket: 'myBucket', Key: 'myMsgArchive.zip'} });
s3.upload({Body: myStream})
    .send(function(err, data) {
        // …
    });

With regard to my S3 examples, myStream appears to be a readable stream, and I am confused as to how to make this work, since archive.pipe requires a writable stream. Is this something where we need to use a pass-through stream? I've found an example where someone created a pass-through stream, but it is too terse for me to gain a proper understanding from. The specific example I am referring to is:

Pipe a stream to s3.upload()
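Based on that example, here is my rough guess at a minimal adaptation (untested; mybuffer and the bucket/key names are placeholders from my snippets above):

var stream = require('stream');
var AWS = require('aws-sdk');
var archiver = require('archiver');

var s3 = new AWS.S3();

// A PassThrough stream is both writable (so the archive can pipe into it)
// and readable (so s3.upload can consume it as the Body)
var pass = new stream.PassThrough();

var archive = archiver('zip', { zlib: { level: 9 } });
archive.pipe(pass);

s3.upload({Bucket: 'myBucket', Key: 'myMsgArchive.zip', Body: pass}, function(err, data) {
    // called once the archive stream ends and the upload finishes
});

archive.append(mybuffer, {name: 'msg001.txt'});
archive.finalize();

Is this the right idea?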

Any help someone can give me would be greatly appreciated. Thanks.

asked Aug 20 '18 by Peter Hiross


1 Answer

This could be useful for anyone else wondering how to make the pipe approach work.

Since you correctly referenced the example using the pass-through stream, here's my working code:

1 - The routine itself, zipping files with node-archiver

const AWS = require('aws-sdk')
const archiver = require('archiver')
const stream = require('stream')

const s3client = new AWS.S3()

// yourBucket and s3Folder are assumed to come from your configuration

exports.downloadFromS3AndZipToS3 = () => {
  // These are the input files I want to read from S3 and add to the ZIP

  const files = [
    `${s3Folder}/myFile.pdf`,
    `${s3Folder}/anotherFile.xml`
  ]

  // In case you want to rename them, as they should have different names in the final ZIP

  const fileNames = [
    'finalPDFName.pdf',
    'finalXMLName.xml'
  ]

  // Use promises to get them all

  const promises = files.map((file) => s3client.getObject({
    Bucket: yourBucket,
    Key: file
  }).promise())

  // Define the ZIP target archive

  let archive = archiver('zip', {
    zlib: { level: 9 } // Sets the compression level.
  })

  // Pipe!

  archive.pipe(uploadFromStream(s3client, 'someDestinationFolderPathOnS3', 'zipFileName.zip'))

  archive.on('warning', function(err) {
    if (err.code === 'ENOENT') {
      // log warning
    } else {
      // throw error
      throw err;
    }
  })

  // Good practice to catch this error explicitly
  archive.on('error', function(err) {
    throw err;
  })

  // The actual archive is populated here 

  return Promise
    .all(promises)
    .then((data) => {
      data.forEach((thisFile, index) => {
        archive.append(thisFile.Body, { name: fileNames[index] })
      })

      // Ending the archive also ends the PassThrough, which completes the upload
      archive.finalize()
    })
}

2 - The helper method

const uploadFromStream = (s3client, destinationFolder, zipFileName) => {
  // A PassThrough stream is writable (the archive pipes into it)
  // and readable (s3.upload consumes it as the request Body)
  const pass = new stream.PassThrough()

  const s3params = {
    Bucket: yourBucket,
    Key: `${destinationFolder}/${zipFileName}`,
    Body: pass,
    ContentType: 'application/zip'
  }

  s3client.upload(s3params, (err, data) => {
    if (err)
      console.log(err)

    if (data)
      console.log('Success')
  })

  return pass
}
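Note that s3client.upload starts reading from the PassThrough as soon as the archive writes into it, so the upload runs concurrently with the zipping, and the upload callback only fires after archive.finalize() ends the stream. One caveat: the promise returned by downloadFromS3AndZipToS3 resolves once the archive is finalized, not once the upload callback has run. A hypothetical caller (the module path here is an assumption) could look like this:

// Hypothetical usage; assumes the code above lives in ./zipAndUpload.js
const { downloadFromS3AndZipToS3 } = require('./zipAndUpload')

downloadFromS3AndZipToS3()
  .then(() => console.log('Archive finalized; the upload completes in the helper callback'))
  .catch((err) => console.error('Zipping failed', err))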
answered Sep 28 '22 by Carlo Mallone