I'm proxying an S3 call through my Node.js server and want to tweak just a couple of the returned XML values before passing the response along. Apart from those tweaks, I'd like to preserve the rest of each response, e.g. the response headers.
I can of course gather the whole response first, parse the XML, transform it, and send it back, but for large responses that would be both slow and memory-intensive. Is there a way to achieve essentially a stream.pipe(), but with a transformation function applied along the way?
I've looked at sax-js, which can pipe but doesn't have any transform ability. Do I have to resort to listening to low-level parse events and generating and outputting the resulting XML myself?
I've also looked at libxmljs which has a "push parser" and a higher-level DOM API, but it looks like I'd again have to listen to low-level parse events myself, plus I'm not sure I can stream the resulting XML out as it's generated.
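For reference, here's roughly what I think the low-level sax-js approach would look like (an untested sketch; the substring pattern and the stdin/stdout plumbing are placeholders for the real proxy streams):

var sax = require('sax');

var saxStream = sax.createStream(true); // strict mode

// Re-serialize each parse event by hand, tweaking text nodes as they stream past.
saxStream.on('opentag', function (node) {
    var attrs = Object.keys(node.attributes).map(function (key) {
        return ' ' + key + '="' + node.attributes[key] + '"';
    }).join('');
    process.stdout.write('<' + node.name + attrs + '>');
});
saxStream.on('text', function (text) {
    // /substring-to-remove/ is a placeholder for the real substring.
    process.stdout.write(text.replace(/substring-to-remove/g, ''));
});
saxStream.on('closetag', function (name) {
    process.stdout.write('</' + name + '>');
});

process.stdin.pipe(saxStream); // in the real proxy this would be the S3 response

And that still ignores entity re-escaping, self-closing tags, CDATA, and so on, which is exactly the bookkeeping I'm hoping to avoid.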
Is there any easier way than either of these two approaches? Thanks!
P.S. The XML tweaks are simple: just removing a substring from some text elements.
In this case you can collect all the chunks together in a Transform stream, like this:
var stream = require('stream');

var data = '';
var tstream = new stream.Transform();

// Buffer each incoming chunk; nothing is pushed downstream yet.
tstream._transform = function (chunk, encoding, done) {
    data += chunk.toString();
    done();
};
Then do what you need in the _flush function, which is called once after all chunks have arrived:
// _flush runs once the source has ended, so the whole body is available here.
tstream._flush = function (done) {
    // e.g. strip a substring, per your P.S.; the pattern is a placeholder
    this.push(data.replace(/substring-to-remove/g, ''));
    done();
};
So all together it can look like this:
req.pipe(anotherstream).pipe(tstream).pipe(response);
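For example, here is a complete, self-contained sketch of the proxy (the S3 host and the /substring-to-remove/ pattern are placeholders for your own values):

var http = require('http');
var https = require('https');
var stream = require('stream');

http.createServer(function (req, response) {
    // Placeholder S3 host; substitute your real upstream request.
    https.get('https://example-bucket.s3.amazonaws.com' + req.url, function (s3res) {
        var data = '';
        var tstream = new stream.Transform();

        // Buffer the entire upstream body.
        tstream._transform = function (chunk, encoding, done) {
            data += chunk.toString();
            done();
        };

        // Rewrite it once upstream has ended, then push it downstream.
        tstream._flush = function (done) {
            this.push(data.replace(/substring-to-remove/g, '')); // placeholder pattern
            done();
        };

        // Preserve status and headers, minus Content-Length since the body length may change.
        var headers = s3res.headers;
        delete headers['content-length'];
        response.writeHead(s3res.statusCode, headers);

        s3res.pipe(tstream).pipe(response);
    });
}).listen(8080);

Buffering everything in _flush keeps the code simple, but as you noted it holds the whole body in memory, so it's only a good fit when the responses are reasonably small.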