In the example code in this article, how does the last segment of the stream work on this line?

```js
fs.createReadStream(filePath).pipe(brotli()).pipe(res)
```

I understand that the first part reads the file and the second compresses it, but what is `.pipe(res)`? It seems to do the job I'd usually do with `res.send` or `res.sendFile`.
Full code†:

```js
const express = require('express')
const fs = require('fs')
const path = require('path')
const accepts = require('accepts')
const brotli = require('iltorb').compressStream

function onRequest (req, res) {
  res.setHeader('Content-Type', 'text/html')
  const fileName = req.params.fileName
  const filePath = path.resolve(__dirname, 'files', fileName)
  const encodings = new Set(accepts(req).encodings())
  if (encodings.has('br')) {
    res.setHeader('Content-Encoding', 'br')
    fs.createReadStream(filePath).pipe(brotli()).pipe(res)
  }
}

const app = express()
app.use('/files/:fileName', onRequest)
app.listen(5000)
```
`localhost:5000/files/test.txt` => browser displays the text contents of that file
† which I changed slightly to use Express, plus a few other minor things.
"How does simply piping the data to the response object render the data back to the client?"
The wording of "the response object" in the question could mean the asker is trying to understand why piping data from a stream to `res` does anything. The misconception is that `res` is just some object.

This is because all Express responses (`res`) inherit from `http.ServerResponse` (on this line), which is a writable `Stream`. Thus, whenever data is written to `res`, the written data is handled by `http.ServerResponse`, which internally sends it back to the client.
Internally, `res.send` actually just writes to the underlying stream it represents (itself), and `res.sendFile` actually pipes the data read from the file to itself.
In case the act of "piping" data from one stream to another is unclear, see the section at the bottom.
If, instead, the flow of data from file to client isn't clear to the asker, then here's a separate explanation.
I'd say the first step to understanding this line is to break it up into smaller, more understandable fragments:
First, `fs.createReadStream` is used to get a readable stream of a file's contents.
```js
const fileStream = fs.createReadStream(filePath);
```
Next, a transform stream that transforms data into a compressed format is created, and the data in the `fileStream` is "piped" (passed) into it.

```js
const compressionStream = brotli();
fileStream.pipe(compressionStream);
```
Finally, the data that passes through the `compressionStream` (the transform stream) is piped into the response, which is also a writable stream.

```js
compressionStream.pipe(res);
```
The process is quite simple when laid out visually:

```
fileStream → compressionStream → res
```

Following the flow of data is now easy: the data comes from a file, passes through a compressor, and finally reaches the response, which internally sends the data back to the client.
Wait, but how does the compression stream pipe into the response stream?

The answer is that `pipe` returns the destination stream. That means when you do `a.pipe(b)`, you'll get `b` back from the method call.

Take the line `a.pipe(b).pipe(c)` for example. First, `a.pipe(b)` is evaluated, returning `b`. Then, `.pipe(c)` is called on the result of `a.pipe(b)`, which is `b`, thus being equivalent to `b.pipe(c)`.
```js
a.pipe(b).pipe(c);

// is the same as
a.pipe(b); // returns `b`
b.pipe(c);

// is the same as
(a.pipe(b)).pipe(c);
```
The wording "simply piping the data to the response object" in the question could also mean the asker doesn't understand the flow of the data, thinking that the data goes directly from `a` to `c`. Instead, the above should clarify that the data goes from `a` to `b`, then `b` to `c`; `fileStream` to `compressionStream`, then `compressionStream` to `res`.
If the whole process still makes no sense, it might be beneficial to rewrite the process without the concept of streams:
First, the data is read from the file.

```js
const fileContents = fs.readFileSync(filePath);
```
The `fileContents` are then compressed. This is done using some `compress` function.

```js
function compress(data) {
  // ...
}

const compressedData = compress(fileContents);
```
Finally, the data is sent back to the client through the response `res`.

```js
res.send(compressedData);
```
The original line of code in the question and the above process are more or less the same, barring the inclusion of streams in the original.

The act of taking in data from an outside source (`fs.readFileSync`) is like a readable `Stream`. The act of transforming the data (`compress`) via a function is like a transform `Stream`. The act of sending the data to an outside source (`res.send`) is like a writable `Stream`.
If you're confused about how streams work, here's a simple analogy: each type of stream can be thought of in the context of water (data) flowing down the side of a mountain from a lake on the top: a readable stream is the lake at the top, a transform stream changes the water as it flows past, and a writable stream is the people at the bottom who consume it.

A convenient way of writing all data read from a readable stream directly to a writable stream is to just `pipe` it, which is like directly connecting the lake to the people.

```js
readable.pipe(writable); // easy & simple
```
This is in contrast to reading data from the readable stream, then manually writing it to the writable stream:

```js
// "pipe" data from a `readable` stream to a `writable` one.
readable.on('data', (chunk) => {
  writable.write(chunk);
});
readable.on('end', () => writable.end());
```
You might immediately question why Transform streams are the same as Duplex streams. The only difference between the two is how they're implemented: Transform streams implement a `_transform` function that's supposed to take in written data and return readable data, whereas a Duplex stream is simply both a Readable and a Writable stream, thus having to implement `_read` and `_write`.