I want to upload (stream) a file and control the writing process, but writable.write() always returns false, and with large files the upload stalls. The output of the code is as follows:
> node app.js
> false
> false
> false
> false
What am I doing wrong?
My code:
app.js
var http = require('http');
var fs = require('fs');

http.createServer(function(req, res){
    var readable = fs.createReadStream('read.mkv');
    var writable = fs.createWriteStream('write.mkv');

    readable.on('data', function(chunk){
        var buffer = writable.write(chunk);
        if(!buffer){ // ----> Always false! Why????
            readable.pause();
        }
        console.log(buffer);
    });

    writable.on('drain', function(){
        readable.resume();
    });
}).listen(8090);
I have modified your program to show more information about what is happening:
'use strict';
const fs = require('fs');

const readable = fs.createReadStream('read.mkv');
const writable = fs.createWriteStream('write.mkv');

readable.on('data', function(chunk){
    var buffer = writable.write(chunk);
    if(!buffer){ // ----> Always false! Why????
        readable.pause();
    }
    console.log(buffer, chunk.length);
});

writable.on('drain', function(){
    readable.resume();
    console.log('drain');
});
Output:
$ node blah.js
false 65536
drain
false 65536
drain
false 65536
drain
true 8192
I also used a different-sized file as the input, so I have one true at the end of my output. If I increased my read.mkv's size by, e.g., 10000 bytes, the last line would read false 18192.
What is happening is that each chunk returned by read() is large enough to push the write stream past its highWaterMark, which defaults to 16384 bytes for the stream returned by fs.createWriteStream. As you can see from the numbers in the output, each read() (err, each 'data' event) produces 65536 bytes except for the last one. Since writing that much data to writable pushes it past its highWaterMark, the stream advises you to wait for 'drain' before writing more.
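If you want those numbers to line up differently, both fs.createReadStream and fs.createWriteStream accept a highWaterMark option. Just as a sketch (the sizes below are arbitrary examples, not a recommendation):

'use strict';
const fs = require('fs');

// Smaller read chunks and a larger write buffer; values chosen only to illustrate the option.
const readable = fs.createReadStream('read.mkv', { highWaterMark: 16 * 1024 });
const writable = fs.createWriteStream('write.mkv', { highWaterMark: 1024 * 1024 });

Tuning these only changes when false shows up; it does not remove the need to handle backpressure.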
So, simply, you always see false because the readable stream produces such large chunks. I would expect that no longer seeing any logs indicates the transfer has finished, but you would really need to register .on('end') and .on('error') handlers to know for sure.
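For completeness, here is roughly what those listeners might look like if you added them to the modified program above (a minimal sketch; it assumes the readable and writable variables from that program):

readable.on('end', function(){
    writable.end();                      // flush and close the destination
    console.log('read finished');
});
readable.on('error', function(err){
    console.error('read error', err);
});
writable.on('error', function(err){
    console.error('write error', err);
});
writable.on('finish', function(){
    console.log('all data flushed to write.mkv');
});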
For a simple case like this, it is really better to just use readable.pipe(), like:
readable.pipe(writable);
This will automatically handle 'drain' for you. It will even call writable.end() for you appropriately.
Note that pipe() will not call writable.end() if it encounters a read or write error. If you have a long-running process that needs to be resilient against stream errors, you need to handle the errors and close the streams yourself, to prevent handle leaks if the program runs long enough to hit the file descriptor limit.
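If your Node version ships stream.pipeline (Node 10+), it takes care of exactly that: it wires the streams together and destroys all of them if any one of them errors. A sketch of the same copy using it:

'use strict';
const fs = require('fs');
const { pipeline } = require('stream');

pipeline(
    fs.createReadStream('read.mkv'),
    fs.createWriteStream('write.mkv'),
    function(err){
        if(err){
            console.error('copy failed', err);  // pipeline has already destroyed both streams
        } else {
            console.log('copy finished');
        }
    }
);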
What false means
Streams enable programs to scale to large amounts of data by processing it a chunk at a time rather than loading it all into memory. Streams may be organized into pipelines representing various transformations on the data before it is finally written out. When write() returns false, it is signaling that it has received enough data to keep it busy for a while. If you keep sending it chunks, it will continue accepting them, but its backlog of data will grow and consume more and more memory. If you ignore the return value and keep feeding it data from a very large source, you may even cause the program to exhaust its address space and crash or get stuck. To keep your code scalable, you should respect the false return and wait for 'drain', as you did in your code.
However, false does not mean anything bad has happened or that there is any error. In any scenario where the source stream is faster than the destination stream, this is expected to happen, and it is how the streams API keeps things safe.
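To make that pattern concrete, here is a minimal sketch of a write loop that respects the false return and waits for 'drain' (writeMany and its inputs are made-up names for illustration):

'use strict';
const fs = require('fs');

function writeMany(writable, chunks, done){
    let i = 0;
    (function writeNext(){
        while(i < chunks.length){
            const ok = writable.write(chunks[i++]);
            if(!ok){
                // Back off until the internal buffer empties, then continue.
                writable.once('drain', writeNext);
                return;
            }
        }
        writable.end(done);   // flush remaining data, then call done
    })();
}

const writable = fs.createWriteStream('write.mkv');
const chunks = Array.from({ length: 1000 }, () => Buffer.alloc(65536));
writeMany(writable, chunks, function(){
    console.log('all chunks written');
});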