What is the correct usage of _writev() in node.js?
The documentation says:

If a stream implementation is capable of processing multiple chunks of data at once, the writable._writev() method should be implemented.
It also says:

The primary intent of writable.cork() is to avoid a situation where writing many small chunks of data to a stream causes a backup in the internal buffer that would have an adverse impact on performance. In such situations, implementations that implement the writable._writev() method can perform buffered writes in a more optimized manner.
From a stream implementation perspective this is fine. But from a writable stream consumer's perspective, the only way _write() or _writev() gets invoked is through writable.write() and writable.cork(). I would like to see a small example depicting a practical use case for implementing _writev().
A writev method can be added to the instance, in addition to write, and if the stream's buffer contains several chunks that method will be picked instead of write. For example, Elasticsearch allows you to bulk-insert records; so if you are creating a Writable stream to wrap Elasticsearch, it makes sense to have a writev method perform a single bulk insert rather than several individual ones, which is far more efficient. The same holds true for MongoDB, for example, and so on.
This post (not mine) shows an Elasticsearch implementation https://medium.com/@mark.birbeck/using-writev-to-create-a-fast-writable-stream-for-elasticsearch-ac69bd010802
_writev() will be invoked when uncork() is called, provided more than one chunk is buffered. There is a simple example in the Node documentation:
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
See also:
https://nodejs.org/api/stream.html#stream_writable_uncork
https://github.com/nodejs/node/blob/master/lib/_stream_writable.js#L257