Consider the following two programs:
// works.js
var buffer = new ArrayBuffer(16777216);
var HEAP8 = new Int8Array(buffer);
and
// fails.js
var HEAP8;
var buffer = new ArrayBuffer(16777216);
HEAP8 = new Int8Array(buffer);
Run node, then enter .load works.js. Everything seems happy. Now exit node.
Run node again and enter .load fails.js.
On my machine, loading fails.js interactively causes the node process to consume over 1 GB of RAM and 100% CPU, and the final statement blocks the interpreter forever. works.js just runs and returns as expected.
Is there some good reason for the difference? Or should I be writing a bug report?
Curiously enough, running either script directly from the command line works fine:
node works.js # exits normally
node fails.js # exits normally
UPDATE: I'm using node 0.12.0 on OS X 10.9.5
I doubt there is a good reason for this; you should file a bug.