
Node.js Stream API leak

While playing with Node streams I noticed that pretty much every tutorial teaches something like this:

// Get Google's home page.
require('http').get("http://www.google.com/", function(response) {
  // The callback provides the response readable stream.
  // Then, we open our output text stream.
  var outStream = require('fs').createWriteStream("out.txt");

  // Pipe the input to the output, which writes the file.
  response.pipe(outStream);
});

But this is a pretty dangerous piece of code, in my opinion. What happens if the stream throws an exception at some point? I think the file stream could leak memory, because according to the docs it is never closed.

Should I care? In my opinion, Node.js streams should handle situations...

asked Dec 08 '13 by Kr0e
2 Answers

To avoid the file descriptor leak, you also need:

var outStream = require('fs').createWriteStream("out.txt");

// Add this to ensure that the out.txt's file descriptor is closed in case of error.
response.on('error', function(err) {
  outStream.end();
});

// Pipe the input to the output, which writes the file.
response.pipe(outStream);
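The writable side can fail on its own as well (permissions, a full disk), and an unhandled 'error' event on outStream would crash the process, so a symmetric handler is a reasonable addition. This is a sketch, not part of the original answer:

// The file stream itself can emit 'error' (EACCES, ENOSPC, ...); without a
// listener, Node throws the error and the process dies. Stop reading the
// response when writing fails so the request doesn't dangle.
outStream.on('error', function (err) {
  response.destroy();
});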

Another, undocumented, method is outStream.destroy(), which closes the descriptor as well, but it seems that outStream.end() is preferred.
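If you do go the destroy() route, a minimal variant of the handler above (same outStream as in the snippet) could look like this:

response.on('error', function (err) {
  // destroy() closes the descriptor immediately and discards any writes
  // still buffered in the stream, whereas end() flushes them first.
  outStream.destroy();
});

For what it's worth, modern Node versions also ship require('stream').pipeline, which tears both streams down on error automatically, though it did not exist when this question was asked.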

answered Nov 15 '22 by DS.

Barring any bugs in Node's VM, if an exception interrupts the operation after the stream has been opened, I'd expect the VM to eventually detect during garbage collection that nothing refers to the stream anymore, collect it, and thereby dispose of the resources associated with it.

So I would not call it a "leak".

There can still be problems associated with not handling exceptions or not closing streams. For instance, on Unix-type systems, a stream that corresponds to a file on disk uses a file descriptor, and there is a limit on how many file descriptors a process can have open at one time. Consequently, if a process that does not explicitly close its streams leaves so many of them unclosed that it hits the file descriptor limit before the next garbage collection, it will crash.
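To make that concrete, here is a minimal sketch (hypothetical /tmp paths, synchronous opens for simplicity; holding the descriptors in an array stands in for streams that have not yet been garbage collected):

var fs = require('fs');

var fds = [];
try {
  // Open files forever without closing any; each open consumes one file
  // descriptor from the process's fixed budget.
  for (var i = 0; ; i++) {
    fds.push(fs.openSync('/tmp/leak-' + i + '.txt', 'w'));
  }
} catch (err) {
  // With a typical Unix default of `ulimit -n 1024`, this fails with
  // EMFILE ("too many open files") after roughly a thousand iterations.
  console.error('gave up after ' + fds.length + ' descriptors: ' + err.code);
}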

answered Nov 15 '22 by Louis