
Node.js fs.open() hangs after trying to open more than 4 named pipes (FIFOs)

I have a Node.js process that needs to read from multiple named pipes (FIFOs), fed by several other processes, as an IPC method.

I realized that after opening and creating read streams from more than four FIFOs, fs seems no longer able to open any further FIFOs and simply hangs.

This number seems rather low, considering that it is possible to open thousands of regular files concurrently without trouble (for instance by replacing mkfifo with touch in the following script).

I tested with Node.js v10.1.0 on macOS 10.13 and with Node.js v8.9.3 on Ubuntu 16.04, with the same result.


The faulty script

Here is a script that reproduces the behavior:

var fs = require("fs");
var net = require("net");
var child_process = require('child_process');

// Generate a random 32-character hexadecimal identifier for the FIFO name.
var uuid = function() {
    for (var i = 0, str = ""; i < 32; i++) {
        var number = Math.floor(Math.random() * 16);
        str += number.toString(16);
    }
    return str;
};

function setupNamedPipe(cb) {
    var id = uuid();
    var fifoPath = "/tmp/tmpfifo/" + id;

    // Create the FIFO, then open it for reading and writing.
    child_process.exec("mkfifo " + fifoPath, function(error, stdout, stderr) {
        if (error) {
            return console.error("mkfifo failed", error);
        }

        fs.open(fifoPath, 'r+', function(error, fd) {
            if (error) {
                return console.error("open failed", error);
            }

            // Wrap the raw file descriptor in a read stream.
            var stream = fs.createReadStream(null, {
                fd
            });
            stream.on('data', function(data) {
                console.log("FIFO data", data.toString());
            });
            stream.on("close", function(){
                console.log("close");
            });
            stream.on("error", function(error){
                console.log("error", error);
            });

            console.log("OK");
            cb();
        });
    });
}

var i = 0;
function loop() {
    ++i;
    console.log("Open ", i);
    // Keep opening FIFOs, one after another.
    setupNamedPipe(loop);
}

// Make sure the directory holding the FIFOs exists, then start the loop.
child_process.exec("mkdir -p /tmp/tmpfifo/", function(error, stdout, stderr) {
    if (error) {
        return console.error("mkdir failed", error);
    }

    loop();
});
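When run, the script prints an Open/OK pair for each successfully opened FIFO; after the fourth one, the next fs.open() call never invokes its callback and the script hangs.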

This script doesn't clean up after itself; don't forget to rm -r /tmp/tmpfifo.

Repl.it link


NOTE: the following part of this question describes what I have already tried while investigating, but it might not be central to the question itself.


Two interesting facts about this script

  • when writing twice into one of the FIFOs (i.e. echo hello > fifo), Node is then able to open one more FIFO, but no longer receives data from the one that was written to
  • when the read stream is created by directly providing the path to the FIFO (instead of an fd), the script no longer blocks, but apparently also no longer receives anything written to any of the FIFOs (see the sketch below)
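For reference, the path-based variant from the second point looks like the following sketch. The path is a placeholder, and this only shows the shape of the call, not a working fix (as noted, no data seems to arrive this way):

var fs = require("fs");

// Placeholder path for illustration; in the script above this would be
// "/tmp/tmpfifo/" + id, and the FIFO must already exist.
var fifoPath = "/tmp/tmpfifo/test";

// Passing the path directly lets createReadStream perform the open() itself.
// The loop no longer blocks, but no 'data' events ever seem to fire.
var stream = fs.createReadStream(fifoPath);
stream.on("data", function(data) {
    console.log("FIFO data", data.toString());
});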

Debug information

I then tried to verify whether this could be related to some OS limit, for instance the number of open file descriptors.

Output of ulimit -a on the Mac:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1418
virtual memory          (kbytes, -v) unlimited

Nothing points to a limit of 4.


C++ attempt

I then tried to write a similar script in C++. In C++, the script successfully opens a hundred FIFOs.

Note that there are a few differences between the two implementations. In the C++ one,

  • the script only opens the FIFOs,
  • there is no attempt at reading,
  • and there is no multithreading.

#include <string>
#include <cstring>
#include <sys/stat.h>
#include <fcntl.h>
#include <iostream>

int main(int argc, char** argv)
{

    for (int i = 0; i < 100; i++) {
        std::string filePath = "/tmp/tmpfifo/" + std::to_string(i);
        // open() returns a non-negative file descriptor on success, -1 on failure.
        auto fd = open(filePath.c_str(), O_RDWR);
        std::cout << filePath << " " << fd << std::endl;
    }

    return 0;
}

As a side note, the FIFOs need to be created before executing the script, for instance with:

for i in $(seq 0 100); do mkfifo /tmp/tmpfifo/$i; done
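In case it helps reproduction: since the snippet uses std::to_string, it needs C++11 or later, e.g. g++ -std=c++11 fifo_open.cpp -o fifo_open (the file name is arbitrary). Each line of output then shows the path followed by the file descriptor returned by open(), or -1 on failure.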


Potential Node.js related issue

After a bit of searching, the problem also seems to be linked to this issue on the Node.js GitHub:

https://github.com/nodejs/node/issues/1941.

But people there seem to be complaining about the opposite behavior (fs.open() throwing EMFILE errors rather than hanging silently...).


As you can see, I have searched in many directions, and all of this leads me to my question:

Do you know what could cause this behavior?

Thank you

asked Oct 02 '18 by Sami



1 Answer

So I asked the question on the Node.js GitHub: https://github.com/nodejs/node/issues/23220

From the solution:

Dealing with FIFOs is currently a bit tricky.

The open() system call blocks on FIFOs by default until the other side of the pipe has been opened as well. Because Node.js uses a threadpool for file-system operations, opening multiple pipes where the open() calls don’t finish exhausts this threadpool.

The solution is to open the file in non-blocking mode, but that has the difficulty that the other fs calls aren’t built with non-blocking file descriptors in mind; net.Socket is, however.
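(As an aside, this also explains the magic number four: libuv's threadpool defaults to 4 threads, and its size can be changed with the UV_THREADPOOL_SIZE environment variable. If this diagnosis is right, running the reproduction script as UV_THREADPOOL_SIZE=8 node script.js should let it open roughly eight FIFOs before hanging.)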

So, the solution would look something like this:

fs.open('path/to/fifo/', fs.constants.O_RDONLY | fs.constants.O_NONBLOCK, (err, fd) => {
  // Handle err
  const pipe = new net.Socket({ fd });
  // Now `pipe` is a stream that can be used for reading from the FIFO.
});
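For completeness, here is a slightly fuller sketch along the lines of that answer. The path is a placeholder, and note that readable: true must be passed when wrapping an existing fd in a net.Socket, otherwise reads on it are not permitted:

var fs = require("fs");
var net = require("net");

// Placeholder path; the FIFO must already exist (mkfifo /tmp/tmpfifo/test).
var fifoPath = "/tmp/tmpfifo/test";

// O_NONBLOCK makes open() return immediately instead of waiting for a
// writer, so no threadpool thread gets stuck.
fs.open(fifoPath, fs.constants.O_RDONLY | fs.constants.O_NONBLOCK, function(error, fd) {
    if (error) {
        return console.error("open failed", error);
    }

    // net.Socket can handle non-blocking descriptors; `readable: true` is
    // required to allow reading from an fd passed in this way.
    var pipe = new net.Socket({ fd: fd, readable: true });

    pipe.on("data", function(data) {
        console.log("FIFO data", data.toString());
    });
    pipe.on("error", function(error) {
        console.error("error", error);
    });
});

With this approach, none of the open() calls block, so opening many FIFOs in a loop should no longer exhaust the threadpool.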
answered Sep 24 '22 by Sami