 

fs.writeFileSync gives Error:UNKNOWN, correct way to make synchronous file write in nodejs

I have a NodeJS server application. I have this line of code for my logging:

fs.writeFileSync(__dirname + "/../../logs/download.html.xml", doc.toString());

Sometimes it works correctly, but under heavy load it gives this exception:

Error: UNKNOWN, unknown error 'download.html.xml'

PS: I've found a link here: http://www.daveeddy.com/2013/03/26/synchronous-file-io-in-nodejs/ The blogger explains that writeFileSync doesn't necessarily finish writing to disk by the time it returns. Is there a correct way to do this synchronously, i.e. without callbacks?

asked Sep 27 '15 by Stepan Yakovenko


2 Answers

When you call writeFileSync, Node opens a file descriptor for the file. The OS limits the number of file descriptors that can be open at any one time, so under heavy load you may actually be hitting that limit. One way to find out whether this is the case is to use ulimit and raise the limit.
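For example, you can inspect and raise the per-process open-file limit from the shell before starting the Node process. This is a sketch; the value 4096 is illustrative, and raising the soft limit above the hard limit requires root:

```shell
# Inspect the current limits on open file descriptors.
ulimit -Sn        # soft limit (what the process actually gets)
ulimit -Hn        # hard limit (the ceiling for the soft limit)

# Raise the soft limit for this shell and any processes it starts.
ulimit -n 4096
```

If the errors disappear after raising the limit, descriptor exhaustion was likely the cause.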

Another possibility is IO errors. For example, closing a file descriptor will fail if an IO error occurs.

answered Nov 14 '22 by rollingBalls


Since the OP seemed to appreciate my comment, I'm turning it into an answer in the hope that it will help other users.

It looks to me like the problem is that a lot of descriptors are being used up by the writeFileSync calls. The link posted in the question seems to confirm this idea too.

Actually, I'm still trying to figure out whether there is a way to know when the write has actually finished under the hood, not only from Node's point of view. Maybe this parameter can help, but I suspect it suffers from the same problem.

In any case, you can work around the problem by implementing a pool of writers with a queue onto which write requests are placed.

The upside is that the number of open descriptors can be kept under control. Not exactly, because of the problem mentioned in the link the OP posted, but at least you avoid exhausting all the resources of the system.

On the other hand, this solution tends to use far more memory, since documents are parked in the queue while they wait for an available worker.

It can be a suitable solution for bursts of requests separated in time, but it may not fit a constant, sustained load as well.

answered Nov 14 '22 by skypjack