I've got two Node.js processes running: one watches a directory and consumes files from it, and another that is responsible for writing files to given directories.
Typically they won't be operating on the same directory, but for an edge case I'm working on they will be.
It appears that the consuming app is grabbing the files before they are fully written, resulting in corrupt files.
Is there a way I can lock the file until the writing is complete? I've looked into the lockfile module, but unfortunately I don't believe it will work for this particular application.
The full code is far more than makes sense to put here, but the gist of it is this:
Listener:

- fs.writeFile is used to write the files.

Watcher:

- chokidar is used to track added files in each watched directory.
- fs.access is called to ensure we have access to the file; fs.access seems to be unfazed by the file still being written.
- fs.createReadStream is used to read the file, which is then sent to the server.
In this case the file is exported to the watched directory and then reimported by the watch process.
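To illustrate, the watcher side looks roughly like this (a minimal sketch rather than the actual code; the watched path is a placeholder):

const chokidar = require('chokidar');
const fs = require('fs');

chokidar.watch('/var/tmp/watched').on('add', (path) => {
  // fs.access only verifies permissions; it succeeds even while
  // another process is still writing the file.
  fs.access(path, fs.constants.R_OK, (err) => {
    if (err) return console.error(err);
    const stream = fs.createReadStream(path);
    // ... the stream is then sent to the server ...
  });
});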
I'd use proper-lockfile for this. You can specify a number of retries, or pass a retry config object to use an exponential backoff strategy. That way you can handle situations where two processes need to modify the same file at the same time.
Here's a simple example with some retry options:
const lockfile = require('proper-lockfile');
const Promise = require('bluebird');
const fs = require('fs-extra');
const crypto = require('crypto'); // random buffer contents

const retryOptions = {
  retries: {
    retries: 5,
    factor: 3,
    minTimeout: 1 * 1000,
    maxTimeout: 60 * 1000,
    randomize: true,
  },
};

let file;
let cleanup;

Promise.try(() => {
  file = '/var/tmp/file.txt';
  return fs.ensureFile(file); // fs-extra creates the file if needed
}).then(() => {
  // Acquire the lock, retrying with exponential backoff if it's held
  return lockfile.lock(file, retryOptions);
}).then((release) => {
  cleanup = release;

  let buffer = crypto.randomBytes(4);
  let stream = fs.createWriteStream(file, { flags: 'a', encoding: 'binary' });
  stream.write(buffer);
  stream.end();

  // Resolve only once the write has fully flushed
  return new Promise(function (resolve, reject) {
    stream.on('finish', () => resolve());
    stream.on('error', (err) => reject(err));
  });
}).then(() => {
  console.log('Finished!');
}).catch((err) => {
  console.error(err);
}).finally(() => {
  // Always release the lock, even on failure
  cleanup && cleanup();
});
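On the watcher side, the consumer can then acquire the same lock before reading, so it retries until the writer has released it. A minimal sketch, assuming the same file path and retry options as above:

const lockfile = require('proper-lockfile');
const fs = require('fs');

const file = '/var/tmp/file.txt';
const retryOptions = {
  retries: { retries: 5, factor: 3, minTimeout: 1000, maxTimeout: 60000, randomize: true },
};

lockfile.lock(file, retryOptions)
  .then((release) => {
    const stream = fs.createReadStream(file);
    // ... send the stream to the server here ...
    stream.on('close', () => release()); // unlock once the file is fully consumed
    stream.resume(); // placeholder: drain the stream so 'close' fires
  })
  .catch((err) => console.error(err));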