A simple and seemingly reliable way to do locking under bash is:
exec 9>>lockfile
flock 9
However, bash notoriously passes such a lock file descriptor on to every process it forks, including any programs it executes.
Is there any way to tell bash not to pass the fd on? It's great that the lock is attached to an fd that is released when the script terminates, no matter how it terminates.
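To illustrate the leak, here is a minimal sketch (on Linux; the sleep is just a stand-in for any long-running child):
#!/bin/bash
# Take the lock on fd 9; the fd stays open for the life of the script.
exec 9>>lockfile
flock 9
# Any child inherits fd 9, so the lock can outlive this script.
sleep 1000 &
# Listing the child's open fds shows lockfile still attached to fd 9.
ls -l /proc/$!/fd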
I know I can do stuff like:
run_some_prog 9>&-
But that is quite tedious.
Is there any better solution?
You can use the -o command-line option to flock(1) (long option --close, which may be preferable in scripts for its self-documenting name) to have flock(1) close the file descriptor holding the lock before it executes the command:
-o, --close
Close the file descriptor on which the lock is held
before executing command. This is useful if command
spawns a child process which should not be holding
the lock.
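For example, a minimal sketch (the lock path /tmp/mylock.lock and run_some_prog are placeholders):
# flock takes the lock, runs the command with the lock fd already closed,
# and releases the lock when the command exits; run_some_prog (and any
# children it spawns) never inherits the lock fd.
flock --close /tmp/mylock.lock run_some_prog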