
Quick-and-dirty way to ensure only one instance of a shell script is running at a time


Use flock(1) to take an exclusive, scoped lock on a file descriptor. This way you can even synchronize different parts of the script.

#!/bin/bash

(
  # Wait for lock on /var/lock/.myscript.exclusivelock (fd 200) for 10 seconds
  flock -x -w 10 200 || exit 1

  # Do stuff

) 200>/var/lock/.myscript.exclusivelock

This ensures that code between ( and ) is run only by one process at a time and that the process doesn’t wait too long for a lock.

Caveat: this particular command is a part of util-linux. If you run an operating system other than Linux, it may or may not be available.
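A variant of the same idea, if you'd rather not wrap the critical section in a subshell, is to open the file descriptor with exec at the top of the script. A minimal sketch, using the same lock path as above:

#!/bin/bash

# Open FD 200 on the lock file for the lifetime of the script,
# then take a non-blocking exclusive lock; exit if another instance holds it.
exec 200>/var/lock/.myscript.exclusivelock
flock -n 200 || { echo "Already running." >&2; exit 1; }

# Do stuff; the lock is released when the script exits and FD 200 is closed.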


Naive approaches that test for the existence of "lock files" are flawed.

Why? Because they don't check whether the file exists and create it in a single atomic action. Because of this, there is a race condition that WILL make your attempt at mutual exclusion break.

Instead, you can use mkdir. mkdir creates a directory if it doesn't exist yet, and returns a non-zero exit status if it already does. More importantly, it does all this in a single atomic action, making it perfect for this scenario.

if ! mkdir /tmp/myscript.lock 2>/dev/null; then
    echo "Myscript is already running." >&2
    exit 1
fi
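Since the lock is just a directory, a crash before cleanup leaves a stale lock behind. A common complement (a sketch, reusing the lock path from above) is to remove it when the script exits:

# Arrange for cleanup on normal exit and on INT/TERM (but not kill -9).
trap 'rmdir /tmp/myscript.lock; exit' INT TERM EXIT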

For all details, see the excellent BashFAQ: http://mywiki.wooledge.org/BashFAQ/045

If you want to take care of stale locks, fuser(1) comes in handy. The only downside here is that the operation takes about a second, so it isn't instant.

Here's a function I wrote once that solves the problem using fuser:

#       mutex file
#
# Open a mutual exclusion lock on the file, unless another process already owns one.
#
# If the file is already locked by another process, the operation fails.
# This function defines a lock on a file as having a file descriptor open to the file.
# This function uses FD 9 to open a lock on the file.  To release the lock, close FD 9:
# exec 9>&-
#
mutex() {
    local file=$1 pid pids

    # Keep FD 9 open on the file; holding this descriptor constitutes the lock.
    exec 9>>"$file"
    # Ask fuser which processes have the file open (stderr silenced, FD 9
    # closed for the query itself).
    { pids=$(fuser -f "$file"); } 2>&- 9>&-
    for pid in $pids; do
        # Our own PID is expected; any other PID means the file is already locked.
        [[ $pid = $$ ]] && continue

        exec 9>&-
        return 1 # Locked by a pid.
    done
}

You can use it in a script like so:

mutex /var/run/myscript.lock || { echo "Already running." >&2; exit 1; }
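The lock is released automatically when the script exits and FD 9 is closed; to release it earlier, close the descriptor explicitly, as the header comment notes:

# ... critical section ...

# Release the lock before the script ends, if needed.
exec 9>&-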

If you don't care about portability (these solutions should work on pretty much any UNIX box), Linux's fuser(1) offers some additional options, and there is also flock(1).


Here's an implementation that uses a lockfile and echoes the PID into it. This protects against stale locks if the process is killed before removing the pidfile:

LOCKFILE=/tmp/lock.txt
if [ -e "${LOCKFILE}" ] && kill -0 "$(cat "${LOCKFILE}")" 2>/dev/null; then
    echo "already running"
    exit
fi

# make sure the lockfile is removed when we exit and then claim it
trap "rm -f ${LOCKFILE}; exit" INT TERM EXIT
echo $$ > "${LOCKFILE}"

# do stuff
sleep 1000

rm -f "${LOCKFILE}"

The trick here is kill -0, which doesn't deliver any signal but just checks whether a process with the given PID exists. Also, the call to trap ensures that the lockfile is removed even when your process is killed (except with kill -9).
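Note that the existence check and the subsequent echo $$ are two separate steps, so two instances starting at the same moment could still race. If that matters, a common variant (a sketch using the same lockfile name, not part of the answer above) makes creating and claiming the lockfile a single atomic step with noclobber:

LOCKFILE=/tmp/lock.txt
# With noclobber set, the redirection fails if the file already exists,
# so creating the lockfile and writing our PID is one atomic operation.
if ( set -o noclobber; echo $$ > "${LOCKFILE}" ) 2>/dev/null; then
    trap "rm -f ${LOCKFILE}; exit" INT TERM EXIT

    # do stuff

    rm -f "${LOCKFILE}"
    trap - INT TERM EXIT
else
    echo "already running (lock held by PID $(cat "${LOCKFILE}"))"
    exit
fi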


There's a wrapper around the flock(2) system call called, unimaginatively, flock(1). This makes it relatively easy to reliably obtain exclusive locks without worrying about cleanup etc. There are examples on the man page as to how to use it in a shell script.
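For example, a sketch of the non-blocking form, which runs a command only if no other instance holds the lock (the lock path and command name here are illustrative):

# Fails fast with a non-zero status if another process already holds the lock.
flock -n /var/lock/myscript.lock -c '/usr/local/bin/myscript.sh' \
    || echo "skipped: another instance is running" >&2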


To make locking reliable you need an atomic operation. Many of the proposals above are not atomic. The proposed lockfile(1) utility looks promising since, as its man page mentions, it is "NFS-resistant". If your OS does not support lockfile(1) and your solution has to work on NFS, you don't have many options...

NFSv2 has two atomic operations:

  • symlink
  • rename

With NFSv3 the create call is also atomic.

Directory operations are NOT atomic under NFSv2 and NFSv3 (please refer to the book 'NFS Illustrated' by Brent Callaghan, ISBN 0-201-32570-5; Brent is an NFS veteran at Sun).

Knowing this, you can implement spin-locks for files and directories (in shell):

lock current dir:

while ! ln -s . lock; do :; done

lock a file:

while ! ln -s ${f} ${f}.lock; do :; done

unlock current dir (assumption, the running process really acquired the lock):

mv lock deleteme && rm deleteme

unlock a file (assumption, the running process really acquired the lock):

mv ${f}.lock ${f}.deleteme && rm ${f}.deleteme

Removal is also not atomic either, hence the rename (which is atomic) comes first and the removal afterwards.

For the symlink and rename calls, both filenames have to reside on the same filesystem. My proposal: use only simple filenames (no paths) and put the file and its lock in the same directory.
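Putting the pieces together, a minimal sketch of lock/unlock helpers for a file (the function names are illustrative; error output from the spinning ln is suppressed and a short sleep avoids a hot loop):

# Spin until we manage to create the symlink; only one process can succeed at a time.
lockfile() {
    while ! ln -s "$1" "$1.lock" 2>/dev/null; do sleep 1; done
}

# Rename first (atomic), then remove the renamed symlink.
unlockfile() {
    mv "$1.lock" "$1.deleteme" && rm "$1.deleteme"
}

lockfile data.txt
# ... critical section ...
unlockfile data.txt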