I am trying to implement a script that waits for a specific message in a log file. Once the message is logged, I want the script to continue.
Here's what I am trying out with tail -f and grep -q:
# tail -f logfile | grep -q 'Message to continue'
The grep never quits, so the command waits forever even after 'Message to continue' is logged to the file. When I run this without -f, it seems to work fine.
tail -f will read a file and print lines that are appended later; it will not terminate on its own (unless a signal such as SIGTERM is sent). grep is not the blocking part here, tail -f is: grep will read from the pipe until it is closed, but it never is closed, because tail -f does not quit and keeps the pipe open.
A solution to your problem would probably be something like this (not tested, and very likely to perform badly):
tail -f logfile | while read -r line; do
    echo "$line" | grep -q 'find me to quit' && break
done
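A variant of this idea, assuming bash, avoids starting one grep per line by using the shell's own pattern match, and feeds the loop from a process substitution so the shell does not sit waiting for tail after the loop breaks (an untested sketch; 'find me to quit' stands in for whatever message you need):
while read -r line; do
    # match with the shell itself instead of forking grep for every line
    [[ $line == *'find me to quit'* ]] && break
done < <(tail -f logfile)   # use 'tail -n +1 -f logfile' if the message may already be in the file
The tail process itself lingers in the background until its next write fails with SIGPIPE, but the script carries on as soon as the message is seen.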
tail -f logfile | grep --max-count=1 -q 'Message to continue'
Admittedly, the whole pipeline only exits when tail picks up the next line after the match, not immediately on the matched one.
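One way around that caveat (a sketch, not tested everywhere) is to detach tail inside a subshell so the shell only has grep left to wait for:
( tail -f logfile & ) | grep -q 'Message to continue'
# grep exits on the first match; the orphaned tail dies with SIGPIPE
# the next time it tries to write to the now-closed pipe
Here the left-hand side of the pipeline is a subshell that exits immediately after launching tail, so once grep sees the message there is nothing left for the shell to wait for. If the file never grows again, the leftover tail simply lingers harmlessly.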
After some experimentation, I believe the problem is in the way that bash waits for all the processes in a pipeline to quit.

With a plain file 'qqq' of some 360 lines of C source (a variety of programs concatenated several times over), and using 'grep -q return', I observe:

tail -n 300 qqq | grep -q return
does exit almost at once.

tail -n 300 -f qqq | grep -q return
does not exit.

tail -n 300 -f qqq | strace -o grep.strace grep -q return
does not exit until interrupted. The grep.strace file ends with:
read(0, "#else\n#define _XOPEN_SOURCE 500\n"..., 32768) = 10152
close(1) = 0
exit_group(0) = ?
This leads me to think that grep had already exited before the interrupt killed tail; if it had been waiting for something, there would be an indication that it received a signal.
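Before the C experiment, the shell's behaviour can be seen in isolation with a trivial pipeline (a sketch; any pair of commands with these timings will do):
time sleep 5 | true
# 'true' exits immediately, yet 'time' reports about 5 seconds of real time,
# because the shell waits for every member of the pipeline, not just the last one
That is exactly the situation with tail -f | grep -q: grep may be long gone, but the shell still waits for tail.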
A simple program that simulates what the shell does, but without the waiting, indicates that things terminate.
#define _XOPEN_SOURCE 600
#include <stdlib.h>
#include <unistd.h>
#include <stdarg.h>
#include <errno.h>
#include <string.h>
#include <stdio.h>

/* Print an error message (plus errno details if set) and exit. */
static void err_error(const char *fmt, ...)
{
    int errnum = errno;
    va_list args;
    va_start(args, fmt);
    vfprintf(stderr, fmt, args);
    va_end(args);
    if (errnum != 0)
        fprintf(stderr, "%d: %s\n", errnum, strerror(errnum));
    exit(1);
}

int main(void)
{
    int p[2];
    if (pipe(p) != 0)
        err_error("Failed to create pipe\n");
    pid_t pid;
    if ((pid = fork()) < 0)
        err_error("Failed to fork\n");
    else if (pid == 0)
    {
        /* Child: the pipe's write end becomes stdout, then run tail -f. */
        char *tail[] = { "tail", "-f", "-n", "300", "qqq", 0 };
        dup2(p[1], 1);
        close(p[0]);
        close(p[1]);
        execvp(tail[0], tail);
        err_error("Failed to exec tail command\n");
    }
    else
    {
        /* Parent: the pipe's read end becomes stdin, then run grep -q.
           Note there is no wait(): the parent is simply replaced by grep. */
        char *grep[] = { "grep", "-q", "return", 0 };
        dup2(p[0], 0);
        close(p[0]);
        close(p[1]);
        execvp(grep[0], grep);
        err_error("Failed to exec grep command\n");
    }
    err_error("This can't happen!\n");
    return -1;
}
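To try it (assuming the source is saved as pipedemo.c, a name chosen here, and that the file qqq containing the word 'return' exists in the current directory):
cc -o pipedemo pipedemo.c
./pipedemo; echo "exit status: $?"
# returns promptly with status 0, because nothing waits for the tail child;
# the orphaned tail only goes away via SIGPIPE the next time qqq grows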
With a fixed-size file, tail -f isn't going to exit, so the shell (bash) seems to hang around.
tail -n 300 -f qqq | grep -q return
hung around, but when I used another terminal to add another 300 lines to the file qqq, the command exited. I interpret this as happening because grep had already exited, so when tail wrote the new data to the pipe, it received SIGPIPE and exited, and bash then recognized that all the processes in the pipeline were dead.

I observed the same behaviour with both ksh and bash, which suggests it is expected behaviour rather than a bug. Testing was on Linux (RHEL 5) on an x86_64 machine.
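A quick way to confirm the SIGPIPE interpretation from bash itself, once the pipeline has finally returned after the extra write (a sketch; the exact values are what I would expect on Linux, not something guaranteed):
echo "${PIPESTATUS[@]}"
# something like "141 0": grep exited 0 on its match, while tail's status 141
# is 128 + 13, i.e. it was killed by SIGPIPE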
I thought I'd post this as an answer since it explains why the command exits after a second write to the file:
touch xxx
tail -f xxx | grep -q 'Stop'
# (the pipeline above blocks this terminal; run the commands below from a second one)
ps -ef | grep 'grep -q'
# the grep process is there
echo "Stop" >> xxx
ps -ef | grep 'grep -q'
# the grep process actually DID exit
printf "\n" >> xxx
# the tail process exits, probably because it receives a signal when it
# tries to write to a closed pipe