The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed. This can produce unexpected results, especially with debugging output.
Line buffering - characters are transmitted to the system as a block when a new-line character is encountered. Line buffering is meaningful only for text streams and UNIX file system files. Full buffering - characters are transmitted to the system as a block when a buffer is filled.
The tee utility shall copy standard input to standard output, making a copy in zero or more files. The tee utility shall not buffer output.
The main reason why buffering exists is to amortize the cost of these system calls. This is primarily important when the program is doing a lot of these write calls, as the amortization is only effective when the system call overhead is a significant percentage of the program's time.
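To see the effect, here is a hypothetical stand-in for the `./a` used in the answers below (`python3` is used only because its output, like a stdio program's, is block-buffered when piped):

```shell
# A producer that prints one line per second.  With stdout on a
# terminal each line appears as it is printed; piped into `cat`,
# stdout is fully buffered, so all three lines typically show up
# together only when the producer exits.
python3 -c '
import time
for i in range(3):
    print("tick", i)
    time.sleep(1)
' | cat
```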
you can try stdbuf
$ stdbuf --output=L ./a | tee output.txt
A big part of the man page:
-i, --input=MODE adjust standard input stream buffering
-o, --output=MODE adjust standard output stream buffering
-e, --error=MODE adjust standard error stream buffering
If MODE is 'L' the corresponding stream will be line buffered.
This option is invalid with standard input.
If MODE is '0' the corresponding stream will be unbuffered.
Otherwise MODE is a number which may be followed by one of the following:
KB 1000, K 1024, MB 1000*1000, M 1024*1024, and so on for G, T, P, E, Z, Y.
In this case the corresponding stream will be fully buffered with the buffer
size set to MODE bytes.
Keep this in mind, though:
NOTE: If COMMAND adjusts the buffering of its standard streams ('tee' does,
for example) then that will override corresponding settings changed by 'stdbuf'.
Also some filters (like 'dd' and 'cat' etc.) don't use streams for I/O,
and are thus unaffected by 'stdbuf' settings.
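For instance (sed and the `> ` prefix here are just an illustration of a stdio-based filter sitting in mid-pipeline):

```shell
# In the middle of a pipeline, sed's stdout is a pipe and therefore
# fully buffered; stdbuf -oL forces it back to line buffering, so tee
# receives every line as soon as sed emits it.
printf 'one\ntwo\n' | stdbuf -oL sed 's/^/> /' | tee output.txt
```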
You are not running stdbuf on tee, you're running it on a, so this shouldn't affect you, unless you set the buffering of a's streams in a's source.
Also, stdbuf is not POSIX, but part of GNU coreutils.
Try unbuffer, which is part of the expect package. You may already have it on your system.
In your case you would use it like this:
./a | unbuffer -p tee output.txt
(-p is for pipeline mode, where unbuffer reads from stdin and passes it to the command in the rest of the arguments.)
You may also try to execute your command in a pseudo-terminal using the script command (which should enforce line-buffered output to the pipe)!
script -q /dev/null ./a | tee output.txt # Mac OS X, FreeBSD
script -c "./a" /dev/null | tee output.txt # Linux
Be aware that the script command does not propagate back the exit status of the wrapped command.
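If you need that status, util-linux script on Linux has an -e (--return) flag that passes it through; whether your script has this flag is an assumption, so check script --version first:

```shell
# -e (--return) makes script exit with the wrapped command's status
# instead of its own (which is otherwise always 0).  util-linux only.
script -qec "exit 7" /dev/null > /dev/null
echo "$?"    # 7 with -e; would be 0 without it
```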
You can use setlinebuf from stdio.h:
setlinebuf(stdout);
This should change the buffering to "line buffered". If you need more flexibility you can use setvbuf, which is standard C (setlinebuf is a BSD/glibc extension); the equivalent call is setvbuf(stdout, NULL, _IOLBF, 0). Either must be called before the first operation on the stream.
The unbuffer command from the expect package in @Paused until further notice's answer did not work for me the way it was presented.
Instead of using:
./a | unbuffer -p tee output.txt
I had to use:
unbuffer -p ./a | tee output.txt
(-p is for pipeline mode, where unbuffer reads from stdin and passes it to the command in the rest of the arguments.)
The expect package can be installed with, for example:
pacman -S expect    # Arch Linux
brew install expect # macOS (Homebrew)
I recently had buffering problems with python inside a shell script (when trying to append a timestamp to its output). The fix was to pass the -u flag to python, i.e. invoke the script in run.sh with python -u script.py. (Setting the environment variable PYTHONUNBUFFERED=1 has the same effect.)
unbuffer -p /bin/bash run.sh 2>&1 | tee /dev/tty | ts '[%Y-%m-%d %H:%M:%S]' >> somefile.txt
The ts program (timestamp) can be installed with the moreutils package. Recently I also had problems with grep buffering its output; passing the --line-buffered argument to grep made it stop buffering.
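A sketch of that grep fix (the input lines and the pattern are made up for illustration):

```shell
# Mid-pipeline, grep's stdout is fully buffered and matches can sit
# unseen in the buffer; --line-buffered (GNU grep) flushes each
# matching line as soon as it is found.
printf 'keep 1\ndrop\nkeep 2\n' | grep --line-buffered keep | tee matches.txt
```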
If you use the C++ stream classes instead, every std::endl is an implicit flush. Using C-style printing, I think the method you suggested (fflush()) is the only way.