I'm trying to write a bash script that captures the output of a command run in the background. Unfortunately I can't get it to work: the variable I assign the output to is empty. If I replace the assignment with an echo command, everything works as expected though.
#!/bin/bash
function test {
    echo "$1"
}
echo $(test "echo") &
wait
a=$(test "assignment") &
wait
echo $a
echo done
This code produces the output:
echo
done
Changing the assignment to
a=`echo $(test "assignment") &`
works, but it seems like there should be a better way of doing this.
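A minimal sketch of what is going on (the function `slow` below is a made-up stand-in for the real command): `a=$(cmd) &` backgrounds the entire assignment, so the assignment happens inside a throwaway subshell and the parent shell's variable is never set. One simple workaround is to background only the command, writing to a temp file, and read the file after `wait`.

```shell
#!/bin/bash
# `a=$(cmd) &` runs the *assignment* in a background subshell;
# the parent's variable stays empty. `slow` is a hypothetical
# stand-in for the real command.
slow() { sleep 0.1; echo "result"; }

a=$(slow) &
wait
echo "a='$a'"        # a is still empty in the parent shell

# Workaround: background only the command, writing to a temp file,
# then read the file after `wait`.
tmp=$(mktemp)
slow > "$tmp" &
wait
b=$(cat "$tmp")
rm -f "$tmp"
echo "b='$b'"        # now b holds the command's output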
To just see the output of a command while it runs in the background, end it with '&' on the shell. In general, all of the job's output still goes to the terminal that invoked it; you would have to redirect stdout and stderr for that NOT to happen.
Running a shell command in the background using the ampersand (&): to run a command or a script in the background, terminate it with an ampersand (&), as shown above. NOTE: ending the command with an ampersand does not detach it from your terminal session.
bash [filename] runs the commands saved in a file. `$@` refers to all of a shell script's command-line arguments; `$1`, `$2`, etc. refer to the first command-line argument, the second command-line argument, and so on. Place variables in quotes if the values might have spaces in them.
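A tiny illustration of that last point (the helper function `count_args` is made up for the demo):

```shell
#!/bin/bash
# Demonstrates why "$@" should be quoted when arguments may
# contain spaces.
count_args() { echo "$#"; }

set -- "one arg" "two"
count_args $@      # unquoted: word splitting yields 3 arguments
count_args "$@"    # quoted: the original 2 arguments are preserved
```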
Bash has indeed a feature called Process Substitution to accomplish this.
$ echo <(yes)
/dev/fd/63
Here, the expression `<(yes)` is replaced with the pathname of a (pseudo-device) file that is connected to the standard output of an asynchronous `yes` job (which prints the string `y` in an endless loop).
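For context, a more everyday use of process substitution is comparing the output of two commands without intermediate temp files (the file paths below are purely illustrative):

```shell
#!/bin/bash
# Illustrative example: diff the sorted contents of two files
# using process substitution.
printf 'b\na\n'    > /tmp/left.txt
printf 'a\nb\nc\n' > /tmp/right.txt

diff <(sort /tmp/left.txt) <(sort /tmp/right.txt)
# diff reports that the right side has an extra line "c"
```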
Now let's try to read from it:
$ cat /dev/fd/63
cat: /dev/fd/63: No such file or directory
The problem here is that the `yes` process terminated in the meantime: it received a SIGPIPE because it had no readers on stdout.
The solution is the following construct
$ exec 3< <(yes) # Save stdout of the 'yes' job as (input) fd 3.
This opens the file as input fd 3 before the background job is started.
You can now read from the background job whenever you prefer. As a toy example:
$ for i in 1 2 3; do read <&3 line; echo "$line"; done
y
y
y
Note that this has slightly different semantics than having the background job write to a drive-backed file: the background job blocks when the pipe buffer is full (you empty the buffer by reading from the fd). By contrast, writing to a drive-backed file only blocks when the drive doesn't respond.
Process substitution is not a POSIX sh feature.
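On a plain POSIX sh you can get much the same effect with a named pipe. A rough, portable equivalent of the `exec 3< <(yes)` construct (the fifo path is arbitrary):

```shell
#!/bin/sh
# POSIX-portable sketch: a named pipe (FIFO) stands in for <(yes).
fifo=/tmp/yesfifo.$$     # arbitrary name for the demo
mkfifo "$fifo"
yes > "$fifo" &          # writer blocks until a reader opens the fifo
exec 3< "$fifo"          # open it for reading as fd 3
rm "$fifo"               # unlink; fd 3 keeps the pipe alive
for i in 1 2 3; do read line <&3; echo "$line"; done
exec 3<&-                # closing fd 3 gives the writer SIGPIPE on its next write
```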
Here's a quick hack to give an asynchronous job drive backing (almost) without assigning a filename to it:
$ yes > backingfile &  # Start job in background writing to a new file.
                       # Do also look at `mktemp(3)` and the `sh` option `set -o noclobber`.
$ exec 3< backingfile  # Open the file for reading in the current shell, as fd 3.
$ rm backingfile       # Remove the file. It will disappear from the filesystem, but there
                       # is still a reader and a writer attached to it which both can use it.
$ for i in 1 2 3; do read <&3 line; echo "$line"; done
y
y
y
Linux also recently added the O_TMPFILE open flag, which makes such hacks possible without the file ever being visible at all. I don't know whether bash already supports it.
UPDATE:
@rthur, if you want to capture the whole output from fd 3, then use
output=$(cat <&3)
But note that you can't capture binary data in general: It's only a defined operation if the output is text in the POSIX sense. The implementations I know simply filter out all NUL bytes. Furthermore POSIX specifies that all trailing newlines must be removed.
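The trailing-newline behavior is easy to check directly (a minimal demonstration):

```shell
#!/bin/bash
# Command substitution removes ALL trailing newlines, per POSIX.
out=$(printf 'data\n\n\n')
if [ "$out" = "data" ]; then
    echo "all trailing newlines were stripped"
fi
```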
(Please note also that capturing the output will result in OOM if the writer never stops (`yes` never stops). The same problem holds even for `read` if the line separator is never written.)