Commands like openssl have arguments like -out <file> for output files. I'd like to capture the content of these output files in shell variables for use in other commands, without creating temporary files. For example, to generate a self-signed certificate, one can use:
openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout KEYFILE -out CERTFILE 2>/dev/null
The closest I've got to capturing both output files is echoing them to stdout via process substitution, but this is not ideal because one would still have to parse the two files apart:
openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout >(key=$(cat -); echo $key) -out >(cert=$(cat -); echo $cert) 2>/dev/null
Is there a clean way to capture the content of output files in shell variables?
To store the output of a command in a variable, you can use the shell's command substitution feature in one of the forms below:

variable_name=$(command)
variable_name=$(command [option ...] arg1 arg2 ...)

or, with backquotes:

variable_name=`command`
variable_name=`command [option ...] arg1 arg2 ...`
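For instance (date is used here purely as a stand-in for whatever command's output you want to keep):

today=$(date +%F)
echo "Today is $today"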
$? is a built-in variable that holds the exit status of the last command executed, whether that was an external command, a function, or the script itself.
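For example (grep against /etc/passwd is just an arbitrary command used to produce an exit status):

grep -q root /etc/passwd
echo "grep exited with status $?"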
Most modern shells support /dev/stdout as a file name that redirects to stdout. This is good enough for single-file cases, but for two output files you need process substitution.
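For the single-file case, something along these lines should work (a sketch based on the question's command; the private key is simply discarded to /dev/null here):

cert=$(openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
         -keyout /dev/null -out /dev/stdout 2>/dev/null)

For both files at once, process substitution plus eval does the trick: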
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 -keyout >(echo "keyout='$( cat )'" ) -out >(echo out="'$( cat )'" ) )"
This uses process substitution to direct each "file" to a separate process that prints an assignment of the captured value to stdout. The whole thing is then passed to eval to perform the actual assignments.
Stderr is left untouched so that any error messages still show up; that's useful for logging when something goes wrong.
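If you want to see what eval actually receives, drop the outer eval "$( ... )" and just run the command; the generated assignments are printed to stdout (the contents shown below are illustrative):

openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
  -keyout >(echo "keyout='$( cat )'") -out >(echo "out='$( cat )'")
# prints, in no particular order, something like:
#   keyout='-----BEGIN PRIVATE KEY----- ...'
#   out='-----BEGIN CERTIFICATE----- ...'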
Edit: incorporating Charles Duffy's good paranoia:
flockf="$(mktemp -t tmp.lock.$$.XXXXXX )" || exit $?
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
  -keyout >( set -x; 99>"$flockf" && \
             flock -x "$flockf" printf "keyout=%q " "$( cat )"; ) \
  -out    >( set -x; 99>"$flockf" && \
             flock -x "$flockf" printf "out=%q " "$( cat )"; ) \
  )"
rm -f "$flockf"
An extension to Gilbert's answer providing additional paranoia:
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >(printf 'keyout=%q\n' "$(</dev/stdin)") \
-out >(printf 'out=%q\n' "$(</dev/stdin)") )"
(Note that this is not suitable if your data contains NULs, which bash cannot store in a native shell variable; in that case, you'll want to assign the contents to your variables in base64-encoded form.)
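One way to do that (a sketch, assuming GNU coreutils base64; key_b64 and cert_b64 are just illustrative variable names):

eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
  -keyout >(printf 'key_b64=%q\n' "$(base64)") \
  -out >(printf 'cert_b64=%q\n' "$(base64)") )"

# decode on demand, e.g. to feed the key to another command:
base64 -d <<<"$key_b64" | openssl rsa -noout -check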
Unlike echo "keyout='$(cat)'", printf 'keyout=%q\n' "$(cat)" ensures that even malicious contents cannot be evaluated by the shell as a command substitution, a redirection, or anything other than literal data.
To explain why this is necessary, let's take a simplified case:
write_to_two_files() { printf 'one\n' >"$1"; printf 'two\n' >"$2"; }
write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'")
...we get output akin to (but with no particular ordering):
two='two'
one='one'
...which, when eval'd, sets two variables:
$ eval "$(write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'"))"
$ declare -p one two
declare -- one="one"
declare -- two="two"
However, let's say that our program behaves a bit differently:
## Demonstrate why eval'ing content created by echoing data is dangerous
write_to_two_files() {
printf "'%s'\n" '$(touch /tmp/i-pwned-your-box)' >"$1"
echo "two" >"$2"
}
eval "$(write_to_two_files >(echo "one='$(cat)'") >(echo "two='$(cat)'"))"
ls -l /tmp/i-pwned-your-box
Instead of merely assigning the output to a variable, we evaluated it as code.
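By contrast, running the same malicious writer through the %q form (the same hypothetical write_to_two_files function as above) stores the payload as inert text:

rm -f /tmp/i-pwned-your-box   # clean up after the previous demonstration
eval "$(write_to_two_files >(printf 'one=%q\n' "$(cat)") >(printf 'two=%q\n' "$(cat)"))"
declare -p one two
## prints output akin to:
##   declare -- one="'\$(touch /tmp/i-pwned-your-box)'"
##   declare -- two="two"
ls -l /tmp/i-pwned-your-box   # the file was never created this time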
If you further want to ensure that the two print operations happen at different times (preventing their output from being intermingled), it's useful to add locking. This does involve a temporary file, but it does not write your keying material to disk (which is the most compelling reason to avoid temporary files in the first place):
lockfile=$(mktemp -t output.lck.XXXXXX)
eval "$( openssl req -new -newkey rsa:2048 -subj / -days 365 -nodes -x509 \
-keyout >(in=$(cat); exec 99>"$lockfile" && flock -x 99 && printf 'keyout=%q\n' "$in") \
-out >(in=$(cat); exec 99>"$lockfile" && flock -x 99 && printf 'out=%q\n' "$in") )"
Note that we're only locking around the write, not the read, so we can't deadlock (i.e. a situation where openssl can't finish writing to file A because it's blocked on a write to file B, which can never complete because the subshell on the read side of the file-A write holds the lock).