I have a while loop in Bash handled like this:
while IFS=$'\t' read -r -a line;
do
myprogram ${line[0]} ${line[1]} ${line[0]}_vs_${line[1]}.result;
done < fileinput
It reads from a file with this structure, for reference:
foo bar
baz foobar
and so on (tab-delimited).
I would like to parallelize this loop (since there are many entries and processing can be slow) using GNU parallel, but the examples are not clear on how I would assign each line's fields to the array, as I do here.
What would be a possible solution (alternatives to GNU parallel work as well)?
parallel isn't strictly necessary here; just start all the processes in the background, then wait for them to complete. The array is also unnecessary, as you can give read more than one variable to populate:
while IFS=$'\t' read -r f1 f2;
do
myprogram "$f1" "$f2" "${f1}_vs_${f2}.result" &
done < fileinput
wait
This does start a single job for every item in your list, whereas parallel can limit the number of jobs running at once. You can accomplish the same in bash, but it's tricky.
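For reference, here is a sketch of how GNU parallel itself could consume the tab-delimited file: the --colsep option splits each input line into columns available as {1}, {2}, and so on, and by default it runs roughly one job per CPU core. This is untested against your program, so treat it as a starting point:
# Each tab-separated column of a line becomes {1}, {2}, ...
parallel --colsep '\t' myprogram {1} {2} {1}_vs_{2}.result < fileinput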
I like @chepner's hack, and it seems not so tricky to achieve similar behaviour while limiting the number of parallel executions:
while IFS=$'\t' read -r f1 f2;
do
myprogram "$f1" "$f2" "${f1}_vs_${f2}.result" &
# Run at most as many jobs as there are CPU cores
[ $( jobs | wc -l ) -ge $( nproc ) ] && wait
done < fileinput
wait
This limits execution to at most the number of CPU cores present on the system. You can easily vary that by replacing $( nproc ) with the desired amount.
Keep in mind that this is not a fair distribution of work: it does not start a new job as soon as one finishes, but instead starts the maximum number of jobs and then waits for all of them to complete. So overall throughput may be slightly lower than with parallel, especially if the run time of your program varies over a wide range. If the time spent on each invocation is roughly the same, the total time should also be roughly equivalent.
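If you want throughput closer to what parallel gives, a minimal sketch, assuming bash 4.3 or newer (which adds wait -n to wait for any single background job), would be:
# Sketch assuming bash 4.3+ (wait -n returns when any one background job finishes)
while IFS=$'\t' read -r f1 f2; do
    # Once the pool is full, free a slot before starting the next job
    [ "$( jobs -rp | wc -l )" -ge "$( nproc )" ] && wait -n
    myprogram "$f1" "$f2" "${f1}_vs_${f2}.result" &
done < fileinput
wait
This way a new job starts as soon as any running one finishes, instead of waiting for the whole batch to drain.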