I have read similar questions about this topic, but none of them addresses the following problem.
I have a bash script that looks like this:
#!/bin/bash
for filename in /home/user/Desktop/emak/*.fa; do
    mkdir "${filename%.*}"
    cd "${filename%.*}"
    mkdir emak
    cd ..
done
This script basically creates, for each .fa file, a directory named after the file and an "emak" subdirectory inside it. The real task is much more computationally expensive than creating the "emak" directory...
I have thousands of files to iterate through. Since each iteration is independent of the previous one, I would like to split the work across different processors (I have 24 cores) so I can process multiple files at the same time.
I read some previous posts about running jobs in parallel (using GNU Parallel), but I do not see a clear way to apply it to this case.
Thanks.
No need for GNU Parallel; you can simply use:
N=10
for filename in /home/user/Desktop/emak/*.fa; do
    mkdir -p "${filename%.*}/emak" &    # run each job in the background
    (( ++count % N == 0 )) && wait      # every Nth job, wait for the whole batch
done
wait    # wait for any jobs left over from the last batch
The wait inside the loop pauses after every Nth job, allowing all of the previous jobs to complete before continuing; the final wait after the loop makes sure the script does not exit while jobs from the last batch are still running.
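One drawback of this batching approach is that the whole batch must finish before the next one starts, so a single slow job can leave the other 23 cores idle. If your bash is 4.3 or newer, wait -n returns as soon as any one background job finishes, which keeps all cores busy. Here is a minimal sketch, assuming bash >= 4.3 and using a hypothetical expensive_task command in place of your real per-file work:

#!/bin/bash
N=24
for filename in /home/user/Desktop/emak/*.fa; do
    expensive_task "${filename%.*}" &   # hypothetical stand-in for the real work
    # once N jobs are running, block until any one of them finishes
    while (( $(jobs -rp | wc -l) >= N )); do
        wait -n
    done
done
wait    # let the remaining jobs finish before the script exits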
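If you would rather use GNU Parallel after all, the same job can be expressed as a one-liner. This is a sketch assuming GNU Parallel is installed; {.} is Parallel's replacement string for the input with its extension stripped, and -j 24 caps the number of simultaneous jobs:

parallel -j 24 'mkdir -p {.}/emak' ::: /home/user/Desktop/emak/*.fa

For the real task, you would replace mkdir -p {.}/emak with your actual command.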