I have the following code that writes the md5sums to a logfile:
```python
for file in files_output:
    p = subprocess.Popen(['md5sum', file], stdout=logfile)
p.wait()
```
Will these be written in parallel? That is, if `md5sum` takes a long time for one of the files, will another one be started before the previous one completes?

If the answer to the above is yes, can I assume the order of the md5sums written to logfile may differ depending on how long `md5sum` takes for each file? (Some files can be huge, some small.)
We can use the subprocess module to create multiple child processes, and they run in parallel. First, we search the current directory and obtain a list of all the compressed files. Next, we build a list of argument lists, one element per file.
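A minimal sketch of that setup (the `*.gz` pattern and the variable names are assumptions for illustration):

```python
import glob
import subprocess

# Collect the compressed files in the current directory (pattern is an assumption).
files_output = glob.glob('*.gz')

# One argument list per file; each becomes its own child process.
commands = [['md5sum', f] for f in files_output]

# Popen returns immediately, so all the children run in parallel.
processes = [subprocess.Popen(cmd) for cmd in commands]
```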
The subprocess module defines one class, Popen, and a few wrapper functions that use that class. The Popen constructor takes arguments that set up the new process so the parent can communicate with it via pipes. It provides all of the functionality of the other modules and functions it replaces, and more.
`Popen` is non-blocking; `call` and `check_call` are blocking. You can make a `Popen` instance block by calling its `wait` or `communicate` method.
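A small sketch of the difference, assuming a Unix-like system where a `sleep` command is available:

```python
import subprocess

# Non-blocking: Popen returns as soon as the child has been spawned.
p = subprocess.Popen(['sleep', '5'])
print('Popen returned immediately; child still running:', p.poll() is None)

# Blocking: call does not return until its child exits.
subprocess.call(['sleep', '1'])
print('call returned only after its child exited')

# Make the Popen instance block until its child finishes.
p.wait()
print('exit code:', p.returncode)
```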
Also, the way you call `p.wait()` after the `for` loop waits only for the last of the `md5sum` processes to finish; the rest of them might still be running.
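A minimal fix is to keep every `Popen` object and wait on each one; note that the `md5sum` lines still land in logfile in completion order, since all the children share the same file descriptor:

```python
processes = []
for file in files_output:
    p = subprocess.Popen(['md5sum', file], stdout=logfile)
    processes.append(p)

# Wait for every child, not just the last one.
for p in processes:
    p.wait()
```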
You can go one step further and keep the benefits of parallel processing while making the output order predictable: collect each `md5sum`'s output into its own temporary file, then copy everything back into one file once all the processes are done.
```python
import subprocess
import tempfile  # os.tmpfile() was removed in Python 3; tempfile.TemporaryFile() replaces it

processes = []
for file in files_output:
    # Each child writes to its own temporary file, so outputs cannot interleave.
    f = tempfile.TemporaryFile()
    p = subprocess.Popen(['md5sum', file], stdout=f)
    processes.append((p, f))

for p, f in processes:
    p.wait()
    f.seek(0)
    # f.read() returns bytes, so logfile should be opened in binary mode.
    logfile.write(f.read())
    f.close()
```
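Because the second loop walks the processes in the order they were started, the digests end up in logfile in the same order as files_output, no matter which `md5sum` finishes first; the parallelism is preserved because every child was already launched before the first `wait()`.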