I have a program in Python that basically does the following:
for j in xrange(200):
    # 1) Compute a bunch of data
    # 2) Write data to disk
1) takes about 2-5 minutes
2) takes about 1 minute
Note that there is too much data to keep in memory.
Ideally what I would like to do is write the data to disk in a way that avoids idling the CPU. Is this possible in Python? Thanks!
The easiest way to run a Python script in the background is to use the cron job feature (on macOS and Linux). On Windows, you can use the Windows Task Scheduler. You then give it the path of your Python script file and specify when it should run.
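For example, a crontab entry along the following lines (the interpreter and script paths here are illustrative) would run a script every day at 2:00 AM:

0 2 * * * /usr/bin/python /path/to/script.py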
You could try using multiple processes like this:
import multiprocessing as mp

def compute(j):
    # Placeholder: compute one chunk of data for iteration j
    data = j
    return data

def write(data):
    # Placeholder: write one chunk of data to disk
    pass

if __name__ == '__main__':
    pool = mp.Pool()
    for j in xrange(200):
        pool.apply_async(compute, args=(j,), callback=write)
    pool.close()
    pool.join()
pool = mp.Pool() creates a pool of worker processes. By default, the number of workers equals the number of CPU cores your machine has.
Each pool.apply_async call queues a task to be run by a worker in the pool. When a worker is available, it runs compute(j). When the worker returns a value, a thread in the main process runs the callback function write(data), with data being the value returned by the worker.
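To see this division of labor concretely, here is a small self-contained sketch (my own, not part of the original answer; it assumes Python 2, as in the question) that prints where compute and the callback each run:

import multiprocessing as mp
import os
import threading

def compute(j):
    # Runs in a worker process; report that process's pid.
    return (j, os.getpid())

def report(result):
    # Runs in a helper thread of the main process, not in a worker.
    j, worker_pid = result
    print('chunk %d: computed in pid %d, callback in pid %d (%s)'
          % (j, worker_pid, os.getpid(), threading.current_thread().name))

if __name__ == '__main__':
    pool = mp.Pool()
    for j in xrange(4):
        pool.apply_async(compute, args=(j,), callback=report)
    pool.close()
    pool.join()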
Some caveats:

The data does not necessarily get written to disk in order of j ranging from 0 to 199. One way around this problem would be to write the data to a sqlite (or other kind of) database, with j as one of the fields of the data. Then, when you wish to read the data in order, you could SELECT * FROM table ORDER BY j.
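For illustration, here is a minimal sketch of that approach (the table and column names are mine, and it assumes compute returns a (j, data) tuple whose data part is a byte string; none of this is prescribed by the original answer):

import sqlite3

conn = sqlite3.connect('results.db')
conn.execute('CREATE TABLE IF NOT EXISTS results (j INTEGER PRIMARY KEY, data BLOB)')

def write(result):
    # The callback runs in the main process, so one connection there is enough.
    j, data = result
    conn.execute('INSERT INTO results (j, data) VALUES (?, ?)',
                 (j, sqlite3.Binary(data)))
    conn.commit()

# Later, to read the chunks back in order:
for j, data in conn.execute('SELECT j, data FROM results ORDER BY j'):
    pass  # process each chunk in order here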
Using multiple processes will increase the amount of memory required, since data generated by the worker processes accumulates in the Queue while it waits to be written to disk. You might be able to reduce the amount of memory required by using NumPy arrays. If that is not possible, then you might have to reduce the number of processes:
pool = mp.Pool(processes=1)
That will create one worker process (to run compute), leaving the main process to run write. Since compute takes longer than write, the Queue won't get backed up with more than one chunk of data waiting to be written to disk. However, you would still need enough memory to compute on one chunk of data while writing a different chunk to disk.
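If the callback-based version is hard to reason about, one way to make that two-chunk pipeline explicit (my own sketch, not from the original answer) is to drive the single worker with apply_async and get, so the main process writes chunk j-1 while the worker computes chunk j:

import multiprocessing as mp

def compute(j):
    # Placeholder: compute one chunk of data for iteration j
    return j

def write(data):
    # Placeholder: write one chunk of data to disk
    pass

if __name__ == '__main__':
    pool = mp.Pool(processes=1)
    pending = pool.apply_async(compute, args=(0,))
    for j in xrange(1, 200):
        result = pending.get()                          # wait for chunk j-1
        pending = pool.apply_async(compute, args=(j,))  # worker starts chunk j
        write(result)                                   # write j-1 while j computes
    write(pending.get())                                # write the final chunk
    pool.close()
    pool.join()

At any moment this holds at most two chunks in memory: the one being written and the one being computed.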
If you do not have enough memory to do both simultaneously, then you have no choice: your original code, which runs compute and write sequentially, is the only way.