
multiprocessing global variable memory copying

I am running a program which first loads 20 GB of data into memory. Then I do N (> 1000) independent tasks, each of which may read (read-only) part of the 20 GB data. I am now trying to run those tasks via multiprocessing. However, as this answer says, all global variables are copied for each process. In my case, I do not have enough memory to run more than 4 tasks, as my machine has only 96 GB. Is there any solution to this kind of problem, so that I can fully use all my cores without consuming too much memory?

asked Oct 24 '16 by Wang

People also ask

Is memory shared in multiprocessing?

shared_memory — Shared memory for direct access across processes. New in version 3.8. This module provides a class, SharedMemory , for the allocation and management of shared memory to be accessed by one or more processes on a multicore or symmetric multiprocessor (SMP) machine.
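A minimal sketch of that module; for brevity the second handle attaches in the same process, whereas a real worker would attach by the block's name:

```python
from multiprocessing import shared_memory

# Create a named shared block and write into it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# A second process would attach by shm.name; here we attach in-process.
other = shared_memory.SharedMemory(name=shm.name)
data = bytes(other.buf[:5])
print(data)  # b'hello'

other.close()
shm.close()
shm.unlink()  # free the block once every handle is closed
```
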

Can multiprocessing access global variables?

You can share a global variable with all child worker processes in a multiprocessing pool by defining it in the worker initialization function. In this tutorial you will discover how to share global variables with all workers in the Python process pool.

Do Python processes share memory?

Shared memory can be a very efficient way of handling data in a program that uses concurrency. Python's mmap uses shared memory to efficiently share large amounts of data between multiple Python processes, threads, and tasks that are happening concurrently.
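For example, an anonymous mmap created before a fork is visible to both processes; a Unix-only sketch:

```python
import mmap
import os

# Anonymous mappings are MAP_SHARED by default, so a forked child
# writes into the same physical pages the parent sees.
buf = mmap.mmap(-1, 16)
pid = os.fork()
if pid == 0:
    buf[:5] = b"hello"   # child writes into the shared mapping
    os._exit(0)
os.waitpid(pid, 0)
result = bytes(buf[:5])  # parent reads the child's write
print(result)  # b'hello'
buf.close()
```
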

What is multiprocess synchronization?

Synchronization between processes: multiprocessing is a package which supports spawning processes using an API similar to the threading module. This package is used for both local and remote concurrency. Using this module, the programmer can use multiple processors on a given machine. It runs on both Windows and UNIX.


1 Answer

On Linux, forked processes get a copy-on-write view of the parent's address space. Forking is lightweight, and the same program runs in both the parent and the child; the child simply takes a different execution path. As a small example:

import os

var = "unchanged"
pid = os.fork()  # the child gets a copy-on-write view of the parent's memory
if pid:
    # parent branch: pid is the child's process id
    print('parent:', os.getpid(), var)
    os.waitpid(pid, 0)  # wait for the child to finish
else:
    # child branch: pid is 0
    print('child:', os.getpid(), var)
    var = "changed"  # only the child's copy changes

# executed by both parent and child: show their separate views
print(os.getpid(), var)

Results in

parent: 22642 unchanged
child: 22643 unchanged
22643 changed
22642 unchanged

Applying this to multiprocessing: in this example I load data into a global variable before creating the pool. Since Python pickles the arguments sent to the process pool, I make sure the pickled payload is something small, like an index, and have the worker fetch the global data itself.

import multiprocessing as mp
import os

my_big_data = "well, bigger than this"  # loaded once, before the fork

def worker(index):
    """Get one char from the big data via the inherited global."""
    return my_big_data[index]

if __name__ == "__main__":
    # Only the index is pickled; each worker reads the global it inherited.
    with mp.Pool(os.cpu_count()) as pool:
        for c in pool.imap_unordered(worker, range(len(my_big_data)), chunksize=1):
            print(c)

Windows does not have fork. To run a worker it has to start a new instance of the Python interpreter and pickle all relevant data over to the child. This is a heavy lift!

answered Oct 10 '22 by tdelaney