 

Multiprocessing in Python with read-only shared memory?

I have a single-threaded Python program, and I'd like to modify it to make use of all 32 processors on the server it runs on. As I envision it, each worker process would receive its job from a queue and submit its output to a queue. To complete its work, however, each worker process would need read-only access to a complex in-memory data structure: many gigabytes of dicts and objects that link to each other. In Python, is there a simple way to share this data structure without making a copy of it for each worker process?

Thanks.

asked Oct 14 '13 by Jeff

People also ask

Does Python multiprocessing use shared memory?

torch.multiprocessing is a drop-in replacement for Python's multiprocessing module. It supports the exact same operations, but extends it so that all tensors sent through a multiprocessing.Queue have their data moved into shared memory, and only a handle is sent to the other process.
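As a minimal sketch of that behaviour, assuming PyTorch is installed (torch, mp.Queue and Tensor.is_shared are PyTorch names, not part of the standard library):

import torch
import torch.multiprocessing as mp

def reader(q):
    t = q.get()                       # receives a handle; the storage is shared
    print('shared in child:', t.is_shared())

if __name__ == '__main__':
    q = mp.Queue()
    t = torch.ones(3)
    q.put(t)                          # moves the tensor's data into shared memory
    p = mp.Process(target=reader, args=(q,))
    p.start()
    p.join()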

How data is shared in multiprocessing in Python?

The multiprocessing module implements two objects to share data between processes, Value and Array . We instantiate Value by declaring its typecode from the array module (“l” for signed long) and access its value using the value property.
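As a minimal sketch of those two objects (the typecodes, such as "l" for signed long, come from the array module):

from multiprocessing import Process, Value, Array

def work(counter, data):
    with counter.get_lock():          # Value carries a lock for safe updates
        counter.value += 1
    data[0] = 42                      # Array elements are shared in place

if __name__ == '__main__':
    counter = Value('l', 0)           # a shared signed long
    data = Array('l', [0, 0, 0])      # a shared fixed-size array of longs
    p = Process(target=work, args=(counter, data))
    p.start()
    p.join()
    print(counter.value, data[:])     # -> 1 [42, 0, 0]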

Does multiprocessing in Python use multiple cores?

Because of the GIL, a single Python process typically executes bytecode on only one core at a time, no matter how many threads it has. Despite the GIL, libraries that perform computationally heavy tasks, like numpy, scipy and pytorch, use C-based implementations under the hood that can release the GIL and make use of multiple cores.


2 Answers

If you are using the CPython (or PyPy) implementation of Python, then the global interpreter lock (GIL) will prevent more than one thread from operating on Python objects at a time.

So if you are using such an implementation, you'll need to use multiple processes instead of multiple threads to take advantage of your 32 processors.

You could use the standard library's multiprocessing or concurrent.futures modules to spawn the worker processes. There are also many third-party options. Doug Hellmann's tutorial is a great introduction to the multiprocessing module.
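As a minimal sketch of the worker-pool pattern the question describes (big_structure and handle_job are hypothetical stand-ins for the real data and work function):

import concurrent.futures

# Hypothetical stand-in for the many-gigabyte read-only structure; assign it
# at module level so forked workers inherit it (see the paragraphs below).
big_structure = {'answer': 42}

def handle_job(job):
    # Read-only lookups against the inherited global.
    return job, big_structure['answer']

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor(max_workers=32) as pool:
        for result in pool.map(handle_job, range(8)):
            print(result)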

Since you only need read-only access to the data structure, if you assign the complex data structure to a global variable before you spawn the processes, then all the processes will have access to this global variable.

When you fork a process (the default way multiprocessing starts children on Linux), the globals of the calling module are inherited by the child. Because fork uses copy-on-write, the very same memory pages back the data structure in every process, so no extra memory is required. Only when a process modifies the data is the affected page copied to a new location. (One caveat: in CPython, even reading an object updates its reference count, which dirties pages and can trigger some copying over time.)

On Windows there is no fork, so each spawned process starts a fresh Python interpreter and re-imports the calling module; each process therefore needs memory for its own separate copy of the huge data structure. There must be some other way to share data structures on Windows, but I'm unaware of the details. (Edit: POSH may be a solution to the shared-memory problem, but I haven't tried it myself.)
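One standard-library option that has since appeared is multiprocessing.shared_memory (Python 3.8+): it shares a raw block of bytes across processes, including on Windows, though it does not hold arbitrary dicts and objects directly. A minimal sketch:

from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b'hello'                # write raw bytes into the shared block

# A second process could attach to the same block by name:
# other = shared_memory.SharedMemory(name=shm.name)

shm.close()
shm.unlink()                          # release the block once every user is done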

answered Oct 05 '22 by unutbu

To demonstrate unutbu's answer above, here is code showing that this really is copy-on-write shared memory (CPython 3.6, macOS).

main_shared.py

import multiprocessing
from time import sleep


my_global = None


def test():
    global my_global
    read_only_secs = 3
    while read_only_secs > 0:
        sleep(1)
        # The forked child sees the object at the same address it had in the
        # parent: it is reading the parent's memory, not a copy.
        print(f'child proc global: {my_global} at {hex(id(my_global))}')
        read_only_secs -= 1
    print('child proc writing to copy-on-write...')
    # Rebinding the global creates a new object in the child only; the
    # parent's list is unaffected.
    my_global = 'something else'
    while True:
        sleep(1)
        print(f'child proc global: {my_global} at {hex(id(my_global))}')


def set_func():
    global my_global
    my_global = [{'hi': 1, 'bye': 'foo'}]


if __name__ == "__main__":
    print(f'main proc global: {my_global} at {hex(id(my_global))}')
    set_func()
    print(f'main proc global: {my_global} at {hex(id(my_global))}')
    p1 = multiprocessing.Process(target=test)
    p1.start()

    while True:
        sleep(1)
        print(f'main proc global: {my_global} at {hex(id(my_global))}')

Output

$ python main_shared.py 
main proc global: None at 0x101b509f8
main proc global: [{'hi': 1, 'bye': 'foo'}] at 0x102341708
child proc global: [{'hi': 1, 'bye': 'foo'}] at 0x102341708
main proc global: [{'hi': 1, 'bye': 'foo'}] at 0x102341708
child proc global: [{'hi': 1, 'bye': 'foo'}] at 0x102341708
main proc global: [{'hi': 1, 'bye': 'foo'}] at 0x102341708
child proc global: [{'hi': 1, 'bye': 'foo'}] at 0x102341708
child proc writing to copy-on-write...
main proc global: [{'hi': 1, 'bye': 'foo'}] at 0x102341708
child proc global: something else at 0x1022ea3b0
main proc global: [{'hi': 1, 'bye': 'foo'}] at 0x102341708
child proc global: something else at 0x1022ea3b0
main proc global: [{'hi': 1, 'bye': 'foo'}] at 0x102341708
child proc global: something else at 0x1022ea3b0
main proc global: [{'hi': 1, 'bye': 'foo'}] at 0x102341708
answered Oct 05 '22 by ryanwc