I've got a function that takes a node id of a graph as input, calculates something in the graph (without altering the graph object), and then saves the results to the filesystem. My code looks like this:
...
# graph file is being loaded
g = loadGraph(gfile='data/graph.txt')
# list of nodeids is being loaded
nodeids = loadSeeds(sfile='data/seeds.txt')
import multiprocessing as mp
# parallel part of the code
print ("entering the parallel part ..")
num_workers = mp.cpu_count() # 4 on my machine
p = mp.Pool(num_workers)
# _myParallelFunction(nodeid) {calculate something for nodeid in g and save it into a file}
p.map(_myParallelFunction, nodeids)
p.close()
...
The problem is that loading the graph into Python takes a lot of memory (about 2 GB; it's a big graph with thousands of nodes), and when execution reaches the parallel part (the p.map call) each process seems to get its own separate copy of g, so I simply run out of memory on my machine (it has 6 GB RAM and 3 GB swap). Is there a way to give every process access to the same copy of g, so that only enough memory for one copy is required? Any suggestions are appreciated, thanks in advance.
If dividing the graph into smaller parts does not work, you may be able to find a solution using the multiprocessing module's shared-state facilities (for example a multiprocessing.Manager) or multiprocessing.sharedctypes, depending on what kind of object your graph is.
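For illustration, here is a minimal sketch of the shared-state route using a multiprocessing.Manager. It assumes the graph can be reduced to a plain adjacency mapping; g.adjacency and computeFor below are hypothetical stand-ins for however your graph object and per-node calculation actually look, while loadGraph, loadSeeds and _myParallelFunction come from your snippet. Every worker then reads the single copy held by the manager process instead of getting its own:

import multiprocessing as mp

def _myParallelFunction(args):
    # each worker receives a proxy to the shared dict, not a private copy of the graph
    nodeid, shared_adj = args
    neighbours = shared_adj[nodeid]           # each lookup is served by the manager process
    result = computeFor(nodeid, neighbours)   # computeFor: hypothetical per-node calculation
    with open('data/result_%s.txt' % nodeid, 'w') as out:
        out.write(str(result))

if __name__ == '__main__':
    g = loadGraph(gfile='data/graph.txt')     # your existing loader
    nodeids = loadSeeds(sfile='data/seeds.txt')
    manager = mp.Manager()
    shared_adj = manager.dict(g.adjacency)    # g.adjacency: assumed {nodeid: neighbours} mapping
    del g                                     # drop the private 2 GB copy once the manager holds it
    p = mp.Pool(mp.cpu_count())
    p.map(_myParallelFunction, [(n, shared_adj) for n in nodeids])
    p.close()
    p.join()

Every lookup on shared_adj is a round trip to the manager process, so this trades per-access speed for holding only one copy of the graph in memory.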
Your comment indicates that you are processing a single node at a time:
# _myParallelFunction(nodeid) {calculate something for nodeid in g and save it into a file}
I would create a generator function that returns a single node from the graph file each time it's called, and pass that generator to the p.map() function instead of the entire list of nodeids.
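A rough sketch of that idea, reusing loadGraph and _myParallelFunction from your snippet; iterSeeds is a hypothetical helper that yields one node id per line of the seeds file instead of building the whole nodeids list up front:

import multiprocessing as mp

def iterSeeds(sfile):
    # hypothetical generator: yield node ids one at a time from the seeds file
    with open(sfile) as f:
        for line in f:
            nodeid = line.strip()
            if nodeid:
                yield nodeid

if __name__ == '__main__':
    g = loadGraph(gfile='data/graph.txt')  # your existing loader
    p = mp.Pool(mp.cpu_count())
    # hand the generator straight to map(); use imap() instead if you also
    # want the ids to be consumed lazily rather than collected into a list first
    p.map(_myParallelFunction, iterSeeds('data/seeds.txt'))
    p.close()
    p.join()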