
Very simple concurrent programming in Python

I have a simple Python script that uses two much more complicated Python scripts, and does something with the results.

I have two modules, Foo and Bar, and my code is like the following:

import Foo
import Bar

output = []

a = Foo.get_something()
b = Bar.get_something_else()

output.append(a)
output.append(b)

Both methods take a long time to run, and neither depends on the other, so the obvious solution is to run them in parallel. How can I achieve this, but make sure that the order is maintained: Whichever one finishes first must wait for the other one to finish before the script can continue.

Let me know if I haven't made myself clear enough, I've tried to make the example code as simple as possible.

Ivy asked May 07 '12

People also ask

How do you write a concurrent program in Python?

Many times, concurrent tasks need to access the same data at the same time. One solution, rather than using explicit locks, is to use a data structure that supports concurrent access. For example, the queue module provides thread-safe queues. We can also use multiprocessing.
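For example, here is a minimal sketch of two threads communicating through a thread-safe queue.Queue (the producer/consumer names and the None sentinel are just illustrative choices):

import queue
import threading

q = queue.Queue()  # thread-safe: put() and get() handle the locking internally

def producer():
    for i in range(5):
        q.put(i)       # safe to call from any thread
    q.put(None)        # sentinel to signal "no more items"

def consumer():
    while True:
        item = q.get()
        if item is None:   # sentinel received, stop consuming
            break
        print('got', item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()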

What is Concurrent Programming in Python?

Concurrency in programming means that multiple computations happen at the same time. For example, you may have multiple Python programs running on your computer. Or you may connect multiple computers via a network (e.g., Ethernet) that work together towards a common objective (e.g., distributed data analytics).

Can Python handle concurrent requests?

Both multithreading and multiprocessing allow Python code to run concurrently. Only multiprocessing will allow your code to be truly parallel. However, if your code is IO-heavy (like HTTP requests), then multithreading will still probably speed up your code.
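As a rough sketch of the I/O-bound case (the URLs below are placeholders), each thread spends most of its time blocked on the network, so a thread pool overlaps that waiting:

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

urls = ['https://example.com', 'https://example.org']  # placeholder URLs

def fetch(url):
    with urlopen(url) as resp:   # the thread blocks here, and the GIL is released
        return resp.read()

with ThreadPoolExecutor(max_workers=len(urls)) as pool:
    pages = list(pool.map(fetch, urls))  # results come back in input order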

Is Python threading parallel or concurrent?

In fact, a Python process cannot run threads in parallel, but it can run them concurrently through context switching during I/O-bound operations. This limitation is enforced by the GIL: the Python Global Interpreter Lock prevents threads within the same process from executing Python code at the same time.
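You can see this for yourself with a small timing sketch (numbers will vary by machine): a CPU-bound function run in two threads takes about as long as running it twice in a row, because the GIL lets only one thread execute Python bytecode at a time.

import threading
import time

def burn():
    total = 0
    for _ in range(10_000_000):  # pure CPU work, never waits on I/O
        total += 1

start = time.perf_counter()
burn(); burn()  # sequential
print('sequential:', time.perf_counter() - start)

start = time.perf_counter()
t1 = threading.Thread(target=burn)
t2 = threading.Thread(target=burn)
t1.start(); t2.start()
t1.join(); t2.join()
print('threaded:  ', time.perf_counter() - start)  # roughly the same, not half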


2 Answers

import multiprocessing

import Foo
import Bar

def get_a(results):
    results['a'] = Foo.get_something()

def get_b(results):
    results['b'] = Bar.get_something_else()

if __name__ == '__main__':
    # A plain dict is not shared between processes, so use a Manager dict
    # that both children can write to.
    manager = multiprocessing.Manager()
    results = manager.dict()

    process_a = multiprocessing.Process(target=get_a, args=(results,))
    process_b = multiprocessing.Process(target=get_b, args=(results,))

    process_a.start()
    process_b.start()

    # join() must be called; without the parentheses nothing waits.
    process_a.join()
    process_b.join()

Here is the process version of your program.

NOTE: with threading, data structures are shared between threads, so you have to worry about locking to avoid corrupting data. On top of that, as Amber mentioned above, threads are subject to the GIL (Global Interpreter Lock); since both of your tasks are CPU-intensive, the constant acquisition and release of the GIL means threading can actually take more time, not less. If your tasks were I/O-intensive, the GIL would not matter nearly as much.

With processes there are no shared data structures, so there are no locks to worry about, and each process has its own interpreter with its own GIL, so you actually enjoy the real power of multiple processors.

A simple rule of thumb: a process is like a thread minus the shared data structures (everything runs in isolation and communicates by message passing), as in the sketch below.
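If you want the children to report results by messaging rather than a shared Manager dict, a minimal sketch using multiprocessing.Queue (the worker helper is just an illustrative name; Foo and Bar are the modules from the question):

import multiprocessing

import Foo
import Bar

def worker(name, func, q):
    q.put((name, func()))  # each child sends its result back as a message

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p1 = multiprocessing.Process(target=worker, args=('a', Foo.get_something, q))
    p2 = multiprocessing.Process(target=worker, args=('b', Bar.get_something_else, q))
    p1.start(); p2.start()

    # Drain the queue before joining, then rebuild the order by key,
    # regardless of which process finished first.
    results = dict(q.get() for _ in range(2))
    p1.join(); p2.join()

    output = [results['a'], results['b']]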

Check out dabeaz.com; David Beazley has given good presentations on concurrent programming.

fazkan answered Sep 26 '22


In general, you'd use threading to do this.

First, create a thread for each thing you want to run in parallel:

import threading

import Foo
import Bar

# Threads share memory, so a plain dict is fine for collecting results.
results = {}

def get_a():
    results['a'] = Foo.get_something()

a_thread = threading.Thread(target=get_a)
a_thread.start()

def get_b():
    results['b'] = Bar.get_something_else()

b_thread = threading.Thread(target=get_b)
b_thread.start()

Then to require both of them to have finished, use .join() on both:

a_thread.join()
b_thread.join()

at which point your results will be in results['a'] and results['b'], so if you wanted an ordered list:

output = [results['a'], results['b']]

Note: if both tasks are inherently CPU-intensive, you might want to consider multiprocessing instead. Due to Python's GIL, the threads in a given Python process will only ever execute Python code on one core at a time, whereas multiprocessing can distribute the tasks across separate cores. However, it has slightly higher overhead than threading, so if the tasks are less CPU-intensive, it might not be as efficient.
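As a sketch of a more modern take on the same idea (assuming Foo.get_something and Bar.get_something_else take no arguments), concurrent.futures lets you swap ThreadPoolExecutor for ProcessPoolExecutor without changing the surrounding code:

# ProcessPoolExecutor works the same way, but needs an
# if __name__ == '__main__': guard around the pool usage.
from concurrent.futures import ThreadPoolExecutor

import Foo
import Bar

with ThreadPoolExecutor() as pool:
    future_a = pool.submit(Foo.get_something)
    future_b = pool.submit(Bar.get_something_else)
    # .result() blocks until that future is done, so order is preserved
    output = [future_a.result(), future_b.result()]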

Amber answered Sep 22 '22