Python multithreading and multiprocessing to speed up loops

I implemented a function with 4 nested for loops and it takes a long time to compute, so I'm trying to speed it up using multithreading.

My function looks like this:

def loops(start, end):  
    for h in range(start, end):
        for w in range(0, width):
            for h2 in range(h-radius, h+radius):
                for w2 in range(w-radius, w+radius):
                    compute_something()

With multithreading I tried this:

import threading

t1 = threading.Thread(target=loops, args=(0, 150))
t2 = threading.Thread(target=loops, args=(150, 300))
t1.start()
t2.start()
t1.join()
t2.join()

There is no change in computation time compared to just running 0-300 on the main thread.

I also tried joblib multiprocessing like this:

inputs = range(300)
Parallel(n_jobs=core_num)(delayed(loops)(i) for i in inputs)

In this case the computation time was even higher.

Am I doing something wrong, or is there a different way to speed up for loops with multithreading?
The range here is just an example; the loop sizes are usually 2000*1800*6*6 and it takes 5+ minutes to finish what I'm doing.

mereth asked Mar 12 '26 22:03

1 Answer

You won't get any speedup in Python from multithreading because of the GIL (Global Interpreter Lock), a mutex that protects the interpreter, so only one thread runs Python bytecode at a time. You need to use the multiprocessing package instead; it's included in the standard library.

from multiprocessing import Pool

pool = Pool()

Then just use map or starmap. You can find the docs here. But first consider whether you can vectorize your code using numpy; it would be faster that way.

Piotr Rarus answered Mar 15 '26 11:03


