 

Python: rewrite a looping numpy math function to run on GPU


Can someone help me rewrite this one function (the doTheMath function) to do the calculations on the GPU? I have spent a few good days trying to get my head around it, but to no result. I wonder if somebody can help me rewrite this function in whatever way you see fit, as long as it gives the same result at the end. I tried to use @jit from numba, but for some reason it is actually much slower than running the code as usual. With a huge sample size, the goal is to decrease the execution time considerably, so naturally I believe the GPU is the fastest way to do it.

I'll explain a little of what is actually happening. The real data, which looks almost identical to the sample data created in the code below, is divided into samples of approx. 5,000,000 rows each, or around 150 MB per file. In total there are around 600,000,000 rows, or 20 GB of data. I must loop through this data, sample by sample and then row by row in each sample, take the last 2000 (or another number of) rows as of each line, and run the doTheMath function, which returns a result. That result is then saved back to the hard drive, where I can do some other things with it in another program. As you can see below, I do not need the results of all the rows, only those bigger than a specific amount. If I run my function as it is right now in Python, I get about 62 seconds per 1,000,000 rows. This is a very long time considering all the data and how fast it should be done.

I must mention that I load the real data file by file into RAM with data = joblib.load(file), so loading the data is not the problem, as it takes only about 0.29 seconds per file. Once loaded, I run the entire code below. What takes the longest time is the doTheMath function. I am willing to give all 500 of the reputation points I have on Stack Overflow as a bounty to somebody willing to help me rewrite this simple code to run on the GPU. My interest is specifically in the GPU; I really want to see how it is done on this problem at hand.
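For context, the per-file loading loop looks roughly like this (just a sketch; the file names below are hypothetical and are not given in the question):

import joblib

# Hypothetical list of the ~120 sample files (~150 MB / ~5,000,000 rows each)
files = ["sample_001.pkl", "sample_002.pkl"]  # ...

for f in files:
    data1 = joblib.load(f)  # loading takes only about 0.29 s per file
    # ... then run the rolling-window loop shown below on data1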

EDIT/UPDATE 1: Here is a link to a small sample of the real data: data_csv.zip. It contains about 102,000 rows of real data1 and 2000 rows of real data2a and data2b. Use minimumLimit = 400 on the real sample data.

EDIT/UPDATE 2: For those following this post, here is a short summary of the answers below. So far there are four answers to the original question. The one offered by @Divakar consists of tweaks to the original code. Of the two tweaks, only the first one is actually applicable to this problem; the second one is a good tweak but does not apply here. Of the other three answers, two are CPU-based solutions and one is a TensorFlow GPU attempt. The TensorFlow GPU answer by Paul Panzer seems promising, but when I actually run it on the GPU it is slower than the original, so the code still needs improvement.

The two CPU-based solutions are submitted by @PaulPanzer (a pure numpy solution) and @MSeifert (a numba solution). Both give very good results and process data extremely fast compared to the original code. Of the two, the one submitted by Paul Panzer is faster: it processes about 1,000,000 rows in about 3 seconds. The only problem is with smaller batchSizes; this can be overcome either by switching to the numba solution offered by MSeifert, or even to the original code with all the tweaks discussed below.
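For reference, a rough sketch of what a numba-based rewrite of doTheMath could look like is shown below. This is only an illustration of the approach, not MSeifert's actual answer, and it assumes plain NumPy arrays (not np.matrix) with data2a and data2b passed as flat arrays:

import numba as nb

@nb.njit
def do_the_math_numba(tmpData1, data2a, data2b):
    # tmpData1: (batchSize, 4) array; data2a, data2b: (batchSize,) arrays
    Bmax = tmpData1[:, 1].max()
    Cmin = tmpData1[:, 2].min()
    dif = Bmax - Cmin
    count = 0
    for i in range(tmpData1.shape[0]):
        # same formula as the original doTheMath, factored into a single division
        abcd = ((tmpData1[i, 0] - Cmin) + (tmpData1[i, 1] - Cmin) +
                (tmpData1[i, 2] - Cmin) + (tmpData1[i, 3] - Cmin)) / (4.0 * dif)
        if abcd >= data2b[i] and abcd <= data2a[i]:
            count += 1
    return count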

I am very happy with and thankful to @PaulPanzer and @MSeifert for the work they did on their answers. Still, since this is a question about a GPU-based solution, I am waiting to see if anybody is willing to try a GPU version and see how much faster the data can be processed on the GPU compared to the current CPU solutions. If no other answer outperforms @PaulPanzer's pure numpy solution, then I'll accept his answer as the right one and he gets the bounty :)

EDIT/UPDATE 3: @Divakar has posted a new answer with a solution for the GPU. After my tests on real data, its speed is not even comparable to the CPU counterpart solutions: the GPU processes about 5,000,000 rows in about 1.5 seconds. This is incredible :) I am very excited about the GPU solution and I thank @Divakar for posting it. I also thank @PaulPanzer and @MSeifert for their CPU solutions :) Now my research continues at an incredible speed thanks to the GPU :)

import pandas as pd
import numpy as np
import time

def doTheMath(tmpData1, data2a, data2b):
    A = tmpData1[:, 0]
    B = tmpData1[:, 1]
    C = tmpData1[:, 2]
    D = tmpData1[:, 3]
    Bmax = B.max()
    Cmin = C.min()
    dif = (Bmax - Cmin)
    abcd = ((((A - Cmin) / dif) + ((B - Cmin) / dif) + ((C - Cmin) / dif) + ((D - Cmin) / dif)) / 4)
    return np.where(((abcd <= data2a) & (abcd >= data2b)), 1, 0).sum()

# Declare variables
batchSize = 2000
sampleSize = 5000000
resultArray = []
minimumLimit = 490  # use 400 on the real sample data

# Create random sample data
data1 = np.matrix(np.random.uniform(1, 100, (sampleSize + batchSize, 4)))
data2a = np.matrix(np.random.uniform(0, 1, (batchSize, 1)))  # upper limit
data2b = np.matrix(np.random.uniform(0, 1, (batchSize, 1)))  # lower limit
# Approx. half of data2a will be smaller than data2b, but only because this sample
# data is randomly generated; in the real data, data2a is always higher than data2b.

# Loop through the data
t0 = time.time()
for rowNr in range(data1.shape[0]):
    tmp_df = data1[rowNr:rowNr + batchSize]  # rolling window
    if tmp_df.shape[0] == batchSize:
        result = doTheMath(tmp_df, data2a, data2b)
        if result >= minimumLimit:
            resultArray.append([rowNr, result])
print('Runtime:', time.time() - t0)

# Save the results
resultArray = np.array(resultArray)
print(resultArray[:, 1].sum())
resultArray = pd.DataFrame({'index': resultArray[:, 0], 'result': resultArray[:, 1]})
resultArray.to_csv("Result Array.csv", sep=';')

The PC specs I am working on:

GTX 970 (4 GB) video card; i7-4790K CPU @ 4.00 GHz; 16 GB RAM; an SSD drive; running Windows 7

As a side question, would a second video card in SLI help on this problem?

asked Jan 31 '17 by RaduS

People also ask

Can NumPy run on GPU?

NumPy doesn't natively support GPUs. However, there are tools and libraries to run NumPy code on GPUs. Numba is a Python compiler that can compile Python code to run on multicore CPUs and CUDA-enabled GPUs. Numba also understands NumPy and generates optimized compiled code.

How do I run a Python program on a GPU?

Installation: First, make sure that the Nvidia drivers are up to date; you can also install cudatoolkit explicitly. Then install Anaconda and add it to the environment while installing. After completing all the installations, run the relevant commands in the command prompt.

Can NumPy run on CUDA?

What is CuPy? CuPy is a library that implements Numpy arrays on Nvidia GPUs by leveraging the CUDA GPU library. With that implementation, superior parallel speedup can be achieved due to the many CUDA cores GPUs have. CuPy's interface is a mirror of Numpy and in most cases, it can be used as a direct replacement.
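For illustration, here is a minimal CuPy sketch (it assumes CuPy is installed and a CUDA-capable GPU is available; the array contents are made up):

import numpy as np
import cupy as cp

x_cpu = np.random.uniform(1, 100, (1000, 4))  # host (NumPy) array
x_gpu = cp.asarray(x_cpu)                     # copy it to GPU memory
col_max = x_gpu[:, 1].max()                   # NumPy-like call, executed on the GPU
print(cp.asnumpy(col_max))                    # copy the result back to the host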


1 Answer

Tweak #1

It's usually advised to vectorize things when working with NumPy arrays. But with very large arrays, I think you are out of options there. So, to boost performance, a minor tweak is possible to optimize the last step of summing.

We could replace the step that makes an array of 1s and 0s and sums them:

np.where(((abcd <= data2a) & (abcd >= data2b)), 1, 0).sum() 

with np.count_nonzero, which efficiently counts True values in a boolean array instead of converting to 1s and 0s -

np.count_nonzero((abcd <= data2a) & (abcd >= data2b)) 

Runtime test -

In [45]: abcd = np.random.randint(11,99,(10000))

In [46]: data2a = np.random.randint(11,99,(10000))

In [47]: data2b = np.random.randint(11,99,(10000))

In [48]: %timeit np.where(((abcd <= data2a) & (abcd >= data2b)), 1, 0).sum()
10000 loops, best of 3: 81.8 µs per loop

In [49]: %timeit np.count_nonzero((abcd <= data2a) & (abcd >= data2b))
10000 loops, best of 3: 28.8 µs per loop

Tweak #2

Use a pre-computed reciprocal when dealing with cases that undergo implicit broadcasting. Some more info here. Thus, store the reciprocal of dif and use it instead at this step:

((((A  - Cmin) / dif) + ((B  - Cmin) / dif) + ... 

Sample test -

In [52]: A = np.random.rand(10000)

In [53]: dif = 0.5

In [54]: %timeit A/dif
10000 loops, best of 3: 25.8 µs per loop

In [55]: %timeit A*(1.0/dif)
100000 loops, best of 3: 7.94 µs per loop

You have four places using division by dif. So, hopefully this would bring a noticeable boost there too!
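Applied to the question's doTheMath, the two tweaks combined might look like the following sketch (untested against the real data, and not code posted verbatim in this answer):

import numpy as np

def doTheMath(tmpData1, data2a, data2b):
    A = tmpData1[:, 0]
    B = tmpData1[:, 1]
    C = tmpData1[:, 2]
    D = tmpData1[:, 3]
    Bmax = B.max()
    Cmin = C.min()
    rec = 1.0 / (Bmax - Cmin)  # Tweak #2: pre-computed reciprocal replaces division by dif
    abcd = (((A - Cmin) * rec) + ((B - Cmin) * rec) +
            ((C - Cmin) * rec) + ((D - Cmin) * rec)) / 4
    return np.count_nonzero((abcd <= data2a) & (abcd >= data2b))  # Tweak #1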

answered Sep 29 '22 by Divakar