Cython Numpy code not faster than pure python

First, I know that there are many similarly themed questions on SO, but I can't find a solution after a day of searching, reading, and testing.

I have a python function which calculates the pairwise correlations of a numpy ndarray (m x n). I was originally doing this purely in numpy, but the function also computed the reciprocal pairs (i.e. as well as calculating the correlation between rows A and B of the matrix, it calculated the correlation between rows B and A too). So I took a slightly different approach that is about twice as fast for matrices of large m (realistic sizes for my problem are m ~ 8000).

This was great but still a tad slow, as there will be many such matrices, and to do them all will take a long time. So I started investigating cython as a way to speed things up. I understand from what I've read that cython won't really speed up numpy all that much. Is this true, or is there something I am missing?

I think the bottlenecks below are np.sqrt, np.dot, the call to the ndarray's .T method, and np.absolute. I've seen people use sqrt from libc.math to replace np.sqrt, so I suppose my first question is: are there similar functions for the other methods in libc.math that I can use? I am afraid that I am completely and utterly unfamiliar with C/C++/C# or any of the C family of languages, so this typing and cython business is very new territory to me; apologies if the reason/solution is obvious.
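
For example, I gather the cimport would look something like the sketch below (just to show what I mean; I haven't got this working myself):

# Sketch only (Cython .pyx): scalar C math functions from libc.math
from libc.math cimport sqrt, fabs

def demo(double v):
    cdef double a = fabs(v)    # C fabs, the scalar counterpart of np.absolute
    return sqrt(a)             # C sqrt works on one double at a time, not on arrays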

Failing that, any ideas about what I could do to get some performance gains?

Below are my pyx code, the setup code, and the call to the pyx function. I don't know if it's important, but when I call python setup.py build_ext --inplace it works, but there are a lot of warnings which I don't really understand. Could these also be a reason why I am not seeing a speed improvement?

Any help is very much appreciated, and sorry for the super long post.

setup.py

from distutils.core import setup
from distutils.extension import Extension
import numpy
from Cython.Distutils import build_ext


setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = [Extension("calcBrownCombinedP", 
                            ["calcBrownCombinedP.pyx"], 
                            include_dirs=[numpy.get_include()])]
)

and the output of setup:

>python setup.py build_ext --inplace

running build_ext
cythoning calcBrownCombinedP.pyx to calcBrownCombinedP.c
building 'calcBrownCombinedP' extension
C:\Anaconda\Scripts\gcc.bat -DMS_WIN64 -mdll -O -Wall -IC:\Anaconda\lib\site-packages\numpy\core\include -IC:\Anaconda\include -IC:\Anaconda\PC -c calcBrownCombinedP.c -o build\temp.win-amd64-2.7\Release\calcbrowncombinedp.o
In file included from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/ndarraytypes.h:1728:0,
                 from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/ndarrayobject.h:17,
                 from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/arrayobject.h:15,
                 from calcBrownCombinedP.c:340:
C:\Anaconda\lib\site-packages\numpy\core\include/numpy/npy_deprecated_api.h:8:9: note: #pragma message: C:\Anaconda\lib\site-packages\numpy\core\include/numpy/npy_deprecated_api.h(8) : Warning Msg: Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
calcBrownCombinedP.c: In function '__Pyx_RaiseTooManyValuesError':
calcBrownCombinedP.c:4473:18: warning: unknown conversion type character 'z' in format [-Wformat]
calcBrownCombinedP.c:4473:18: warning: too many arguments for format [-Wformat-extra-args]
calcBrownCombinedP.c: In function '__Pyx_RaiseNeedMoreValuesError':
calcBrownCombinedP.c:4479:18: warning: unknown conversion type character 'z' in format [-Wformat]
calcBrownCombinedP.c:4479:18: warning: format '%s' expects argument of type 'char *', but argument 3 has type 'Py_ssize_t' [-Wformat]
calcBrownCombinedP.c:4479:18: warning: too many arguments for format [-Wformat-extra-args]
In file included from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/ndarrayobject.h:26:0,
                 from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/arrayobject.h:15,
                 from calcBrownCombinedP.c:340:
calcBrownCombinedP.c: At top level:
C:\Anaconda\lib\site-packages\numpy\core\include/numpy/__multiarray_api.h:1594:1: warning: '_import_array' defined but not used [-Wunused-function]
In file included from C:\Anaconda\lib\site-packages\numpy\core\include/numpy/ufuncobject.h:311:0,
                 from calcBrownCombinedP.c:341:
C:\Anaconda\lib\site-packages\numpy\core\include/numpy/__ufunc_api.h:236:1: warning: '_import_umath' defined but not used [-Wunused-function]
writing build\temp.win-amd64-2.7\Release\calcBrownCombinedP.def
C:\Anaconda\Scripts\gcc.bat -DMS_WIN64 -shared -s build\temp.win-amd64-2.7\Release\calcbrowncombinedp.o build\temp.win-amd64-2.7\Release\calcBrownCombinedP.def -LC:\Anaconda\libs -LC:\Anaconda\PCbuild\amd64 -lpython27 -lmsvcr90 -o C:\cygwin64\home\Davy\SNPsets\src\calcBrownCombinedP.pyd
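
(The deprecated-API note near the top of this output is informational; as the message itself says, it can be silenced by defining NPY_NO_DEPRECATED_API, which in setup.py would look roughly like this:)

Extension("calcBrownCombinedP",
          ["calcBrownCombinedP.pyx"],
          include_dirs=[numpy.get_include()],
          # optional: silence the deprecated NumPy API warning shown above
          define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")])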

the pyx code - 'calcBrownCombinedP.pyx'

import numpy as np
cimport numpy as np
from scipy import stats
DTYPE = np.int
ctypedef np.int_t DTYPE_t

def calcBrownCombinedP(np.ndarray genotypeArray):
    cdef int nSNPs, i
    cdef np.ndarray ms, datam, datass, d, rs, temp
    cdef float runningSum, sigmaSq, E, df 
    nSNPs = genotypeArray.shape[0]
    ms = genotypeArray.mean(axis=1)[(slice(None,None,None),None)]
    datam = genotypeArray - ms
    datass = np.sqrt(stats.ss(datam,axis=1)) 
    runningSum = 0
    for i in xrange(nSNPs):
        temp = np.dot(datam[i:],datam[i].T)
        d = (datass[i:]*datass[i])
        rs = temp / d
        rs = np.absolute(rs)[1:]
        runningSum += sum(rs*(3.25+(0.75*rs)))

    sigmaSq = 4*nSNPs+2*runningSum

    E = 2*nSNPs

    df = (2*(E*E))/sigmaSq

    runningSum = sigmaSq/(2*E)
    return runningSum

The code that tests the above against some pure python - 'test.py'

import numpy as np
from scipy import stats
import random
import time
from calcBrownCombinedP import calcBrownCombinedP
from PycalcBrownCombinedP import PycalcBrownCombinedP

ms = [10,50,100,500,1000,5000]

for m in ms:
    print '---testing implementation with m = {0}---'.format(m)
    genotypeArray = np.empty((m,20),dtype=int)

    for i in xrange(m):
        genotypeArray[i] = [random.randint(0,2) for j in xrange(20)] 

    print genotypeArray.shape 


    start = time.time()
    print calcBrownCombinedP(genotypeArray)
    print 'cython implementation took {0}'.format(time.time() - start)

    start = time.time()
    print PycalcBrownCombinedP(genotypeArray)
    print 'python implementation took {0}'.format(time.time() - start)

and the output of that code is:

---testing implementation with m = 10---
(10L, 20L)
2.13660168648
cython implementation took 0.000999927520752
2.13660167749
python implementation took 0.000999927520752
---testing implementation with m = 50---
(50L, 20L)
8.82721138
cython implementation took 0.00399994850159
8.82721130234
python implementation took 0.00500011444092
---testing implementation with m = 100---
(100L, 20L)
16.7438983917
cython implementation took 0.0139999389648
16.7438965333
python implementation took 0.0120000839233
---testing implementation with m = 500---
(500L, 20L)
80.5343856812
cython implementation took 0.183000087738
80.5343694046
python implementation took 0.161000013351
---testing implementation with m = 1000---
(1000L, 20L)
160.122573853
cython implementation took 0.615000009537
160.122491308
python implementation took 0.598000049591
---testing implementation with m = 5000---
(5000L, 20L)
799.813842773
cython implementation took 10.7159998417
799.813880445
python implementation took 11.2510001659

Lastly, the pure python implementation 'PycalcBrownCombinedP.py'

import numpy as np
from scipy import stats
def PycalcBrownCombinedP(genotypeArray):
    nSNPs = genotypeArray.shape[0]
    ms = genotypeArray.mean(axis=1)[(slice(None,None,None),None)]
    datam = genotypeArray - ms
    datass = np.sqrt(stats.ss(datam,axis=1)) 
    runningSum = 0
    for i in xrange(nSNPs):
        temp = np.dot(datam[i:],datam[i].T)
        d = (datass[i:]*datass[i])
        rs = temp / d
        rs = np.absolute(rs)[1:]
        runningSum += sum(rs*(3.25+(0.75*rs)))

    sigmaSq = 4*nSNPs+2*runningSum

    E = 2*nSNPs

    df = (2*(E*E))/sigmaSq

    runningSum = sigmaSq/(2*E)
    return runningSum

asked Feb 09 '14 by Davy Kavanagh

1 Answer

Profiling with kernprof shows the bottleneck is the last line of the loop:

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
<snip>
    16      5000      6145280   1229.1     86.6          runningSum += sum(rs*(3.25+(0.75*rs)))
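
(For reference, per-line timings like these come from the line_profiler package. A minimal sketch of how to reproduce them, assuming line_profiler is installed:)

# In PycalcBrownCombinedP.py, mark the function for line-by-line profiling.
# The @profile decorator is injected by kernprof at runtime, so no import is needed.
@profile
def PycalcBrownCombinedP(genotypeArray):
    # ... body exactly as listed in the question ...
    pass

# Then run from the shell; -l profiles line-by-line, -v prints the table:
#   kernprof -l -v test.py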

This is no surprise as you're using the Python built-in function sum in both the Python and Cython versions. Switching to np.sum speeds the code up by a factor of 4.5 when the input array has shape (5000, 20).
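
Concretely, that is a one-line change inside the loop in both versions:

        runningSum += np.sum(rs*(3.25+(0.75*rs)))   # np.sum instead of the built-in sum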

If a small loss in accuracy is alright, then you can leverage linear algebra to speed up the final line further:

np.sum(rs * (3.25 + 0.75 * rs))

is really a vector dot product, i.e.

np.dot(rs, 3.25 + 0.75 * rs)

This is still suboptimal as it loops over rs three times and constructs two rs-sized temporary arrays. Using elementary algebra, this expression can be rewritten as

3.25 * np.sum(rs) +  .75 * np.dot(rs, rs)

which not only gives the original result without the round-off error of the previous version, but also loops over rs only twice and uses constant memory.(*)
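
A quick check on random data (a sketch, not part of the original code) shows that the three forms agree to within floating-point tolerance:

import numpy as np

rs = np.random.rand(1000)                      # stand-in for the absolute correlations in the loop
a = np.sum(rs * (3.25 + 0.75 * rs))            # np.sum version
b = np.dot(rs, 3.25 + 0.75 * rs)               # dot-product form
c = 3.25 * np.sum(rs) + 0.75 * np.dot(rs, rs)  # algebraically expanded form
print np.allclose(a, b), np.allclose(a, c)     # True True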

The bottleneck is now np.dot, so installing a better BLAS library is going to buy you more than rewriting the whole thing in Cython.
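
To see which BLAS your NumPy build is actually linked against, np.show_config() prints the build information:

import numpy as np
np.show_config()   # lists the BLAS/LAPACK libraries NumPy was compiled against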

(*) Or logarithmic memory in the very latest NumPy, which has a recursive reimplementation of np.sum that is faster than the old iterative one.

answered by Fred Foo