
numpy performance differences between Linux and Windows

I am trying to run sklearn.decomposition.TruncatedSVD() on 2 different computers and understand the performance differences.

computer 1 (Windows 7, physical computer)

OS Name Microsoft Windows 7 Professional
System Type x64-based PC
Processor   Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3401 Mhz, 4 Core(s), 8 Logical Processor(s)
Installed Physical Memory (RAM)   8.00 GB
Total Physical Memory   7.89 GB

computer 2 (Debian, on amazon cloud)

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8

width: 64 bits
capabilities: ldt16 vsyscall32
*-core
   description: Motherboard
   physical id: 0
*-memory
   description: System memory
   physical id: 0
   size: 29GiB
*-cpu
   product: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
   vendor: Intel Corp.
   physical id: 1
   bus info: cpu@0
   width: 64 bits

computer 3 (Windows 2008R2, on amazon cloud)

OS Name Microsoft Windows Server 2008 R2 Datacenter
Version 6.1.7601 Service Pack 1 Build 7601
System Type x64-based PC
Processor   Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz, 2500 Mhz, 
4 Core(s), 8 Logical Processor(s)
Installed Physical Memory (RAM) 30.0 GB

All machines are running Python 3.2 and identical sklearn, numpy, and scipy versions.

I ran cProfile as follows:

import cProfile
from sklearn.decomposition import TruncatedSVD

print(vectors.shape)
>>> (7500, 2042)

_decomp = TruncatedSVD(n_components=680, random_state=1)
global _o
_o = _decomp
cProfile.runctx('_o.fit_transform(vectors)', globals(), locals(), sort=1)

computer 1 output

>>>    833 function calls in 1.710 seconds
Ordered by: internal time

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    1    0.767    0.767    0.782    0.782 decomp_svd.py:15(svd)
    1    0.249    0.249    0.249    0.249 {method 'enable' of '_lsprof.Profiler' objects}
    1    0.183    0.183    0.183    0.183 {method 'normal' of 'mtrand.RandomState' objects}
    6    0.174    0.029    0.174    0.029 {built-in method csr_matvecs}
    6    0.123    0.021    0.123    0.021 {built-in method csc_matvecs}
    2    0.110    0.055    0.110    0.055 decomp_qr.py:14(safecall)
    1    0.035    0.035    0.035    0.035 {built-in method dot}
    1    0.020    0.020    0.589    0.589 extmath.py:185(randomized_range_finder)
    2    0.018    0.009    0.019    0.010 function_base.py:532(asarray_chkfinite)
   24    0.014    0.001    0.014    0.001 {method 'ravel' of 'numpy.ndarray' objects}
    1    0.007    0.007    0.009    0.009 twodim_base.py:427(triu)
    1    0.004    0.004    1.710    1.710 extmath.py:232(randomized_svd)

computer 2 output

>>>    858 function calls in 40.145 seconds
Ordered by: internal time
ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    2   32.116   16.058   32.116   16.058 {built-in method dot}
    1    6.148    6.148    6.156    6.156 decomp_svd.py:15(svd)
    2    0.561    0.281    0.561    0.281 decomp_qr.py:14(safecall)
    6    0.561    0.093    0.561    0.093 {built-in method csr_matvecs}
    1    0.337    0.337    0.337    0.337 {method 'normal' of 'mtrand.RandomState' objects}
    6    0.202    0.034    0.202    0.034 {built-in method csc_matvecs}
    1    0.052    0.052    1.633    1.633 extmath.py:183(randomized_range_finder)
    1    0.045    0.045    0.054    0.054 _methods.py:73(_var)
    1    0.023    0.023    0.023    0.023 {method 'argmax' of 'numpy.ndarray' objects}
    1    0.023    0.023    0.046    0.046 extmath.py:531(svd_flip)
    1    0.016    0.016   40.145   40.145 <string>:1(<module>)
   24    0.011    0.000    0.011    0.000 {method 'ravel' of 'numpy.ndarray' objects}
    6    0.009    0.002    0.009    0.002 {method 'reduce' of 'numpy.ufunc' objects}
    2    0.008    0.004    0.009    0.004 function_base.py:532(asarray_chkfinite)

computer 3 output

>>>         858 function calls in 2.223 seconds
Ordered by: internal time
ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    1    0.956    0.956    0.972    0.972 decomp_svd.py:15(svd)
    2    0.306    0.153    0.306    0.153 {built-in method dot}
    1    0.274    0.274    0.274    0.274 {method 'normal' of 'mtrand.RandomState' objects}
    6    0.205    0.034    0.205    0.034 {built-in method csr_matvecs}
    6    0.151    0.025    0.151    0.025 {built-in method csc_matvecs}
    2    0.133    0.067    0.133    0.067 decomp_qr.py:14(safecall)
    1    0.032    0.032    0.043    0.043 _methods.py:73(_var)
    1    0.030    0.030    0.030    0.030 {method 'argmax' of 'numpy.ndarray' objects}
   24    0.026    0.001    0.026    0.001 {method 'ravel' of 'numpy.ndarray' objects}
    2    0.019    0.010    0.020    0.010 function_base.py:532(asarray_chkfinite)
    1    0.019    0.019    0.773    0.773 extmath.py:183(randomized_range_finder)
    1    0.019    0.019    0.049    0.049 extmath.py:531(svd_flip)

Notice the {built-in method dot} difference from 0.035s/call to 16.058s/call, 450 times slower!!

------+---------+---------+---------+---------+---------------------------------------
ncalls| tottime | percall | cumtime | percall | filename:lineno(function)  HARDWARE
------+---------+---------+---------+---------+---------------------------------------
1     |  0.035  |  0.035  |  0.035  |  0.035  | {built-in method dot}      Computer 1
2     | 32.116  | 16.058  | 32.116  | 16.058  | {built-in method dot}      Computer 2
2     |  0.306  |  0.153  |  0.306  |  0.153  | {built-in method dot}      Computer 3

I understand that there should be performance differences, but should they be that high?

Is there a way I can further debug this performance issue?

EDIT

I tested a new machine, computer 3, whose hardware is similar to computer 2's but with a different OS.

The result is 0.153s/call for the {built-in method dot}, still about 100 times faster than Linux!

EDIT 2

computer 1 numpy config

>>> np.__config__.show()
lapack_opt_info:
    libraries = ['mkl_lapack95_lp64', 'mkl_blas95_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'libifportmd', 'mkl_lapack95_lp64', 'mkl_blas95_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'libifportmd']
    library_dirs = ['C:/Program Files (x86)/Intel/Composer XE/mkl/lib/intel64']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['C:/Program Files (x86)/Intel/Composer XE/mkl/include']
blas_opt_info:
    libraries = ['mkl_lapack95_lp64', 'mkl_blas95_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'libifportmd']
    library_dirs = ['C:/Program Files (x86)/Intel/Composer XE/mkl/lib/intel64']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['C:/Program Files (x86)/Intel/Composer XE/mkl/include']
openblas_info:
  NOT AVAILABLE
lapack_mkl_info:
    libraries = ['mkl_lapack95_lp64', 'mkl_blas95_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'libifportmd', 'mkl_lapack95_lp64', 'mkl_blas95_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'libifportmd']
    library_dirs = ['C:/Program Files (x86)/Intel/Composer XE/mkl/lib/intel64']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['C:/Program Files (x86)/Intel/Composer XE/mkl/include']
blas_mkl_info:
    libraries = ['mkl_lapack95_lp64', 'mkl_blas95_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'libifportmd']
    library_dirs = ['C:/Program Files (x86)/Intel/Composer XE/mkl/lib/intel64']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['C:/Program Files (x86)/Intel/Composer XE/mkl/include']
mkl_info:
    libraries = ['mkl_lapack95_lp64', 'mkl_blas95_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'libifportmd']
    library_dirs = ['C:/Program Files (x86)/Intel/Composer XE/mkl/lib/intel64']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['C:/Program Files (x86)/Intel/Composer XE/mkl/include']

computer 2 numpy config

>>> np.__config__.show()
lapack_info:
  NOT AVAILABLE
lapack_opt_info:
  NOT AVAILABLE
blas_info:
    libraries = ['blas']
    library_dirs = ['/usr/lib']
    language = f77
atlas_threads_info:
  NOT AVAILABLE
atlas_blas_info:
  NOT AVAILABLE
lapack_src_info:
  NOT AVAILABLE
openblas_info:
  NOT AVAILABLE
atlas_blas_threads_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE
blas_opt_info:
    libraries = ['blas']
    library_dirs = ['/usr/lib']
    language = f77
    define_macros = [('NO_ATLAS_INFO', 1)]
atlas_info:
  NOT AVAILABLE
lapack_mkl_info:
  NOT AVAILABLE
mkl_info:
  NOT AVAILABLE
asked Oct 28 '14 by Ofer Helman


2 Answers

{built-in method dot} is the np.dot function, which is a NumPy wrapper around the CBLAS routines for matrix-matrix, matrix-vector and vector-vector multiplication. Your Windows machines use the heavily tuned Intel MKL version of CBLAS. The Linux machine is using the slow, old reference implementation.

If you install ATLAS or OpenBLAS (both available through Linux package managers) or, in fact, Intel MKL, you're likely to see massive speedups. Try sudo apt-get install libatlas-dev, check the NumPy config again to see if it picked up ATLAS, and measure again.
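For example, after installing one of these libraries, a quick sanity check along the following lines (a rough sketch; the 2000x2000 size is arbitrary and timings will vary by machine) shows which BLAS NumPy is built against and roughly how fast its dot is:

import time
import numpy as np

# Which BLAS/LAPACK did NumPy pick up? After installing ATLAS, OpenBLAS or MKL
# it should show up here instead of the plain reference 'blas'.
np.__config__.show()

# Rough timing of a dense matrix product; a tuned BLAS is typically an order
# of magnitude (or more) faster than the reference implementation here.
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
t0 = time.time()
np.dot(a, b)
print('np.dot took %.2f seconds' % (time.time() - t0))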

Once you've decided on the right CBLAS library, you may want to recompile scikit-learn. Most of it just uses NumPy for its linear algebra needs, but some algorithms (notably k-means) use CBLAS directly.

The OS has little to do with this.

answered Sep 27 '22 by Fred Foo


Notice the {built-in method dot} difference from 0.035s/call to 16.058s/call, 450 times slower!!

Clock speeds and cache hit ratio are two big factors to consider. The Xeon E5-2670 has a lot more cache than the Core i7-3770. And the i7-3770 has a higher peak clock speed with turbo mode. While your Xeon has a big cache in hardware, on EC2 you might be effectively sharing that cache with other customers.

Is there a way I can further debug this performance issue?

Well, you have different measurements (outputs) and multiple differences in the inputs (OS and hardware). Given the differing inputs, these different outputs are to be expected.

CPU performance counters will better isolate the effects of your algorithm's performance on the different systems. The Xeons have richer performance counters, but they should all have CPU_CLK_UNHALTED and LLC_MISSES. These work by mapping events such as executed cycles or cache misses back to the instruction pointer, so you can see which parts of the code are CPU bound and which are cache bound. Since the clock speeds and cache sizes differ among your targets, you might find that one is cache bound and the other is CPU bound.

Linux has a tool called perf (sometimes perf_events). See also http://www.brendangregg.com/perf.html

On Linux and Windows you can also use Intel VTune.
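Even without performance counters, a crude first check is to sweep the matrix size and watch the throughput np.dot achieves (a rough sketch; the sizes and the cache-size interpretation are assumptions about your hardware):

import time
import numpy as np

# Sweep the working-set size and report achieved GFLOP/s for a dense product.
# A naive/reference BLAS tends to slow down sharply once the matrices spill
# out of the last-level cache, while a cache-blocked, tuned BLAS stays much
# closer to peak throughput across sizes.
for n in (512, 1024, 2048, 4096):
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    t0 = time.time()
    np.dot(a, b)
    elapsed = max(time.time() - t0, 1e-9)   # guard against coarse timer resolution
    flops = 2.0 * n ** 3                    # multiply-adds in an n x n product
    print('n=%4d  %6.2f s  %6.2f GFLOP/s' % (n, elapsed, flops / elapsed / 1e9))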

answered Sep 27 '22 by Brian Cain