 

Would a better graphics card or more cores make Mathematica faster?

In general, can Mathematica automatically (i.e. without writing code specifically for this) exploit GPU hardware and/or parallelize built-in operations across multiple cores?

For example, for drawing a single very CPU-intensive plot or solving a very CPU-intensive equation, would upgrading the graphics hardware result in speed-up? Would upgrading to a CPU with more cores speed things up? (I realize that more cores mean I could solve more equations in parallel but I'm curious about the single-equation case)

Just trying to get a handle on how Mathematica exploits hardware.

Asked Dec 26 '11 by nicolaskruchten


1 Answer

I wouldn't say Mathematica does GPU or parallel-CPU computing automatically, at least not in general. To compute with parallel kernels you have to launch them yourself, and to use the GPU you have to load CUDALink or OpenCLLink and call their specific functions; only then can you exploit the potential of multiple CPU cores and/or the GPU.
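As a minimal sketch of the CPU side, the extra kernels are launched explicitly (by default one per available core; the exact count depends on your machine and license):

```mathematica
Needs["Parallel`"]   (* parallel tools are normally auto-loaded; shown for completeness *)

LaunchKernels[]   (* start the parallel kernels *)
$KernelCount      (* how many parallel kernels are now running *)
```

Only after this do functions such as Parallelize or ParallelMap have kernels to distribute work to.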

For example, I don't have a very powerful graphics card (an NVIDIA GeForce 9400 GT), but we can still test how CUDALink works. First I have to load CUDALink:

Needs["CUDALink`"] 
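Before timing anything it is worth a quick sanity check that CUDALink can actually see a usable card and driver (the output depends entirely on your hardware):

```mathematica
CUDAQ[]            (* True if CUDALink can use the GPU on this machine *)
CUDAInformation[]  (* details about the detected CUDA device(s) *)
```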

I am going to test multiplication of large matrices. I choose a random 5000 x 5000 matrix of real numbers in the range (-1, 1):

M = RandomReal[{-1,1}, {5000, 5000}];

Now we can check the computing time without GPU support:

In[4]:= AbsoluteTiming[ Dot[M, M]; ]

Out[4]= {26.3780000, Null}

and with GPU support:

In[5]:= AbsoluteTiming[ CUDADot[M, M]; ]

Out[5]= {6.6090000, Null}

In this case we obtained a speed-up of roughly a factor of 4 by using CUDADot instead of Dot.
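Note that part of the CUDADot time is spent copying M from main memory to the GPU. When the same matrix is reused, one can (as a sketch) load it into GPU memory once and free it when done:

```mathematica
m = CUDAMemoryLoad[M];                 (* copy M to GPU memory once *)
res = CUDAMemoryGet[CUDADot[m, m]];    (* multiply on the GPU, copy the result back *)
CUDAMemoryUnload[m];                   (* free the GPU copy *)
```

With the transfer factored out, the measured time reflects the multiplication itself more closely.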

Edit

To add an example of parallel CPU acceleration (on a dual-core machine), I look for all prime numbers in the range [2^300, 2^300 + 10^6]. First without parallelization:

In[139]:= AbsoluteTiming[ Select[ Range[ 2^300, 2^300 + 10^6], PrimeQ ]; ]

Out[139]= {121.0860000, Null}

while using Parallelize[expr], which evaluates expr with automatic parallelization:

In[141]:= AbsoluteTiming[ Parallelize[ Select[ Range[ 2^300, 2^300 + 10^6], PrimeQ ] ]; ]

Out[141]= {63.8650000, Null}

As one could expect on two cores, we get an almost twofold speed-up.
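Parallelize decides on its own how to split the work; the same computation can also be distributed explicitly, for instance with ParallelMap (a sketch; the speed-up again depends on the number of launched kernels):

```mathematica
(* Explicit alternative: distribute the PrimeQ tests over the kernels,
   then keep the candidates whose test came back True *)
candidates = Range[2^300, 2^300 + 10^6];
primes = Pick[candidates, ParallelMap[PrimeQ, candidates]];
```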

Answered Oct 02 '22 by Artes