 

How to use GPU for mathematics [closed]

Tags:

c#

math

gpu

I am looking at utilising the GPU for crunching some equations but cannot figure out how I can access it from C#. I know that the XNA and DirectX frameworks allow you to use shaders in order to access the GPU, but how would I go about accessing it without these frameworks?

asked May 05 '11 by Neil Knight

People also ask

How does a GPU do matrix multiplication?

In the case of matrix multiplication, the computations parallelize well: a GPU runs a very large number of threads at once, organized into blocks, so the many independent multiply-add operations can execute simultaneously, resulting in quick computation.
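To make the parallelization concrete, here is a minimal, unoptimized sketch of a CUDA kernel in which each GPU thread computes exactly one element of the product C = A × B (the kernel name and the row-major layout are illustrative choices, not from the original answer; real libraries use tiled, shared-memory versions):

```cuda
// Each thread computes one element of C = A * B for n x n row-major matrices.
__global__ void matmul(const float* A, const float* B, float* C, int n)
{
    // Map this thread's block/thread indices to one (row, col) output cell.
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}
```

With a 16×16 thread block, launching `(n/16 + 1)` blocks in each grid dimension covers the whole matrix, so all n² dot products run concurrently rather than in a nested CPU loop.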

Can PHP use GPU?

PHP alone does not have the ability to leverage the GPU.

What else can I use my GPU for?

GPUs can be used for video editing, 3D graphics rendering, and much more. With a high processing throughput, GPUs can process more data than their Central Processing Unit (CPU) counterparts, making them uniquely suited for highly demanding tasks such as machine learning and cryptocurrency mining.

Can you run C++ on GPU?

Using the CUDA Toolkit you can accelerate your C or C++ applications by updating the computationally intensive portions of your code to run on GPUs. To accelerate your applications, you can call functions from drop-in libraries as well as develop custom applications using languages including C, C++, Fortran and Python.
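As a sketch of what "updating the computationally intensive portions" looks like in practice, here is the classic SAXPY example: the loop `y[i] = a * x[i] + y[i]` rewritten as a CUDA kernel, with each iteration handled by one GPU thread (a standard introductory pattern, assuming device pointers `d_x` and `d_y` have already been allocated and filled):

```cuda
#include <cuda_runtime.h>

// SAXPY: y = a * x + y. Each GPU thread handles one array element,
// replacing one iteration of the equivalent CPU for-loop.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard: the grid may overshoot n
        y[i] = a * x[i] + y[i];
}

// Host-side launch: enough 256-thread blocks to cover all n elements.
// saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
```

The `<<<blocks, threads>>>` launch syntax is CUDA-specific; the same loop body is otherwise ordinary C.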


2 Answers

I haven't done it from C#, but basically you use the CUDA SDK and CUDA toolkit (assuming you're using an nVidia card here, of course) to pull it off.

nVidia has ported (or written?) a BLAS implementation for use on CUDA-capable devices, and they've provided plenty of number-crunching examples, although you'll have to figure out how to call it from C#. My bet is you're going to have to write some stuff in unmanaged C or C++ and link against it.
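The BLAS implementation mentioned above is cuBLAS, and the "unmanaged C or C++" glue might look something like the sketch below: a C-callable wrapper around the cuBLAS single-precision matrix multiply that a C# program could then bind to. The function name `gpu_sgemm` and the library packaging are hypothetical; error handling is minimal for brevity:

```cuda
#include <cublas_v2.h>
#include <cuda_runtime.h>

// Hypothetical exported wrapper: C = A * B for n x n matrices in
// column-major order (the BLAS convention). Returns 0 on success.
extern "C" int gpu_sgemm(const float* A, const float* B, float* C, int n)
{
    size_t bytes = (size_t)n * n * sizeof(float);
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);                 // allocate device buffers
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, A, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B, bytes, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;  // C = 1*A*B + 0*C
    cublasStatus_t status = cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                                        n, n, n, &alpha, dA, n, dB, n,
                                        &beta, dC, n);
    cublasDestroy(handle);

    cudaMemcpy(C, dC, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return status == CUBLAS_STATUS_SUCCESS ? 0 : 1;
}
```

Compiled into a native DLL, this could be reached from C# with a `[DllImport]` declaration such as `static extern int gpu_sgemm(float[] a, float[] b, float[] c, int n);`, which is exactly the unmanaged-interop route the answer describes.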

If you're not hung up on using C#, take a look at Theano. It might be a bit of overkill for your needs, since it's a framework for doing machine learning on GPUs from Python, but... it works, and works very well.

answered Oct 14 '22 by Brian Vandenberg


If your GPU is NVidia, you can use CUDA.

There is an example here that explains the whole chain, including some C/C++ code: CUDA integration with C#

And there is a library called CUDA.NET available here: CUDA.NET

If your GPU is ATI, then there is ATI Stream. .NET support for it is less clear to me. Maybe the Open Toolkit Library covers it, through its OpenCL support.

And finally, there is a Microsoft Research project called "Accelerator" which has a managed wrapper that should work on any hardware (provided it supports DirectX 9).

answered Oct 14 '22 by Simon Mourier