
Can I utilise GPU cores from C# WITHOUT changing my code?

Tags:

c#

gpu

I realise there are several questions on this subject, but I believe my angle is unique.

I have a mature C# app that I use for scientific number crunching. In the code I start 24 C# threads on my 24-hyperthread workstation (two CPUs, each with 6 cores / 12 threads). I run Windows 7 and it handles this brilliantly: I am able to use my full processing power to get my work done.
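In outline, the pattern looks like this (a simplified sketch; the `Crunch` method is just a stand-in for the real number crunching):

```csharp
using System;
using System.Linq;
using System.Threading;

class CpuThreadDemo
{
    static void Main()
    {
        // One explicit thread per logical processor
        // (24 on this box: 2 CPUs x 6 cores x 2 hyperthreads).
        int threadCount = Environment.ProcessorCount;
        var threads = new Thread[threadCount];
        var results = new double[threadCount];

        for (int i = 0; i < threadCount; i++)
        {
            int id = i; // capture the loop variable for the closure
            threads[i] = new Thread(() => results[id] = Crunch(id));
            threads[i].Start();
        }

        foreach (var t in threads) t.Join();
        Console.WriteLine(results.Sum());
    }

    // Stand-in for the real number crunching.
    static double Crunch(int seed)
    {
        double acc = 0;
        for (int n = 1; n <= 10000000; n++)
            acc += Math.Sin(seed + n) / n;
        return acc;
    }
}
```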

I see that some GPUs advertise "448 cores". If I bought one of these, would my C# app be able to utilise them? I mean without rewriting my code in any major way. Would the threads I start be taken up by the GPU cores instead of the CPU hyperthreads, as is the case now?

FOLLOW-ON QUESTION

I appreciate the answers I am getting, even the negative ones.

Is there any other hardware I should be thinking about (not too expensive) that would give me a large number of cores but would be able to run my C# code without a rewrite?

asked Jan 24 '12 by ManInMoon


People also ask

Can C run on GPU?

Using the CUDA Toolkit you can accelerate your C or C++ applications by updating the computationally intensive portions of your code to run on GPUs. To accelerate your applications, you can call functions from drop-in libraries as well as develop custom applications using languages including C, C++, Fortran and Python.

Is CUDA C or C++?

CUDA C is essentially C/C++ with a few extensions that allow one to execute functions on the GPU using many threads in parallel.

Is GPU suitable for parallel processing?

GPUs render images more quickly than CPUs because of their parallel processing architecture, which allows them to perform multiple calculations across streams of data simultaneously.

What are GPU cores used for?

The GPU is a processor that is made up of many smaller and more specialized cores. By working together, the cores deliver massive performance when a processing task can be divided up and processed across many cores.


2 Answers

You'd really need to rewrite your code to make use of a GPU. These links might be useful:

CUDA .NET - CUDA functionality through .NET apps.

CUDA Sharp - C# wrapper for nVidia Toolkit

These are based on the nVidia CUDA system, so you'd need an nVidia card for this, of course.

answered Sep 19 '22 by Davos555


Heh... no. No way, no how. Those "cores" aren't the same thing as CPU cores. To take advantage of GPU computing, you need to write your computations in a very specific, data-parallel way. Try OpenCL, maybe. But the answer to your question is no.
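To illustrate what "a very specific way" means: GPU code is structured as a small kernel applied independently to every element of a large array, not as two dozen long-running threads. Here's a CPU-only C# sketch of that shape, with `Parallel.For` standing in for the kernel launch (no GPU is involved; the arithmetic is just an example):

```csharp
using System;
using System.Threading.Tasks;

class KernelStyleDemo
{
    static void Main()
    {
        const int n = 1000000;
        var input = new float[n];
        var output = new float[n];
        for (int i = 0; i < n; i++) input[i] = i;

        // GPU-style structure: one tiny, independent "kernel" per element.
        // On a real GPU the loop body would be the kernel, and the runtime
        // would schedule it across hundreds of cores.
        Parallel.For(0, n, i =>
        {
            output[i] = (float)Math.Sqrt(input[i]) * 2f;
        });

        Console.WriteLine(output[n - 1]);
    }
}
```

If your 24 threads each run long, branching, stateful computations, they don't decompose into this form without real restructuring, which is why there's no drop-in path to those 448 cores.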


As for your edit: the only upgrade that needs few changes (depending on how you've currently structured the code) is a bigger CPU setup. If you're not shipping general consumer software, you could probably run 48 individual non-HT cores. Maybe the CPU isn't even your bottleneck, though; increasing your RAM can also make everything generally faster, up to a point.
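If the 24 is currently hard-coded, one small change lets the same code saturate a 48-core box: derive the degree of parallelism from the machine. A minimal sketch, assuming your work splits into independent chunks:

```csharp
using System;
using System.Threading.Tasks;

class ScaleWithCores
{
    static void Main()
    {
        // Derive parallelism from the machine instead of hard-coding 24,
        // so the same binary uses every core on a bigger box.
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        Parallel.For(0, Environment.ProcessorCount, options, i =>
        {
            Console.WriteLine($"worker {i} running");
        });
    }
}
```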

answered Sep 18 '22 by Ry-