With .NET 4.0 coming up, and the new parallel extensions, I wondered whether the CLR will be able to optimize some calculations and push them to the GPU, or whether any library exists that can help with the task.
I'm no GPU programming expert at all, so forgive me if this is a silly question. Maybe the CLR doesn't support interfacing with the GPU's instruction set? Is it too primitive, or simply out of scope?
Thanks in advance.
[EDIT] Just to clarify: I know about CUDA and similar libraries, but I want to know whether there's a pure .NET solution, and if so, whether it can work behind the scenes for you or you have to write explicit code.
There is nothing built into .NET for that.
But I think this is what you're looking for (it can be used from .NET):
http://research.microsoft.com/en-us/projects/Accelerator/
also FYI: http://brahma.ananthonline.net/
The CLR only targets CPUs (Microsoft Research's Helios OS aims to support GPUs at the CIL level through heterogeneous execution).
So the only alternative for now is to use one of these libraries:

Microsoft Accelerator (http://research.microsoft.com/en-us/projects/Accelerator/)
Provides simplified GPU programming via a high-level data-parallel library (the project is now at v2.0). There's a good introductory article here: http://tomasp.net/articles/accelerator-intro.aspx — and a rough usage sketch after this list.

Brahma (http://brahma.ananthonline.net/)
Uses C# 3.0's LINQ syntax to specify streaming transformations of data.
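For illustration, here is a minimal sketch of what calling Accelerator from C# roughly looks like, based on the v2 samples that the article above walks through. The type and method names used (FloatParallelArray, ParallelArrays.Add, DX9Target, ToArray1D) are recalled from the v2 API and may not be exact; the point is that you build a data-parallel expression first and then explicitly evaluate it on a GPU target, so this is explicit coding rather than something the CLR does behind the scenes.

// Rough sketch of Accelerator v2 usage; names are from memory and may differ slightly.
using Microsoft.ParallelArrays;
using FPA = Microsoft.ParallelArrays.FloatParallelArray;

class AcceleratorSketch
{
    static void Main()
    {
        float[] xs = { 1f, 2f, 3f, 4f };
        float[] ys = { 10f, 20f, 30f, 40f };

        // Wrapping the arrays only builds a data-parallel expression;
        // nothing runs on the GPU yet.
        var a = new FPA(xs);
        var b = new FPA(ys);
        FPA sum = ParallelArrays.Add(a, b);

        // Evaluating against a target (here DirectX 9) is the explicit step that
        // compiles the expression to a GPU shader and executes it.
        var target = new DX9Target();
        float[] result = target.ToArray1D(sum);   // expected: { 11, 22, 33, 44 }
    }
}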