 

Is GPU and SIMD likely to be implemented in .NET / Java VMs?

For some time now, mainstream compute hardware has sported SIMD instructions (MMX, SSE, 3DNow!, etc.), and more recently we're seeing AMD bring 480-stream GPUs onto the same die as the CPU.
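For concreteness, the kind of code these SIMD instructions accelerate is a plain element-wise array loop. A minimal Java sketch (class and method names are illustrative; HotSpot's JIT may auto-vectorize such loops into SSE code, but the programmer gets no guarantee and no direct control, which is part of the complaint here):

```java
import java.util.Arrays;

// Sketch: an element-wise loop of the kind a JIT compiler
// may auto-vectorize into SIMD (e.g. SSE) instructions.
public class SimdStyleLoop {
    // Apply the same operation to every element: data-level parallelism.
    static void addArrays(float[] a, float[] b, float[] out) {
        for (int i = 0; i < out.length; i++) {
            out[i] = a[i] + b[i];
        }
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f};
        float[] b = {10f, 20f, 30f, 40f};
        float[] out = new float[4];
        addArrays(a, b, out);
        System.out.println(Arrays.toString(out)); // [11.0, 22.0, 33.0, 44.0]
    }
}
```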

Functional languages like F#, Scala and Clojure are also gaining traction, with one common attraction being how much easier concurrent programming is in these languages.

Are there any plans for the Java VM or .NET CLR to start providing access to parallel compute hardware resources, so that functional languages can mature to leverage the hardware?
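For context, the JVM does already expose CPU-side task parallelism through java.util.concurrent (the fork/join framework shipped with Java 7); what it lacks is SIMD and GPU access. A minimal sketch of a multicore array sum (class name and threshold are illustrative choices, not a standard recipe):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sketch: multicore task parallelism the JVM already provides --
// this uses all CPU cores, but no SIMD lanes and no GPU.
public class ParallelSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1 << 10; // sequential cutoff
    private final long[] data;
    private final int lo, hi;

    ParallelSum(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {          // small enough: sum sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;           // otherwise split in half
        ParallelSum left = new ParallelSum(data, lo, mid);
        ParallelSum right = new ParallelSum(data, mid, hi);
        left.fork();                         // left half runs on another worker
        return right.compute() + left.join();
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = new ForkJoinPool().invoke(new ParallelSum(data, 0, data.length));
        System.out.println(sum); // 4999950000
    }
}
```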

It seems as though the VMs are currently the bottleneck for high-performance computing, with SIMD and GPU access delegated to third-party libraries and post-compilers (tidepowered.net, OpenTK, ScalaCL, Brahma, etc.).

Does anyone know of any plans / roadmaps on the part of Microsoft, Oracle, or the open-source community to bring their VMs up to date with the new hardware and programming paradigms?

Is there a good reason why vendors are being so sluggish on the uptake?

Edit:

To address the feedback so far: it's true that GPU programming is complex and, done wrong, can worsen performance. But it's widely accepted that parallelism is the future of computing, so the crux of this question is that it doesn't help for hardware and programming languages to embrace a parallel paradigm if the runtimes sitting between the applications and the hardware don't support it. Why isn't this on the VM vendors' radars / roadmaps?

Mark asked Sep 06 '11

People also ask

Do GPUs use SIMD?

GPUs use the SIMD paradigm: the same portion of code is executed in parallel, applied to different elements of a data set. However, CPUs also support SIMD instructions, in addition to providing instruction-level parallelism.

Why is GPU called SIMD?

SIMD stands for single instruction, multiple data, as opposed to SISD, i.e. single instruction, single data corresponding to the traditional von Neumann architecture. It is a parallel processing technique exploiting data-level parallelism by performing a single operation across multiple data elements simultaneously.

What is SIMD programming?

SIMD is short for Single Instruction/Multiple Data, and the term SIMD operations refers to a computing method that processes multiple data elements with a single instruction. In contrast, the conventional sequential approach of using one instruction to process each individual data element is called scalar operations.

What is SIMD optimization?

SIMD processing exploits data-level parallelism. Data-level parallelism means that the operations required to transform a set of vector elements can be performed on all elements of the vector at the same time. That is, a single instruction can be applied to multiple data elements in parallel.
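The "single instruction applied to multiple data elements" idea can be illustrated without any special hardware using a "SIMD within a register" (SWAR) trick: pack four 8-bit lanes into one 32-bit int and add them all with a single integer operation, masking so carries cannot spill across lane boundaries. This is a hedged illustration of the concept only (the class and method names are made up), not how real SSE or GPU SIMD is programmed:

```java
// Sketch: "SIMD within a register" (SWAR) -- one 32-bit add processes
// four packed 8-bit lanes at once, demonstrating data-level parallelism.
public class SwarAdd {
    // Add corresponding 8-bit lanes of a and b with per-lane wraparound.
    static int addBytes(int a, int b) {
        int mask = 0x7F7F7F7F;
        // Add the low 7 bits of each lane (no cross-lane carries possible),
        // then restore each lane's top bit with a carry-free XOR.
        return ((a & mask) + (b & mask)) ^ ((a ^ b) & ~mask);
    }

    public static void main(String[] args) {
        // Lanes 01,02,03,04 + 10,20,30,40 -> 11,22,33,44 in one operation.
        System.out.printf("%08x%n", addBytes(0x01020304, 0x10203040));
    }
}
```

Note how lane overflow wraps within the lane: adding 0x01 to a lane holding 0xFF yields 0x00 there without disturbing its neighbors, exactly the independence between elements that SIMD hardware enforces.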


2 Answers

Do you mean JavaCL and ScalaCL? They both try to bring CUDA/GPU programming to the JVM.

DarrenWang answered Oct 09 '22


The Mono runtime already includes support for some SIMD instructions; see http://docs.go-mono.com/index.aspx?link=N%3aMono.Simd

For Microsoft's implementation of the CLR, you can use XNA, which allows you to run shaders, or the Accelerator library (https://research.microsoft.com/en-us/projects/accelerator/), which provides an interface for running GPGPU calculations.

John Palmer answered Oct 09 '22