For some time now, mainstream compute hardware has sported SIMD instructions (MMX, SSE, 3DNow!, etc.), and more recently we're seeing AMD bring 480-stream GPUs onto the same die as the CPU.
Functional languages like F#, Scala and Clojure are also gaining traction, with one common attraction being how much easier concurrent programming is in these languages.
Are there any plans for the Java VM or .NET CLR to start providing access to parallel compute hardware resources, so that functional languages can mature to leverage the hardware?
It seems as though the VMs are currently the bottleneck for high-performance computing, with SIMD and GPU access delegated to third-party libraries and post-compilers (tidepowered.net, OpenTK, ScalaCL, Brahma, etc.).
Does anyone know of any plans / roadmaps on the part of Microsoft / Oracle / the open-source community to bring their VMs up to date with the new hardware and programming paradigms?
Is there a good reason why vendors are being so sluggish on the uptake?
Edit:
To address the feedback so far: it's true that GPU programming is complex and, done wrong, can worsen performance. But it's well known that parallelism is the future of computing - so the crux of this question is that it doesn't help for hardware and programming languages to embrace a parallel paradigm if the runtimes sitting between the applications and the hardware don't support it. Why aren't we seeing this on VM vendors' radars / roadmaps?
GPUs use the SIMD paradigm: the same portion of code is executed in parallel, applied to many elements of a data set. CPUs also offer SIMD, exposing data parallelism at the instruction level.

SIMD stands for Single Instruction, Multiple Data, as opposed to SISD (Single Instruction, Single Data), which corresponds to the traditional von Neumann architecture. Where conventional scalar operations use one instruction per data element, a SIMD operation performs a single operation across multiple data elements simultaneously. In other words, SIMD exploits data-level parallelism: the operations required to transform a set of vector elements can be applied to all elements of the vector at the same time.
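To make the distinction concrete, here is a minimal sketch in Scala of the scalar (SISD) form of an array addition; the trailing comment describes what the equivalent SIMD form would do at the instruction level (the method name is just illustrative):

    // Scalar (SISD) array addition: one add instruction per element, per iteration.
    def addScalar(a: Array[Float], b: Array[Float]): Array[Float] = {
      val out = new Array[Float](a.length)
      var i = 0
      while (i < a.length) {
        out(i) = a(i) + b(i)  // single instruction, single data element
        i += 1
      }
      out
    }

    // A SIMD version (e.g. SSE's addps) would apply the same add to 4 packed
    // floats per instruction, cutting the trip count to a.length / 4. The JVM
    // gives you no way to ask for this; at best the JIT auto-vectorizes the loop.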
Do you mean JavaCL and ScalaCL? They both try to bring OpenCL/GPU programming to the JVM.
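For a flavour of what they aim for, here is a rough sketch in the style of ScalaCL, which uses a compiler plugin to turn ordinary collection operations into OpenCL kernels. The identifiers (Context.best, the .cl conversion) are recalled from ScalaCL's examples and should be read as illustrative rather than as the exact API:

    import scalacl._  // assumes the ScalaCL library and compiler plugin are installed

    // Names below are illustrative, loosely following ScalaCL's published examples:
    implicit val context = Context.best            // pick the best OpenCL device
    val data    = (0 until 1000000).toArray.cl     // copy the data to device memory
    val doubled = data.map(_ * 2)                  // runs as a generated OpenCL kernel
    println(doubled.toArray.take(5).mkString(", ")) // copy the results back to the JVM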
The Mono runtime already includes support for some SIMD instructions - see http://docs.go-mono.com/index.aspx?link=N%3aMono.Simd
For Microsoft's implementation of the CLR, you can use XNA, which lets you run shaders and the like, or the Accelerator library (https://research.microsoft.com/en-us/projects/accelerator/), which provides an interface for running GPGPU calculations.