I am interested in using F# for numerical computation. How can I access the GPU using NVIDIA's CUDA standard under F#?
(Recommended) You can use free, open-source, or proprietary compilers that will generate CUDA (either source or binary) from your C# code.
CUDA (Compute Unified Device Architecture) is a parallel computing platform and API (Application Programming Interface) model developed by NVIDIA that runs computations on the graphics processing unit (GPU). It allows computations to be performed in parallel, which can provide substantial speedups.
Using the CUDA Toolkit you can accelerate your C or C++ applications by updating the computationally intensive portions of your code to run on GPUs. To accelerate your applications, you can call functions from drop-in libraries as well as develop custom applications using languages including C, C++, Fortran and Python.
The CUDA programming model lets software scale transparently as the number of processor cores in GPUs grows. You program applications using CUDA language abstractions: a problem is divided into small sub-problems, each of which is solved independently by a CUDA block.
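To make the decomposition idea concrete in F# terms, here is a CPU-side analogy (no GPU involved): `Array.Parallel` splits the index range into independent chunks, much as CUDA splits a problem across blocks of threads that each compute their own elements.

```fsharp
// CPU-side analogy of the CUDA decomposition model: every index is
// computed independently, just as every CUDA thread computes its own
// element of the output.
let squareAll (input: float32[]) : float32[] =
    Array.Parallel.init input.Length (fun i -> input.[i] * input.[i])
```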
I agree with jasper that the easiest option currently is to use Accelerator from Microsoft Research. I wrote a series of articles about using it from F#: a simple and direct introduction, a Game of Life example, a more advanced example using quotations, and an example of using advanced quotation features. Satnam Singh's blog is also a great resource with some F# demos.
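To give a flavour of the Accelerator API from F#, here is a minimal sketch of element-wise vector addition. It assumes Accelerator v2 with the Microsoft.ParallelArrays assembly referenced; DX9Target evaluates the expression on the GPU, and you could swap in X64MulticoreTarget to run the same code on the CPU.

```fsharp
open Microsoft.ParallelArrays

// Element-wise vector addition via Accelerator. FloatParallelArray
// builds a data-parallel expression; ToArray1D evaluates it on the
// chosen target and copies the result back to a .NET array.
let addVectors (xs: float32[]) (ys: float32[]) : float32[] =
    use target = new DX9Target()
    let pxs = FloatParallelArray(xs)
    let pys = FloatParallelArray(ys)
    target.ToArray1D(ParallelArrays.Add(pxs, pys))
```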
One problem with current graphics cards is that they do not support integers (as a result, Accelerator supports them only when running on the optimized x64 parallel engine). Also, current graphics cards don't implement floating-point numbers according to the IEEE standards: they try to be faster by doing a bit of "guessing", which doesn't matter when calculating triangle positions, but could be an issue if you're dealing with financial calculations. (Accelerator can use various targets, so you're safe if you're using the x64 parallel engine.)
As far as I know, DirectCompute will require a precise implementation of floating-point arithmetic as well as direct support for integers, so it may be a good choice in the future (or if Accelerator eventually starts using DirectCompute as its engine).
Probably only hardcore GPU geeks like me have heard about it: Tidepowerd (link now dead) made GPGPU possible for CIL-based languages (such as F#, C#, VB.NET, whatever). On the other hand, you could do the same for the F# language alone with a quotation-to-GPU runtime/API (I'm looking forward to seeing someone implement that). This is something Agent Smith has blogged about, and it is also mentioned in the Expert F# 1.0 book (Language Oriented Programming chapter), AFAIK.
Agent Smith (ok, sorry for that) is talking about NVIDIA Cg. But you can do the same using HLSL DirectCompute shaders, OpenCL C99, PTX (NVIDIA's low-level IL), CAL-IL (AMD/ATI's low-level IL), and so on.
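To make the quotation-to-GPU idea above concrete: F# quotations hand you the program as a data structure, which such a runtime would walk and translate into Cg, HLSL, or PTX. Here is a minimal sketch of the capture-and-inspect side only; the translation is the hard part, and the `runOnGpu` entry point is purely hypothetical.

```fsharp
open Microsoft.FSharp.Quotations

// A kernel written as an ordinary F# quotation. A quotation-to-GPU
// runtime would traverse this Expr tree and emit GPU code from it.
let kernel : Expr<float32 -> float32> = <@ fun x -> x * x + 1.0f @>

// The runtime can take the tree apart with active patterns:
match kernel with
| Patterns.Lambda (arg, _body) -> printfn "kernel parameter: %s" arg.Name
| _ -> ()

// Hypothetical runtime entry point (not implemented anywhere):
// let runOnGpu (k: Expr<float32 -> float32>) (data: float32[]) : float32[] = ...
```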
As an alternative, you could consider using DirectCompute. The three big GPU APIs (CUDA, OpenCL, and DirectCompute) are all very similar. DirectCompute can easily be accessed from F# via SlimDX, a .NET wrapper for DirectX.
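Here is a rough sketch of what the SlimDX route looks like from F#. It compiles a trivial HLSL compute shader and dispatches it; creating and binding the structured buffer (and reading results back through a staging buffer) is only noted in a comment to keep the sketch short, and exact SlimDX overloads may differ between releases.

```fsharp
open SlimDX.Direct3D11
open SlimDX.D3DCompiler

// A trivial compute shader: each thread doubles one element of a
// read-write structured buffer bound at register u0.
let hlsl = """
RWStructuredBuffer<float> data : register(u0);

[numthreads(64, 1, 1)]
void main(uint3 id : SV_DispatchThreadID)
{
    data[id.x] *= 2.0f;
}
"""

let device = new Device(DriverType.Hardware, DeviceCreationFlags.None)
let bytecode = ShaderBytecode.Compile(hlsl, "main", "cs_5_0", ShaderFlags.None, EffectFlags.None)
let shader = new ComputeShader(device, bytecode)

let context = device.ImmediateContext
context.ComputeShader.Set(shader)
// In real code: create a Buffer with an UnorderedAccessView, bind it with
// context.ComputeShader.SetUnorderedAccessView(uav, 0), then read the
// results back via a staging buffer after the dispatch.
context.Dispatch(1024 / 64, 1, 1)
```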
Accelerator from MS allows you to leverage the GPU, so you can do this kind of thing, though you can't use CUDA.
You might look into CUDA.NET. It would let you use CUDA straight from F#. It can be found here: http://www.hoopoe-cloud.com/Solutions/CUDA.NET/Default.aspx
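The shape of a CUDA.NET program from F# is roughly the following. CUDA.NET mirrors the CUDA driver API, so you compile the kernel separately with nvcc into a .cubin and drive it from managed code; the method names below are from memory of the GASS.CUDA API and should be checked against the samples shipped with the library.

```fsharp
open GASS.CUDA

// Rough outline only: mirrors the CUDA driver API (cuModuleLoad,
// cuLaunchGrid, ...). Check the CUDA.NET samples for exact signatures.
let cuda = new CUDA(0, true)                 // initialise on device 0
cuda.LoadModule("kernels.cubin")             // kernel compiled with nvcc
let fn = cuda.GetModuleFunction("doubleAll")

let data = Array.init 1024 float32
let dptr = cuda.CopyHostToDevice<float32>(data)  // upload input

cuda.SetFunctionBlockShape(fn, 256, 1, 1)
cuda.SetParameter(fn, 0, uint32 dptr.Pointer)    // kernel argument: device pointer
cuda.SetParameterSize(fn, 4u)
cuda.Launch(fn, 4, 1)                            // 4 blocks of 256 threads

cuda.CopyDeviceToHost<float32>(dptr, data)       // download result
cuda.Free(dptr)
```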
The other usual alternative for using CUDA from managed code is to encapsulate the CUDA functionality in a native DLL and then either P/Invoke it or write a C++/CLI wrapper around it, which you then use from, e.g., your F# program.
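For the P/Invoke route the managed side is only a few lines. The DLL name and export below are made up for illustration; the native side would be a .cu file compiled by nvcc into a DLL exporting a plain C function that copies the data to the GPU, launches the kernel, and copies the result back.

```fsharp
open System.Runtime.InteropServices

module NativeCuda =
    // Hypothetical native DLL exporting:
    //   extern "C" __declspec(dllexport) void square_array(float* data, int n);
    [<DllImport("MyCudaKernels.dll", CallingConvention = CallingConvention.Cdecl)>]
    extern void square_array(float32[] data, int n)

// The float32[] is pinned and marshalled as a float*, so the native
// side can read and overwrite it in place.
let squareOnGpu (data: float32[]) =
    NativeCuda.square_array(data, data.Length)
    data
```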
For the sake of documentation (this is an old question with answers that do not cover the current technology landscape): if you had to write GPU/CUDA apps today, another option to consider is aleagpu.
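For a taste of what aleagpu (Alea GPU) code looks like in F#, here is a sketch of element-wise vector addition using the library's GPU parallel-for. It assumes the Alea v3 package with its Alea.Parallel namespace; the delegate is compiled to GPU code from its IL, and exact APIs differ between Alea versions, so treat this as an approximation.

```fsharp
open Alea
open Alea.Parallel

// Element-wise vector addition with Alea GPU's parallel-for. Alea's
// automatic memory management moves xs, ys and result between the
// host and the device around the kernel launch.
let addOnGpu (xs: float32[]) (ys: float32[]) : float32[] =
    let gpu = Gpu.Default
    let result = Array.zeroCreate xs.Length
    gpu.For(0, xs.Length, (fun i -> result.[i] <- xs.[i] + ys.[i]))
    result
```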