
Coding CUDA with C#?

Tags:

c#

cuda

People also ask

Can you use C with CUDA?

Using the CUDA Toolkit you can accelerate your C or C++ applications by updating the computationally intensive portions of your code to run on GPUs. To accelerate your applications, you can call functions from drop-in libraries as well as develop custom applications using languages including C, C++, Fortran and Python.

Does Nvidia use C?

NVIDIA was very secretive about its internal development until 2013, when it began supporting open-source software development. It has released several documents since then, and most of them mention C as the primary programming language used to develop its core-level software.

What language is CUDA written in?

CUDA stands for Compute Unified Device Architecture. It is an extension of C/C++ that lets programs run code on the graphics processing unit (GPU).


There is a nice, fairly complete CUDA 4.2 wrapper called ManagedCuda. You simply add a C++ CUDA project to the solution that contains your C# project, then add the following

call "%VS100COMNTOOLS%vsvars32.bat"
for /f %%a IN ('dir /b "$(ProjectDir)Kernels\*.cu"') do nvcc -ptx -arch sm_21 -m 64 -o "$(ProjectDir)bin\Debug\%%~na_64.ptx" "$(ProjectDir)Kernels\%%~na.cu"
for /f %%a IN ('dir /b "$(ProjectDir)Kernels\*.cu"') do nvcc -ptx -arch sm_21 -m 32 -o "$(ProjectDir)bin\Debug\%%~na.ptx" "$(ProjectDir)Kernels\%%~na.cu"

to the post-build events in your C# project properties. This compiles each *.cu kernel to a *.ptx file and copies it into your C# project's output directory.

Then you simply need to create a new context, load the module from the file, load the function, and work with the device.

//Requires the ManagedCuda, ManagedCuda.BasicTypes and ManagedCuda.VectorTypes namespaces

//Create a new context on the default CUDA device
CudaContext cntxt = new CudaContext();

//Load the module from the precompiled .ptx in the project output folder
CUmodule cumodule = cntxt.LoadModule("kernel.ptx");

//_Z9addKernelPf is the mangled function name; it can be found in the *.ptx file
//(declaring the kernel extern "C" keeps the name unmangled)
CudaKernel addWithCuda = new CudaKernel("_Z9addKernelPf", cumodule, cntxt);

//Create device arrays for the data (cData2 is a user-defined struct, num the element count)
CudaDeviceVariable<cData2> vec1_device = new CudaDeviceVariable<cData2>(num);
CudaDeviceVariable<cData2> vec2_device = new CudaDeviceVariable<cData2>(num);
CudaDeviceVariable<cData2> vec3_device = new CudaDeviceVariable<cData2>(num);

//Create host arrays with the data
cData2[] vec1 = new cData2[num];
cData2[] vec2 = new cData2[num];

//Copy the data to the device
vec1_device.CopyToDevice(vec1);
vec2_device.CopyToDevice(vec2);

//Set grid and block dimensions
addWithCuda.GridDimensions = new dim3(8, 1, 1);
addWithCuda.BlockDimensions = new dim3(512, 1, 1);

//Run the kernel
addWithCuda.Run(
    vec1_device.DevicePointer,
    vec2_device.DevicePointer,
    vec3_device.DevicePointer);

//Copy the data back from the device
vec1_device.CopyToHost(vec1);
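
When you are done, free the device buffers and the context. As far as I know, ManagedCuda's CudaDeviceVariable<T> and CudaContext implement IDisposable, so a cleanup along these lines (or wrapping them in using blocks) should do:

//Release the device memory and destroy the context once the results are back on the host
vec1_device.Dispose();
vec2_device.Dispose();
vec3_device.Dispose();
cntxt.Dispose();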

This has been discussed on the NVIDIA forums in the past:

http://forums.nvidia.com/index.php?showtopic=97729

and it would be easy to use P/Invoke to call the CUDA driver API from managed assemblies, like so:

  [DllImport("nvcuda")]
  public static extern CUResult cuMemAlloc(ref CUdeviceptr dptr, uint bytesize);
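
nvcuda here is the CUDA driver API library, so a hand-rolled binding could grow along the lines of the sketch below. This is only a sketch: the CUResult enum and CUdeviceptr struct are minimal stand-ins you would have to define yourself, and recent toolkits actually map cuMemAlloc to the cuMemAlloc_v2 entry point with a 64-bit size, so check cuda.h before relying on the exact signatures.

using System;
using System.Runtime.InteropServices;

//Minimal stand-ins for the driver API types (cuda.h defines many more error codes)
public enum CUResult { Success = 0 }
public struct CUdeviceptr { public IntPtr Pointer; } //size depends on the entry point you bind

public static class CudaDriver
{
    //Initialize the driver API; must be called before any other driver call
    [DllImport("nvcuda")]
    public static extern CUResult cuInit(uint flags);

    //Query the number of CUDA-capable devices
    [DllImport("nvcuda")]
    public static extern CUResult cuDeviceGetCount(ref int count);

    //Allocate device memory (legacy entry point, as in the declaration above)
    [DllImport("nvcuda")]
    public static extern CUResult cuMemAlloc(ref CUdeviceptr dptr, uint bytesize);
}

//Usage:
//CudaDriver.cuInit(0);
//int deviceCount = 0;
//CudaDriver.cuDeviceGetCount(ref deviceCount);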

I guess Hybridizer, which is explained in a blog post on the NVIDIA developer site, is also worth mentioning. It seems to have a related GitHub repo as well.


There are several alternatives for using CUDA in your C# applications:

  • Write a C++/CUDA library in a separate project and use P/Invoke (see the sketch after this list). The overhead of the P/Invoke calls over native calls will likely be negligible.
  • Use a CUDA wrapper such as ManagedCuda (which exposes the entire CUDA API). You won't have to write the DllImports by hand for the whole CUDA runtime API (which is convenient), but unfortunately you will still have to write your own CUDA code in a separate project.
  • (recommended) Use one of the free/open-source/proprietary compilers that generate CUDA (either source or binary) from your C# code.

You can find several of them online: have a look at this answer, for example.
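
To illustrate the first option, here is a rough sketch of the P/Invoke side. The DLL name (MyCudaLib.dll) and the AddVectors export are hypothetical: the assumption is that your C++/CUDA project exports an extern "C" function that allocates device memory, launches the kernel and copies the result back, so the managed side only ever deals with plain arrays.

using System.Runtime.InteropServices;

public static class MyCudaLib
{
    //Hypothetical native export:
    //extern "C" __declspec(dllexport) void AddVectors(const float* a, const float* b, float* c, int n);
    [DllImport("MyCudaLib.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern void AddVectors(float[] a, float[] b, float[] c, int n);
}

//Usage:
//float[] a = ..., b = ..., c = new float[n];
//MyCudaLib.AddVectors(a, b, c, n);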