Sharing the GPU between OpenCL capable programs

Tags:

opencl

Is there a method to share the GPU between two separate OpenCL capable programs, or more specifically between two separate processes that simultaneously both require the GPU to execute OpenCL kernels? If so, how is this done?

Chris asked Jul 29 '10 at 12:07


2 Answers

It depends what you call sharing.

In general, you can create two processes that each create an OpenCL context on the same GPU. It's then the driver's/OS's/GPU's responsibility to make sure things just work.

That said, most implementations will time-slice the GPU execution to make that happen (just like it happens for graphics).
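As a sketch of what each process would do independently, here is the standard OpenCL 1.x host-side setup (error handling omitted for brevity; this assumes an OpenCL driver and GPU are present):

```c
#include <CL/cl.h>

/* Each process runs this same setup on its own; the driver arbitrates
 * access to the single physical GPU behind the scenes. */
int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* Both processes get their own context and command queue
     * referring to the same GPU. */
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    /* ... build programs, create kernels, enqueue work as usual ... */

    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
```

Neither process needs to know the other exists; the serialization of their kernel launches happens below the API.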

I sense this is not exactly what you're after, though. Can you expand your question with a use case?

Bahbar answered Sep 16 '22 at 22:09


Current GPUs (except NVIDIA's Fermi) do not support simultaneous execution of more than one kernel. Moreover, to date GPUs do not support preemptive multitasking; scheduling is completely cooperative. A kernel's execution cannot be suspended and resumed later, so the granularity of any time-based GPU sharing is bounded by the kernels' execution times.

If you have multiple programs running that require GPU access, you should therefore make sure that your kernels have short runtimes (< 100 ms is a rule of thumb), so that GPU time can be time-sliced among the kernels that want GPU cycles. This is also important because the host system's graphics need GPU access too and will otherwise become very unresponsive. It can go so far that a kernel stuck in an endless or very long loop will apparently freeze the system.

dietr answered Sep 17 '22 at 22:09