Creating a CUDA stream on each host thread (multi-threaded CPU)

I have a multi-threaded CPU application, and I would like each CPU thread to be able to launch a separate CUDA stream. The separate CPU threads will be doing different things at different times, so there is a chance their work won't overlap; but if they do launch CUDA kernels at the same time, I would like those kernels to run concurrently.

I'm pretty sure this is possible because the CUDA Toolkit documentation, section 3.2.5.5, says: "A stream is a sequence of commands (possibly issued by different host threads)..."

So if I want to implement this I would do something like

void main(int CPU_ThreadID) {
    cudaStream_t stream;                 // one stream per host thread
    cudaStreamCreate(&stream);

    int *d_a;
    int *a;
    cudaMalloc((void**)&d_a, 100*sizeof(int));
    cudaMallocHost((void**)&a, 100*8*sizeof(int));   // pinned memory, needed for async copies
    cudaMemcpyAsync(d_a, &a[100*CPU_ThreadID], 100*sizeof(int),
                    cudaMemcpyHostToDevice, stream);
    sum<<<100,32,0,stream>>>(d_a);       // launch into this thread's stream

    cudaStreamDestroy(stream);
}

That is just a simple example. If I know there are only 8 CPU threads, then I know at most 8 streams will be created. Is this the proper way to do this? Will the kernels run concurrently if two or more host threads reach this code at around the same time? Thanks for any help!

Edit:

I corrected some of the syntax issues in the code block and switched to cudaMemcpyAsync, as sgar91 suggested.

asked Oct 21 '22 by Miggy

1 Answer

It really looks to me like you are proposing a multi-process application, not a multithreaded one. You don't mention which threading architecture you have in mind, nor even an OS, but the threading architectures I know of don't posit a thread routine called "main", and you haven't shown any preamble to the thread code.

A multi-process environment will generally create one device context per process, which will inhibit fine-grained concurrency.

Even if that's just an oversight, I would point out that a multi-threaded application should establish a GPU context on the desired device before threads are spawned.

Each thread can then issue a cudaSetDevice(0); or similar call, which should cause each thread to pick up the established context on the indicated device.

Once that is in place, you should be able to issue commands to the desired streams from whichever threads you like.
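For illustration, here is a minimal sketch of that pattern. The use of C++11 std::thread, the trivial body of the sum kernel, and the array sizes are all assumptions made for this example (the sizes are simply reused from the question); any threading framework would do:

#include <cuda_runtime.h>
#include <thread>
#include <vector>

__global__ void sum(int *d_a)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < 100) d_a[idx] += 1;        // placeholder work for the example
}

void worker(int tid, int *h_a)
{
    cudaSetDevice(0);                    // pick up the context established in main
    cudaStream_t stream;
    cudaStreamCreate(&stream);           // one stream per host thread

    int *d_a;
    cudaMalloc((void**)&d_a, 100*sizeof(int));
    cudaMemcpyAsync(d_a, h_a + 100*tid, 100*sizeof(int),
                    cudaMemcpyHostToDevice, stream);
    sum<<<100,32,0,stream>>>(d_a);       // launch into this thread's stream
    cudaStreamSynchronize(stream);       // wait only for this thread's work

    cudaFree(d_a);
    cudaStreamDestroy(stream);
}

int main()
{
    cudaSetDevice(0);                    // establish the device context...
    cudaFree(0);                         // ...before any threads are spawned

    int *h_a;
    cudaMallocHost((void**)&h_a, 100*8*sizeof(int)); // pinned host memory

    std::vector<std::thread> threads;
    for (int t = 0; t < 8; ++t)
        threads.emplace_back(worker, t, h_a);
    for (auto &th : threads)
        th.join();

    cudaFreeHost(h_a);
    return 0;
}

Compiled with nvcc -std=c++11, each of the eight host threads ends up with its own stream inside the single shared device context.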

You may wish to refer to the cudaOpenMP sample code. Although it omits streams, it demonstrates a multi-threaded app in which multiple threads can issue commands to the same device (and it could be extended to have them issue commands to the same stream).

Whether kernels actually run concurrently once the above issues have been addressed is a separate question. Concurrent kernel execution has a number of requirements, and the kernels themselves must have compatible resource requirements (blocks, shared memory, registers, etc.), which generally implies "small" kernels.
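As a rough, hypothetical illustration of the resource point: kernels like the two below (tiny grids, no shared memory) launched into different streams are the kind of workload that stands a chance of overlapping, which a profiler such as Nsight Systems can confirm or refute:

// Hypothetical lightweight kernels: few blocks, no shared memory, few registers,
// so the device has spare resources to schedule both grids at once.
// Assumes d_x and d_y each hold at least 128 floats.
__global__ void kA(float *x) { int i = blockIdx.x*blockDim.x + threadIdx.x; x[i] += 1.0f; }
__global__ void kB(float *y) { int i = blockIdx.x*blockDim.x + threadIdx.x; y[i] *= 2.0f; }

void launch_pair(float *d_x, float *d_y, cudaStream_t s1, cudaStream_t s2)
{
    kA<<<2, 64, 0, s1>>>(d_x);   // 2 blocks of 64 threads in stream s1
    kB<<<2, 64, 0, s2>>>(d_y);   // same small footprint in s2; may overlap with kA
}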

answered Oct 30 '22 by Robert Crovella