
Where does CUDA allocate the stack frame for kernels?

Tags: stack, cuda

My kernel call fails with "out of memory". It makes significant use of the stack frame, and I was wondering if this is the reason for the failure.

When invoking nvcc with --ptxas-options=-v it prints the following profile information:

    150352 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info    : Used 59 registers, 40 bytes cmem[0]
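(For context, a stack frame of this size usually comes from large automatic arrays or deep recursion inside the kernel. The sketch below is purely illustrative and is not my actual kernel, but it would produce a comparable report:)

    __global__ void bigFrameKernel(float *out)
    {
        // A large automatic array cannot be kept in registers, so ptxas
        // places it on the per-thread stack (local memory), which shows up
        // as a large "bytes stack frame" figure in the -v report.
        float scratch[37000];              // roughly 148 KB per thread
        for (int i = 0; i < 37000; ++i)
            scratch[i] = i * 0.5f;
        out[threadIdx.x] = scratch[threadIdx.x];
    }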

Hardware: GTX480, sm_20, 1.5 GB device memory, 48 KB shared memory per multiprocessor.

My question is where the stack frame is allocated: in shared memory, global memory, constant memory, ...?

I tried with 1 thread per block as well as with 32 threads per block; the result is the same "out of memory" error.

Another issue: one can only increase the number of threads resident on a multiprocessor if the total number of registers does not exceed the number of registers available on that multiprocessor (32K for my card). Does a similar limit apply to the stack frame size?

asked Oct 18 '11 by ritter

1 Answer

The stack is allocated in local memory. Allocation is per physical thread (GTX480: 15 SMs * 1536 threads/SM = 23040 threads). You are requesting 150,352 bytes/thread, which comes to roughly 3.4 GB of stack space. CUDA may reduce the maximum number of resident threads per launch when the stack size is that high. The CUDA language is not designed to have a large per-thread stack.
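The arithmetic, and the stack limit the runtime actually reserves, can be checked from the host. A minimal sketch, assuming the CUDA runtime API (cudaDeviceGetLimit/cudaDeviceSetLimit with cudaLimitStackSize):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        // Query the current per-thread stack size limit (in bytes).
        size_t stackSize = 0;
        cudaDeviceGetLimit(&stackSize, cudaLimitStackSize);
        printf("per-thread stack limit: %zu bytes\n", stackSize);

        // Back-of-the-envelope stack footprint on a GTX480:
        // 15 SMs * 1536 resident threads/SM * 150352 bytes/thread.
        const double bytes = 15.0 * 1536.0 * 150352.0;
        printf("worst-case stack footprint: ~%.2f GB\n", bytes / 1e9);  // ~3.46 GB

        // The limit can be raised with
        // cudaDeviceSetLimit(cudaLimitStackSize, newSizeInBytes),
        // but the total must still fit in device memory, so a ~150 KB
        // frame per thread cannot work on a 1.5 GB card.
        return 0;
    }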

In terms of registers, the GTX480 is limited to 63 registers per thread and 32K registers per SM.

answered Oct 12 '22 by Greg Smith