
Reducing Number of Registers Used in CUDA Kernel

I have a kernel which uses 17 registers; reducing it to 16 would bring me to 100% occupancy. My question is: are there methods that can be used to reduce the number of registers used, short of completely rewriting my algorithm in a different manner? I have always kind of assumed the compiler is a lot smarter than I am, so for example I often use extra variables for clarity's sake alone. Am I wrong in this thinking?

Please note: I do know about the --maxrregcount flag, but the use of local memory would be more detrimental than a 25% lower occupancy (I should test this).

asked Feb 17 '10 by zenna


3 Answers

Occupancy can be a little misleading and 100% occupancy should not be your primary target. If you can get fully coalesced accesses to global memory then on a high end GPU 50% occupancy will be sufficient to hide the latency to global memory (for floats, even lower for doubles). Check out the Advanced CUDA C presentation from GTC last year for more information on this topic.

In your case, you should measure performance both with and without maxrregcount set to 16. The latency to local memory should be hidden as a result of having sufficient threads, assuming you don't do random accesses into local arrays (which would result in non-coalesced accesses).
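
One way to get the numbers for that comparison, sketched here under the assumption of a standard nvcc toolchain: compiling with -Xptxas -v makes ptxas print register and local-memory usage per kernel, and -maxrregcount=16 applies the cap. The same figures can also be read at runtime through cudaFuncGetAttributes; the kernel below is only a placeholder for illustration, not the asker's actual kernel.

#include <cstdio>
#include <cuda_runtime.h>

// placeholder kernel, standing in for the real 17-register kernel
__global__ void kernel(float *out) {
    int idx = threadIdx.x + blockDim.x * blockIdx.x;
    out[idx] = 2.0f * idx;
}

int main() {
    cudaFuncAttributes attr;
    cudaError_t err = cudaFuncGetAttributes(&attr, kernel);
    if (err != cudaSuccess) {
        printf("cudaFuncGetAttributes failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("registers per thread: %d\n", attr.numRegs);
    printf("local memory per thread: %zu bytes\n", attr.localSizeBytes);
    return 0;
}

Comparing the local-memory figure with and without the register cap shows how much spilling the cap introduces, which is what you then weigh against the occupancy gain.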

To answer your specific question about reducing registers: post the code for more detailed answers! Understanding how compilers work in general may help, but remember that nvcc is an optimising compiler with a large parameter space, so minimising register count has to be balanced with overall performance.

answered by Tom


It's really hard to say; the nvcc compiler is not very smart, in my opinion.
You can try the obvious things, for example using short instead of int, passing and using variables by reference (e.g. &variable), unrolling loops, and using templates (as in C++). If you have divisions or transcendental functions applied in sequence, try to turn them into a loop. Try to get rid of conditionals, possibly replacing them with redundant computations.
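
As a minimal sketch only (the kernel, array names and constants below are invented for illustration, not taken from the question), here is what two of those ideas can look like in practice: a fixed-count loop the compiler can fully unroll, and a conditional rewritten as a select. Whether either actually lowers the register count has to be verified with -Xptxas -v, since unrolling can just as easily increase it.

__global__ void example(const float *in, float *out, int n) {
    int idx = threadIdx.x + blockDim.x * blockIdx.x;
    if (idx >= n) return;

    float acc = 0.0f;

    // fixed trip count: the compiler can fully unroll this loop
    #pragma unroll
    for (int i = 0; i < 4; ++i)
        acc += in[idx] * (float)(i + 1);

    // branch replaced by a conditional select, which usually compiles
    // to a predicated instruction rather than divergent control flow
    acc = (acc > 10.0f) ? acc - 10.0f : acc;

    out[idx] = acc;
}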

If you post some code, maybe you will get specific answers.

answered by Anycorn


Utilizing shared memory as a cache may lead to lower register usage and prevent register spilling to local memory.

Suppose the kernel calculates some values, and these calculated values are used by all of the threads:

__global__ void kernel(...) {
    int idx = threadIdx.x + blockDim.x * blockIdx.x;
    int id0 = blockDim.x * blockIdx.x;

    // both values depend only on the block index, so every thread
    // in the block computes the same two numbers redundantly
    int reg = id0 * ...;
    int reg0 = reg * a / x + y;


    ...

    int val =  reg + reg0 + 2 * idx;

    output[idx] = val > 10;
}

So, instead of keeping reg and reg0 in registers and letting them possibly spill out to local memory (which resides in global memory), we may use shared memory:

__global__ void kernel(...) {
    __shared__ int cache[10];

    int idx = threadIdx.x + blockDim.x * blockIdx.x;

    if (threadIdx.x == 0) {
      int id0 = blockDim.x * blockIdx.x;

      cache[0] = id0 * ...;
      cache[1] = cache[0] * a / x + y;
    }
    __syncthreads();


    ...

    int val =  cache[0] + cache[1] + 2 * idx;

    output[idx] = val > 10;
}

Take a look at this paper for further information.

answered by phoad