Scaling the rows of a matrix with CUDA

Tags: c, math, cuda, blas

In some computations on the GPU, I need to scale the rows in a matrix so that all the elements in a given row sum to 1.

| a1,1 a1,2 ... a1,N |    | alpha1*a1,1 alpha1*a1,2 ... alpha1*a1,N |
| a2,1 a2,2 ... a2,N | => | alpha2*a2,1 alpha2*a2,2 ... alpha2*a2,N |
|  .    .        .   |    |      .           .               .      |
| aN,1 aN,2 ... aN,N |    | alphaN*aN,1 alphaN*aN,2 ... alphaN*aN,N |

where

alphai = 1.0/(ai,1 + ai,2 + ... + ai,N)

I need the vector of alphas and the scaled matrix, and I would like to do this in as few BLAS calls as possible. The code will run on NVIDIA CUDA hardware. Does anyone know of a smart way to do this?
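In CPU code, the desired operation looks like the sketch below (the name `scale_rows_ref` is illustrative, and column-major storage is assumed to match BLAS conventions):

```c
#include <stddef.h>

/* Reference for the desired operation: alpha[i] = 1 / (sum of row i),
 * then every element of row i is multiplied by alpha[i], so each row of
 * the result sums to 1. A is m x n, column-major, leading dimension ld. */
static void scale_rows_ref(double *A, double *alpha, int m, int n, int ld)
{
    for (int row = 0; row < m; ++row) {
        double sum = 0.0;
        for (int col = 0; col < n; ++col)
            sum += A[row + (size_t)col * ld];
        alpha[row] = 1.0 / sum;
        for (int col = 0; col < n; ++col)
            A[row + (size_t)col * ld] *= alpha[row];
    }
}
```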

Asked Dec 27 '22 by Martin Kristiansen

2 Answers

CUBLAS 5.0 introduced a BLAS-like routine called cublas<t>dgmm, which multiplies a matrix by a diagonal matrix (represented by a vector).

There is a left option (which scales the rows) and a right option (which scales the columns).

Please refer to CUBLAS 5.0 documentation for details.

So for your problem, you need to create a vector containing all the alphas on the GPU and call cublas<t>dgmm with the left option.
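For reference, a minimal CPU sketch of what cublas<t>dgmm computes in left mode, C = diag(x) * A, where row i of A is multiplied by x[i] (the function name `dgmm_left_ref` is made up for illustration; column-major storage assumed):

```c
#include <stddef.h>

/* CPU reference for cublas<t>dgmm with the left option: C = diag(x) * A.
 * A and C are m x n, column-major with leading dimension ld.
 * Each row i of A is scaled by x[i]. */
static void dgmm_left_ref(double *C, const double *A, const double *x,
                          int m, int n, int ld)
{
    for (int col = 0; col < n; ++col)
        for (int row = 0; row < m; ++row)
            C[row + (size_t)col * ld] = x[row] * A[row + (size_t)col * ld];
}
```

With x holding the alphas, this is exactly the row scaling asked for, in one call.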

Answered Jan 05 '23 by Philippe Vandermersch

If you use BLAS gemv with a vector of ones, the result will be a vector of the reciprocals of the scaling factors (1/alpha) you need. That is the easy part.
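To see why: multiplying A by a vector of ones sums along each row, so y = A * ones gives y[i] = 1/alpha[i]. A CPU sketch (the name `rowsums_via_gemv_ref` is illustrative; column-major storage assumed):

```c
#include <stddef.h>

/* Reference for y = A * ones: gemv against a ones vector reduces each
 * row of A, so y[i] = sum over j of A(i,j) = 1/alpha_i.
 * A is m x n, column-major with leading dimension ld. */
static void rowsums_via_gemv_ref(double *y, const double *A,
                                 int m, int n, int ld)
{
    for (int row = 0; row < m; ++row)
        y[row] = 0.0;
    for (int col = 0; col < n; ++col)
        for (int row = 0; row < m; ++row)
            y[row] += A[row + (size_t)col * ld]; /* ones[col] == 1 */
}
```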

Applying the factors row-wise is a bit harder, because standard BLAS doesn't have anything like a Hadamard-product operator you could use. Also, because you mention BLAS, I presume you are using column-major storage for your matrices, which makes row-wise operations less straightforward. The really slow way would be to call BLAS scal on each row with a stride, but that requires one BLAS call per row, and the strided memory access will kill performance because of its effect on coalescing and L1 cache coherency.

My suggestion would be to use your own kernel for the second operation. It doesn't have to be all that complex, perhaps only something like this:

template<typename T>
__global__ void rowscale(T * X, const int M, const int N, const int LDA,
                         const T * ralpha)
{
    // Threads within a block cover the rows, so consecutive threads touch
    // consecutive column-major addresses and accesses coalesce; blocks
    // stride through the columns.
    for(int row = threadIdx.x; row < M; row += blockDim.x) {
        // ralpha holds the row sums (1/alpha), so invert to get alpha
        const T rscale = T(1) / ralpha[row];
        for(int col = blockIdx.x; col < N; col += gridDim.x)
            X[row + col * LDA] *= rscale;
    }
}

That just has a bunch of blocks stepping column-wise through the matrix, scaling rows as they go. It should work for any size of column-major matrix. Memory access should be coalesced, but depending on how worried you are about performance, there are a number of optimizations you could try. At least it gives a general idea of what to do.

Answered Jan 05 '23 by talonmies