 

A Cache Efficient Matrix Transpose Program?

So the obvious way to transpose a matrix is to use:

  for (int i = 0; i < n; i++)
      for (int j = 0; j < n; j++)
          destination[j+i*n] = source[i+j*n];

but I want something that will take advantage of locality and cache blocking. I was looking it up and couldn't find code that does this, but I'm told it should be a very simple modification to the original. Any ideas?

Edit: I have a 2000x2000 matrix, and I want to know how I can change the code using two for loops, basically splitting the matrix into blocks that I transpose individually (say 2x2 blocks, or 40x40 blocks), and see which block size is most efficient.

Edit2: The matrices are stored in column-major order; that is to say, the matrix

a1 a2
a3 a4

is stored as a1 a3 a2 a4.
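
Put another way, element (row, col) of an n x n matrix in this layout lives at index row + col*n. A tiny illustrative helper (the function name is made up, just for exposition):

inline double element(const double *a, int n, int row, int col) {
    return a[row + col*n];   // each column is a contiguous run of n values
}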

asked Mar 04 '11 by user635832


1 Answer

You're probably going to want four loops - two to iterate over the blocks, and then another two to perform the transpose-copy of a single block. Assuming for simplicity a block size that divides the size of the matrix, something like this I think, although I'd want to draw some pictures on the backs of envelopes to be sure:

for (int i = 0; i < n; i += blocksize) {
    for (int j = 0; j < n; j += blocksize) {
        // transpose the block beginning at [i,j]
        for (int k = i; k < i + blocksize; ++k) {
            for (int l = j; l < j + blocksize; ++l) {
                dst[k + l*n] = src[l + k*n];
            }
        }
    }
}
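
Since you want to compare block sizes on your 2000x2000 matrix, a rough timing harness along these lines would let you measure each candidate. The function and variable names here are just illustrative, and the block-size list only contains values that divide 2000 evenly:

#include <chrono>
#include <cstdio>
#include <vector>

// Blocked transpose as above; assumes blocksize divides n evenly.
static void transpose_blocked(double *dst, const double *src, int n, int blocksize) {
    for (int i = 0; i < n; i += blocksize)
        for (int j = 0; j < n; j += blocksize)
            for (int k = i; k < i + blocksize; ++k)
                for (int l = j; l < j + blocksize; ++l)
                    dst[k + l*n] = src[l + k*n];
}

int main() {
    const int n = 2000;                              // size from the question
    std::vector<double> src(n * n), dst(n * n);
    for (int i = 0; i < n * n; ++i) src[i] = i;      // arbitrary data

    for (int blocksize : {2, 8, 16, 40, 100}) {      // candidates that divide 2000
        auto t0 = std::chrono::steady_clock::now();
        transpose_blocked(dst.data(), src.data(), n, blocksize);
        auto t1 = std::chrono::steady_clock::now();
        std::printf("blocksize %3d: %.2f ms\n", blocksize,
                    std::chrono::duration<double, std::milli>(t1 - t0).count());
    }
}

In practice you'd want to run each size several times and take the best or median, since a single timing run is noisy.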

An important further insight is that there's actually a cache-oblivious algorithm for this (see http://en.wikipedia.org/wiki/Cache-oblivious_algorithm, which uses this exact problem as an example). The informal definition of "cache-oblivious" is that you don't need to experiment with tweaking any parameters (in this case the blocksize) in order to hit good/optimal cache performance. The solution in this case is to transpose by recursively dividing the matrix in half, and transposing the halves into their correct position in the destination.
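
That recursion isn't in the code above, but a sketch of it might look roughly like this (the function name, signature, and base-case cutoff are my own choices, not a standard API). It splits the longer dimension in half until the sub-block is small enough to copy directly:

// Cache-oblivious transpose sketch: recursively halve the larger dimension.
// dst/src are n x n column-major; (r, c) is the top-left corner and
// (rows, cols) the extent of the current sub-block. Call with (0, 0, n, n).
void transpose_rec(double *dst, const double *src, int n,
                   int r, int c, int rows, int cols) {
    const int cutoff = 16;  // arbitrary small base case
    if (rows <= cutoff && cols <= cutoff) {
        for (int i = r; i < r + rows; ++i)
            for (int j = c; j < c + cols; ++j)
                dst[j + i*n] = src[i + j*n];   // same indexing as the question's loop
    } else if (rows >= cols) {
        transpose_rec(dst, src, n, r,            c, rows / 2,        cols);
        transpose_rec(dst, src, n, r + rows / 2, c, rows - rows / 2, cols);
    } else {
        transpose_rec(dst, src, n, r, c,            rows, cols / 2);
        transpose_rec(dst, src, n, r, c + cols / 2, rows, cols - cols / 2);
    }
}

Because the sub-blocks eventually fit into every level of the cache hierarchy, there's no blocksize parameter to tune.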

Whatever the cache size actually is, this recursion takes advantage of it. I expect there's a bit of extra management overhead compared with your strategy, which is to use performance experiments to, in effect, jump straight to the point in the recursion at which the cache really kicks in, and go no further. On the other hand, your performance experiments might give you an answer that works on your machine but not on your customers' machines.

answered Oct 04 '22 by Steve Jessop