Why is there a huge performance hit in 2048x2048 versus 2047x2047 array multiplication?


This probably has to do with conflicts in your L2 cache.

Cache misses on matice1 are not the problem, because it is accessed sequentially. As for matice2, if a full column fits in L2 (i.e. when you access matice2[0, 0], matice2[1, 0], matice2[2, 0], etc., nothing gets evicted), then there is no problem with cache misses on matice2 either.

Now, to go deeper into how caches work: if the byte address of your variable is X, then the cache line for it is (X >> 6) & (L - 1), where L is the total number of cache lines in your cache. L is always a power of 2. The 6 comes from the fact that 2^6 == 64 bytes is the standard size of a cache line.

What does this mean? It means that if I have address X and address Y, and (X >> 6) - (Y >> 6) is divisible by L (i.e. by some large power of 2), they will be stored in the same cache line.
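
For example, here is a quick C# check of that mapping (the 512-line, 64-byte geometry is just an assumed example, not any particular CPU):

    using System;

    class SameLineDemo
    {
        const int L = 512;  // assumed number of cache lines (a power of 2)

        // The mapping described above: which cache line an address falls into.
        static long CacheLine(long x) => (x >> 6) & (L - 1);

        static void Main()
        {
            long x = 0x1234;          // an arbitrary byte address
            long y = x + 3 * L * 64;  // differs from x by a multiple of L * 64
            // (x >> 6) - (y >> 6) is divisible by L, so both land in line 72.
            Console.WriteLine($"{CacheLine(x)} == {CacheLine(y)}");
        }
    }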

Now, to go back to your problem: what is the difference between 2048 and 2049?

When 2048 is your size:

If you take &matice2[x, k] and &matice2[y, k], their byte-address difference is a multiple of 2048 * 4 (the row length times the size of a float), so (&matice2[x, k] >> 6) - (&matice2[y, k] >> 6) is a multiple of 2048 * 4 / 64 = 128. Still a large power of 2.

Thus, depending on the size of your L2, you will have a lot of cache-line conflicts and only utilize a small portion of your L2 to store a column. You won't actually be able to store a full column in your cache, so you will get bad performance.

When the size is 2049, the corresponding difference is a multiple of 2049 * 4, which is not a power of 2, so you will have far fewer conflicts and your column will safely fit into your cache.
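
To see the difference concretely, this little C# sketch counts how many distinct cache lines a full column walk touches under the formula above (again assuming 512 lines of 64 bytes, purely for illustration):

    using System;
    using System.Collections.Generic;

    class ColumnAliasing
    {
        const int L = 512;  // assumed number of cache lines

        static long CacheLine(long x) => (x >> 6) & (L - 1);

        // Walk one column of a matrix whose rows are rowLength floats long
        // and count the distinct cache lines its elements map to.
        static int DistinctLines(int rowLength)
        {
            var seen = new HashSet<long>();
            for (int row = 0; row < 2048; row++)
                seen.Add(CacheLine((long)row * rowLength * sizeof(float)));
            return seen.Count;
        }

        static void Main()
        {
            Console.WriteLine($"row length 2048: {DistinctLines(2048)} lines");  // 4
            Console.WriteLine($"row length 2049: {DistinctLines(2049)} lines");  // 512
        }
    }

With rows of 2048 floats the whole column crowds into 4 of the 512 lines; with rows of 2049 floats it spreads across all of them.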

Now, to test this theory, there are a couple of things you can do:

Allocate your matice2 array as matice2[razmor, 4096], and run with razmor = 1024, 1025, or any size; you should see very bad performance compared to what you had before. This is because you forcefully align all columns to conflict with each other.

Then try matice2[razmor, 4097] and run it with any size; you should see much better performance.
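
A sketch of that experiment (the triple loop is an assumed stand-in for your multiplication code; the names follow yours):

    using System;
    using System.Diagnostics;

    class PaddingTest
    {
        // Multiply using only the first razmor columns of matice2,
        // which is allocated with a padded row length.
        static double TimeMultiply(int razmor, int pad)
        {
            var matice1 = new float[razmor, razmor];
            var matice2 = new float[razmor, pad];
            var result  = new float[razmor, razmor];

            var sw = Stopwatch.StartNew();
            for (int m = 0; m < razmor; m++)
                for (int n = 0; n < razmor; n++)
                {
                    float sum = 0;
                    for (int k = 0; k < razmor; k++)
                        sum += matice1[m, k] * matice2[k, n];  // column walk, stride = pad floats
                    result[m, n] = sum;
                }
            return sw.Elapsed.TotalSeconds;
        }

        static void Main()
        {
            Console.WriteLine($"padded to 4096: {TimeMultiply(1024, 4096):F2} s");  // forced conflicts
            Console.WriteLine($"padded to 4097: {TimeMultiply(1024, 4097):F2} s");  // conflicts broken
        }
    }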


Probably a caching effect. With matrix dimensions that are large powers of two, and a cache size that is also a power of two, you can end up using only a small fraction of your L1 cache, slowing things down a lot. Naive matrix multiplication is usually constrained by the need to fetch data into the cache. Optimized algorithms using tiling (or cache-oblivious algorithms) focus on making better use of L1 cache; a tiled version is sketched at the end of this answer.

If you time other pairs (2^n-1,2^n) I expect you'll see similar effects.

To explain more fully: in the inner loop, where you access matice2[m, k], it's likely that matice2[m, k] and matice2[m+1, k] are offset from each other by 2048 * sizeof(float) and thus map to the same index in the L1 cache. With an N-way associative cache, you will typically have 1-8 cache locations for all of these, so almost all of those accesses will trigger an L1 cache eviction and a fetch from a slower cache or main memory.
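
For reference, here is a minimal sketch of the tiled approach mentioned above (the 32-element tile size is an assumption; tune it to the L1 of the machine you run on):

    using System;

    class TiledMatMul
    {
        // Multiply two n x n matrices one 32 x 32 tile at a time, so the
        // three active tiles (~12 KB of floats) stay resident in L1.
        static float[,] Multiply(float[,] a, float[,] b, int n)
        {
            const int B = 32;  // assumed tile size
            var c = new float[n, n];
            for (int i0 = 0; i0 < n; i0 += B)
                for (int k0 = 0; k0 < n; k0 += B)
                    for (int j0 = 0; j0 < n; j0 += B)
                        for (int i = i0; i < Math.Min(i0 + B, n); i++)
                            for (int k = k0; k < Math.Min(k0 + B, n); k++)
                            {
                                float aik = a[i, k];
                                for (int j = j0; j < Math.Min(j0 + B, n); j++)
                                    c[i, j] += aik * b[k, j];
                            }
            return c;
        }

        static void Main()
        {
            int n = 2048;
            var c = Multiply(new float[n, n], new float[n, n], n);
            Console.WriteLine(c[0, 0]);
        }
    }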


This may have to do with the size of your CPU cache. If two rows of the matrix do not fit, then you will lose time swapping elements in from RAM. The extra 4095 elements may be just enough to prevent rows from fitting.

In your case, two rows of a 2047x2047 matrix fall within 16 KB of memory (assuming 32-bit types). For example, if you have a 64 KB L1 cache (the cache closest to the CPU on the bus), then you can fit at least four rows (of 2047 32-bit values each) into the cache at once. With the longer rows, if there is any padding required that pushes a pair of rows beyond 16 KB, then things start to get messy. Also, each time you miss the cache, swapping in data from another cache or main memory delays things.
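
A quick back-of-the-envelope check of those numbers (the 64 KB L1 is this answer's assumed figure, not a measurement):

    using System;

    class RowFit
    {
        static void Main()
        {
            const int l1Bytes = 64 * 1024;         // assumed L1 size
            foreach (int n in new[] { 2047, 2048 })
            {
                int rowBytes = n * sizeof(float);  // one row of 32-bit values
                Console.WriteLine($"n = {n}: 2 rows = {2 * rowBytes} bytes, " +
                                  $"{l1Bytes / rowBytes} rows fit in L1");
            }
        }
    }

For n = 2047, two rows come to 16,376 bytes, matching the 16 KB figure above.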

My guess is that the variance in run times you're seeing with the differently sized matrices is affected by how effectively the operating system can make use of the available cache (and some combinations are just problematic). Of course, this is all a gross simplification on my part.


Louis Brandy wrote two blog posts analyzing exactly this issue:

More Cache Craziness and Computational Performance - A Beginner's Case Study, with some interesting statistics and attempts to explain the behavior in more detail; it does indeed come down to cache-size limitations.


Given that the time is dropping at larger sizes, wouldn't cache conflicts be the more likely explanation, especially with powers of 2 for the problematic matrix sizes? I am no expert on caching issues, but there is excellent info on cache-related performance issues here.


As you are accessing the matice2 array vertically, it will be swapped in and out of the cache a lot more. If you mirror the array diagonally, so that you can access it using [k,m] instead of [m,k], the code will run a lot faster.

I tested this for 1024x1024 matrices, and it is about twice as fast. For 2048x2048 matrices it's about ten times faster.
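
A minimal sketch of that transformation (matrix names follow the question's; the extra transpose is a single O(n^2) pass, cheap next to the O(n^3) multiply):

    using System;

    class TransposedMatMul
    {
        static float[,] Multiply(float[,] matice1, float[,] matice2, int n)
        {
            // Mirror matice2 across the diagonal once.
            var t = new float[n, n];
            for (int m = 0; m < n; m++)
                for (int k = 0; k < n; k++)
                    t[k, m] = matice2[m, k];

            var result = new float[n, n];
            for (int m = 0; m < n; m++)
                for (int p = 0; p < n; p++)
                {
                    float sum = 0;
                    for (int k = 0; k < n; k++)
                        sum += matice1[m, k] * t[p, k];  // both accesses are row-wise now
                    result[m, p] = sum;
                }
            return result;
        }

        static void Main()
        {
            int n = 1024;
            Console.WriteLine(Multiply(new float[n, n], new float[n, n], n)[0, 0]);
        }
    }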


Cache Aliasing

Or cache thrashing, if I can coin a term.

Caches work by indexing with low-order bits and tagging with high-order bits.

Imagine that your cache has 4 words and your matrix is 4 x 4. When a column is accessed and the rows are any power of two in length, then each column element in memory will map to the same cache element.

A power of two plus one is actually about optimal for this problem: each new column element maps to the next cache slot exactly as if accessing by row.
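
The toy case can be simulated in a few lines (purely an illustration of the 4-word direct-mapped cache above, not real hardware):

    using System;

    class ToyCache
    {
        // With a 4-word direct-mapped cache, an element's slot is just
        // its word address mod 4.
        static void WalkColumn(int rowLength)
        {
            Console.Write($"row length {rowLength}: slots");
            for (int row = 0; row < 4; row++)
                Console.Write($" {(row * rowLength) % 4}");
            Console.WriteLine();
        }

        static void Main()
        {
            WalkColumn(4);  // slots 0 0 0 0: every access evicts the previous one
            WalkColumn(5);  // slots 0 1 2 3: the column fills the cache like a row
        }
    }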

In real life, a tag covers multiple sequentially increasing addresses, which caches several adjacent elements in a row. By offsetting the bucket that each new row maps to, traversing a column doesn't replace the previous entry, and when the next column is traversed, the entire cache is filled with different rows; each row section that fits into the cache then hits for several columns.

Since the cache is vastly faster than DRAM (mostly by virtue of being on-chip), hit rate is everything.