Is there code that results in a 50% branch prediction miss rate?


The problem:

I'm trying to figure out how to write code (C preferred, ASM only if there is no other solution) that makes branch prediction miss in 50% of the cases.

So it has to be a piece of code that is "immune" to compiler optimizations related to branching, and on which no HW branch predictor can do better than 50% (a coin toss). An even greater challenge is being able to run the code on multiple CPU architectures and get the same 50% miss ratio.

I managed to write code that reaches a 47% branch miss ratio on an x86 platform. I suspect the missing 3% could come from:

  • Program launch overhead that has branching in it (very small though)
  • Profiler overhead - Basically for each counter read an interrupt is raised so this might add additional predictable branches.
  • System calls running in the background that contain loops and predictable branching

I wrote my own random number generator to avoid calls to rand(), whose implementation might have hidden predictable branches. It can also use RDRAND when available. Latency does not matter to me.

The questions:

  1. Can I do better than my version of the code? Better means a higher branch misprediction rate and the same results on all CPU architectures.
  2. Can this code be predicated? What would that mean?

The code:

#include <stdio.h>
#include <time.h>

#define RDRAND
#define LCG_A   1103515245
#define LCG_C   22345
#define LCG_M   2147483648
#define ULL64   unsigned long long

ULL64 generated;

ULL64 rand_lcg(ULL64 seed)
{
#ifdef RDRAND
    ULL64 result = 0;
    asm volatile ("rdrand %0;" : "=r" (result));
    return result;
#else
    return (LCG_A * seed + LCG_C) % LCG_M;
#endif
}

ULL64 rand_rec1()
{
    generated = rand_lcg(generated) % 1024;

    if (generated < 512)
        return generated;
    else return rand_rec1();
}

ULL64 rand_rec2()
{
    generated = rand_lcg(generated) % 1024;

    if (!(generated >= 512))
        return generated;
    else return rand_rec2();
}

#define BROP(num, sum)                  \
    num = rand_lcg(generated);          \
    asm volatile("": : :"memory");      \
    if (num % 2)                        \
        sum += rand_rec1();             \
    else                                \
        sum -= rand_rec2();

#define BROP5(num, sum)     BROP(num, sum) BROP(num, sum) BROP(num, sum) BROP(num, sum) BROP(num, sum)
#define BROP25(num, sum)    BROP5(num, sum) BROP5(num, sum) BROP5(num, sum) BROP5(num, sum) BROP5(num, sum)
#define BROP100(num, sum)   BROP25(num, sum) BROP25(num, sum) BROP25(num, sum) BROP25(num, sum)

int main()
{
    int i = 0;
    int iterations = 500000;
    ULL64 num = 0;
    ULL64 sum = 0;

    generated = rand_lcg(0) % 54321;

    for (i = 0; i < iterations; i++)
    {
        BROP100(num, sum);
        // ... repeat the line above 10 times
    }

    printf("Sum = %llu\n", sum);
}

Update v1:

Following the suggestion of usr, I generated various patterns by varying the LCG_C parameter from the command line in a script. I was able to reach a 49.67% BP miss rate. That is enough for my purpose, and I now have a methodology to reproduce this on various architectures. A sketch of what such a parameter sweep might look like follows.
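The script itself is not shown; below is a minimal sketch of how such a sweep might look, assuming the LCG increment is passed via argv instead of the compile-time LCG_C macro (the variable names and argument handling are my assumptions, and the RDRAND path must be disabled for the parameter to matter):

/* Hypothetical sweep variant: take the LCG increment from the command
   line so a shell script can vary it per run. */
#include <stdio.h>
#include <stdlib.h>

#define LCG_A 1103515245ULL
#define LCG_M 2147483648ULL
#define ULL64 unsigned long long

static ULL64 lcg_c;    /* set from argv[1] instead of #define LCG_C */

ULL64 rand_lcg(ULL64 seed)
{
    return (LCG_A * seed + lcg_c) % LCG_M;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s LCG_C\n", argv[0]);
        return 1;
    }
    lcg_c = strtoull(argv[1], NULL, 10);

    /* ... run the BROP benchmark from the original code here,
       measuring branch misses externally once per run ... */
    printf("LCG_C = %llu\n", lcg_c);
    return 0;
}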

VAndrei, asked Mar 10 '15



2 Answers

If you know how the branch predictor works you can get to 100% misprediction. Just take the expected prediction of the predictor each time and do the opposite. The problem is that we don't know how it is implemented.

I have read that typical predictors are able to predict patterns such as 0,1,0,1 and so on. But I'm sure there is a limit to how long the pattern can be. My suggestion would be to try every pattern of a given length (such as 4) and see which one comes closest to your target percentage. You should be able to target both 50% and 100% and come very close. This profiling needs to be done once per platform, or at runtime.
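A minimal sketch of that profiling idea, assuming one process run per pattern with the misprediction rate read externally (e.g. under perf stat -e branch-misses); the pattern length of 4, the buffer size, and the repeat count are my choices:

#include <stdio.h>
#include <stdlib.h>

#define PATLEN 4
#define N (1 << 20)

static unsigned char buf[N];

int main(int argc, char **argv)
{
    /* One pattern per run so the external profiler attributes the
       misprediction count to exactly one pattern. */
    unsigned pat = (argc > 1) ? (unsigned)strtoul(argv[1], NULL, 10) : 5; /* 0101 */

    for (int i = 0; i < N; i++)
        buf[i] = (pat >> (i % PATLEN)) & 1;    /* repeat the PATLEN-bit pattern */

    long sum0 = 0, sum1 = 0;
    for (int rep = 0; rep < 100; rep++)
        for (int i = 0; i < N; i++) {
            if (buf[i]) sum0++;                /* the branch under test */
            else        sum1++;
        }

    printf("pattern %u: sum0=%ld sum1=%ld\n", pat, sum0, sum1); /* defeat DCE */
    return 0;
}

A wrapper script would then run this once for each pattern in 0..15 and record the branch-misses counter per run.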

I doubt that 3% of the total number of branches come from system code as you suggest. The kernel does not add 3% overhead to purely CPU-bound user code. Increase the scheduling priority to the maximum.

You can take the RNG out of the game by generating random data once and iterating over the same data many times. The branch predictor is unlikely to detect this (although it clearly could).

I would implement this by filling a bool[1 << 20] with a zero-one pattern like I described. Then, you can run the following loop over it many times:

int sum0 = 0, sum1 = 0;
for (int i = 0; i < (1 << 20); i++) { // unroll this a lot
    if (array[i]) sum0++;
    else sum1++;
}
// print both sums here to make sure the computation is not being optimized out

You'll need to examine the disassembly to make sure that the compiler did not do anything clever.

I don't see why the complicated setup you have right now is necessary. The RNG can be taken out of the question, and I don't see why more than this simple loop is needed. If the compiler is playing tricks, you might need to mark the variables volatile, which makes the compiler (most compilers, at least) treat them as if they were external function calls.

Since the RNG is now almost never called, it no longer matters; you can even invoke the cryptographic RNG of your OS to get numbers that are indistinguishable (to any human) from true random numbers.
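For instance, on Linux the array could be filled once with getrandom() (a sketch, assuming glibc 2.25 or later; other systems could read /dev/urandom instead):

#include <stdio.h>
#include <sys/types.h>
#include <sys/random.h>   /* getrandom(), Linux with glibc >= 2.25 */

#define N (1 << 20)
static unsigned char buf[N];

int main(void)
{
    /* Fill the buffer once from the kernel CSPRNG; getrandom() may
       return fewer bytes than requested, so loop until full. */
    size_t off = 0;
    while (off < N) {
        ssize_t r = getrandom(buf + off, N - off, 0);
        if (r < 0) { perror("getrandom"); return 1; }
        off += (size_t)r;
    }

    long sum0 = 0, sum1 = 0;
    for (int rep = 0; rep < 100; rep++)
        for (int i = 0; i < N; i++) {
            if (buf[i] & 1) sum0++;   /* ~50% taken, no repeating pattern */
            else            sum1++;
        }
    printf("sum0=%ld sum1=%ld\n", sum0, sum1);
    return 0;
}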

usr, answered Oct 12 '22


Fill an array with bytes, and write a loop that checks each byte and branches depending on the value of the byte.

Now examine the architecture of your processor and its branch prediction very carefully. Fill the initial bytes of the array so that, after examining them, the predictor is in a known state. From that known state you can determine whether the next branch will be predicted taken or not, so set the next byte to make that prediction wrong. Then determine the new prediction, set the following byte to contradict it, and so on (see the sketch below).
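A sketch of that fill step against a deliberately simple stand-in model; predict_next() and update() below implement a hypothetical 1-bit "predict the last outcome" predictor and must be replaced with the real scheme from your processor's manual:

#include <stdio.h>

#define N 4096
static unsigned char bytes[N];

/* Placeholder predictor model: 1-bit, predict the last outcome.
   Substitute the actual predictor of your target CPU here. */
static int last_taken = 1;
static int predict_next(void) { return last_taken; }
static void update(int taken)  { last_taken = taken; }

int main(void)
{
    /* Known-state prefix: a few taken branches. */
    for (int i = 0; i < 3; i++) {
        bytes[i] = 1;
        update(1);
    }

    /* From here on, make every byte contradict the prediction. */
    for (int i = 3; i < N; i++) {
        int pred = predict_next();
        bytes[i] = (unsigned char)!pred;   /* force a mispredict */
        update(!pred);
    }

    /* The measured loop then branches on bytes[i], e.g.:
       if (bytes[i]) sum0++; else sum1++; */
    printf("first adversarial bytes: %d %d %d %d\n",
           bytes[3], bytes[4], bytes[5], bytes[6]);
    return 0;
}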

If you disable interrupts as well (which could change the branch prediction), you can come close to 100% mispredicted branches.

As a simple case, on an old PowerPC processor with strong/weak prediction, after three taken branches the predictor is always in the "strong taken" state, and one not-taken branch changes it to "weak taken". If you then have a sequence of alternating not-taken/taken branches, the prediction is always wrong and the state switches between "weak not taken" and "weak taken".
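To make this concrete, here is a small simulation of such a 2-bit strong/weak counter (the state encoding and update rule are my reading of the classic scheme); fed one not-taken branch and then alternating taken/not-taken branches, it mispredicts every single time:

#include <stdio.h>

int main(void)
{
    /* 2-bit saturating counter: 0 = strong not taken, 1 = weak not
       taken, 2 = weak taken, 3 = strong taken. */
    int state = 3;                 /* "strong taken" after three taken branches */
    int miss = 0, total = 0;

    for (int i = 0; i < 1000; i++) {
        /* Sequence: not taken once, then alternating taken/not taken. */
        int taken = (i > 0) && (i % 2 == 0);
        int pred  = (state >= 2);  /* predict taken in the two "taken" states */

        if (pred != taken) miss++;
        total++;

        if (taken  && state < 3) state++;   /* strengthen toward taken     */
        if (!taken && state > 0) state--;   /* strengthen toward not taken */
    }

    printf("mispredicted %d of %d branches\n", miss, total); /* 1000 of 1000 */
    return 0;
}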

This will of course only work with that particular processor. Most modern processors would see that sequence as almost 100% predictable. For example, they might use two separate predictors: one for the case "last branch was taken" and one for the case "last branch was not taken". But for such a processor, a different sequence of bytes will give the same 100% misprediction rate.

gnasher729, answered Oct 12 '22