I was writing code that looked like the following...
    if (denominator == 0) { return false; }
    int result = value / denominator;
... when I thought about branching behavior in the CPU.
This answer (https://stackoverflow.com/a/11227902/620863) says that the CPU will try to guess which way a branch will go, head down that branch, and stop only if it discovers it guessed the branch incorrectly.
But if the CPU predicts the branch above incorrectly, it would divide by zero in the following instructions. This doesn't happen, though, and I was wondering why. Does the CPU actually execute a division by zero and wait to see if the branch is correct before doing anything, or can it tell that it shouldn't continue in these situations? What's going on?
Branch prediction attempts to guess whether a conditional jump will be taken or not; it is not the same as branch target prediction. The branch predictor may, for example, recognize that the conditional jump is taken more often than not, or that it is taken every second time.
If it guessed wrong, it has to undo anything the speculatively executed instructions might have done, clear the pipeline, and begin sending down the instructions it was supposed to be executing. This results in the same delay as if it hadn't made a prediction at all.
Branch prediction buffers contain a prediction about whether the next branch will be taken (T) or not (NT), but they do not supply the target PC value; that is the job of the Branch Target Buffer (BTB). The control unit looks up the branch target buffer during the fetch ("F") phase.
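As a rough illustration of why predictability matters (this harness, including the sum_large function and the array size, is my own sketch, not part of the answer above, and whether you see a difference depends on the compiler emitting an actual conditional branch rather than a conditional move or vectorized code), the very same branch can be cheap or expensive depending on how regular its outcome is:

    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <random>
    #include <vector>

    // Sums all elements >= 128. The branch inside the loop is the interesting
    // part: with sorted input its outcome follows a simple pattern (a long run
    // of "not taken", then a long run of "taken"), so the predictor guesses it
    // almost perfectly; with random input the outcome is essentially a coin
    // flip and mispredictions pile up.
    static std::int64_t sum_large(const std::vector<int>& data) {
        std::int64_t sum = 0;
        for (int value : data) {
            if (value >= 128) {   // conditional branch the predictor has to guess
                sum += value;
            }
        }
        return sum;
    }

    int main() {
        std::vector<int> data(1 << 22);
        std::mt19937 gen(42);
        std::uniform_int_distribution<int> dist(0, 255);
        for (int& v : data) v = dist(gen);

        auto time_it = [&](const char* label) {
            auto start = std::chrono::steady_clock::now();
            std::int64_t sum = sum_large(data);
            auto stop = std::chrono::steady_clock::now();
            std::cout << label << ": sum=" << sum << " in "
                      << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
                      << " ms\n";
        };

        time_it("random (hard to predict)");
        std::sort(data.begin(), data.end());
        time_it("sorted (easy to predict)");
        return 0;
    }

The work done per element is identical in both runs; only the predictability of the branch changes.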
The CPU is free to do whatever it wants when speculatively executing a branch based on a prediction, but it needs to do so in a way that's transparent to the user. So it may stage a "division by zero" fault, but this must remain invisible if the branch prediction turns out to be wrong. By the same logic, it may stage writes to memory, but it may not actually commit them.
As a CPU designer, I wouldn't bother predicting past such a fault. That's probably not worth it. The fault probably means a bad prediction, and that will resolve itself soon enough.
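To connect this back to the code in the question, here is a minimal, self-contained sketch of that pattern (safe_divide and its parameters are names added purely for illustration); the comments mark which part the CPU may execute speculatively and why a staged fault is never observed:

    #include <iostream>

    // Returns false instead of dividing when the denominator is zero.
    // A CPU that predicts the branch as "denominator != 0" may start the
    // division speculatively; if the prediction was wrong, any divide fault
    // is only staged internally and is discarded together with the rest of
    // the mispredicted work, so the program never observes it.
    bool safe_divide(int value, int denominator, int& result) {
        if (denominator == 0) {
            return false;               // architecturally, this path is taken
        }
        result = value / denominator;   // may run speculatively before the branch resolves
        return true;
    }

    int main() {
        int result = 0;
        if (safe_divide(10, 0, result)) {
            std::cout << "10/0 = " << result << '\n';
        } else {
            std::cout << "division by zero avoided\n";
        }
        return 0;
    }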
This freedom is a good thing. Consider a simple std::accumulate loop. The branch predictor will correctly predict a lot of jumps (the for (auto current = begin; current != end; ++current) loop usually jumps back to the start of the loop), and there are a lot of memory reads which may potentially fault (sum += *current). But a CPU that refused to read a memory value until the previous branch had been resolved would be a lot slower. And yet a mispredicted jump at the end of the loop might very well cause a harmless memory fault, as the predicted branch tries to read past the buffer. This needs to be resolved without a visible fault.
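A minimal sketch of the kind of loop being described, assuming a plain vector as the buffer (the buffer name and size are only for illustration):

    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> buffer(1000, 1);

        // Each iteration compares current != end (a branch the predictor will
        // guess as "keep looping" almost every time) and dereferences *current
        // (a load that could fault). On the final iteration the predictor
        // typically guesses "keep looping" once more, so the CPU may
        // speculatively load one element past the end of the buffer; that
        // potential fault must be squashed silently when the mispredicted
        // branch is resolved.
        long long sum = std::accumulate(buffer.begin(), buffer.end(), 0LL);

        std::cout << "sum = " << sum << '\n';
        return 0;
    }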
Not exactly. The system is not allowed to execute the instructions in the wrong branch even if it makes a bad guess, or more exactly, if it does, the effects must not be visible. The basic principle is that work done on a mispredicted path must never become architecturally visible.
For the analogy with the referenced post: the train has to stop immediately at the junction if the switch was not in the correct position; it cannot go on to the next station on the wrong path, or, if it cannot stop before that, no passengers may get on or off the train.
(*) Itanium processors were able to process many paths in parallel. Intel's logic was that they could build wide processors (which do a lot in parallel) but were struggling with the effective instruction rate. By speculatively executing both branches, they used a lot of hardware (I think they could do it several levels deep, running 2^N branches), but it helped the apparent single-core speed, because in effect one HW unit always predicted the correct branch. Credit goes to MSalters for that clarification.