I would like to know why, in the real world, compilers produce assembly code rather than microinstructions.
If you're already bound to one architecture, why not go one step further and free the processor from having to translate assembly code into microinstructions at runtime?
I think perhaps there's an implementation bottleneck somewhere, but I haven't found anything on Google.
EDIT: by microinstructions I mean the following: if your assembly instruction is ADD(R1, R2), the microinstructions would be: load R1 into the ALU, load R2 into the ALU, execute the operation, and write the result back to R1. Another way to see this is to equate one microinstruction to one clock cycle.
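To make the idea concrete, here is a minimal sketch of how one ADD(R1, R2) instruction could expand into per-cycle micro-steps on a simple microcoded datapath. All names here (latch_a, alu_out, the four-step split) are illustrative assumptions, not any real CPU's microcode:

```python
def run_add(regs, dst, src):
    """Execute ADD(dst, src) as four micro-steps, one per 'clock cycle'."""
    micro_trace = []

    # Micro-step 1: load the first operand into ALU input latch A.
    a = regs[dst]
    micro_trace.append("latch_a <- " + dst)

    # Micro-step 2: load the second operand into ALU input latch B.
    b = regs[src]
    micro_trace.append("latch_b <- " + src)

    # Micro-step 3: the ALU performs the addition.
    result = a + b
    micro_trace.append("alu_out <- latch_a + latch_b")

    # Micro-step 4: write the result back to the destination register.
    regs[dst] = result
    micro_trace.append(dst + " <- alu_out")

    return micro_trace

regs = {"R1": 5, "R2": 7}
trace = run_add(regs, "R1", "R2")
print(regs["R1"])   # 12
print(len(trace))   # 4 micro-steps for one architectural instruction
```

So the question is why the compiler emits the one ADD and lets the chip do this expansion, rather than emitting the four steps directly.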
I was under the impression that "microinstruction" was the official name. Apparently the terminology varies.
FA
Compilers don't produce micro-instructions because processors don't execute micro-instructions. They are an implementation detail of the chip, not something exposed outside the chip. There's no way to provide micro-instructions to a chip.
Because an x86 CPU doesn't execute micro-operations; it executes opcodes. You cannot create a binary image that contains micro-operations, since there is no way to encode them in a form the CPU understands.
What you are suggesting is basically a new RISC-style instruction set for x86 CPUs. The reason that isn't happening is that it would break compatibility with the vast number of applications and operating systems written for the x86 instruction set.