
What exactly is a machine cycle?

I'm kinda confused about the exact definition of a "machine cycle".

For example, some sources online say that it is:

The four steps which the CPU carries out for each machine language instruction: fetch, decode, execute, and store.

While the book Programming the Z80, which supposedly has a good reputation, says:

We have seen that all instructions are executed in three phases: FETCH, DECODE, EXECUTE. We now need to introduce some definitions. Each of these phases will require several clock cycles. The Z80 executes each phase in one or more logical cycles, called a "machine cycle."

What confuses me here is that, from what I understand, the first one is basically saying: a machine cycle is a whole fetch-decode-execute cycle, while the book is basically saying: a machine cycle is the one or more clock cycles that make up each fetch, decode, or execute phase individually.

So, what is what?

asked Aug 06 '16 by Abdelrahman Eid


2 Answers

z80 has two different concepts of "cycle". The distinction matters, because z80 is a multi-cycle architecture, and it uses both multiple clock cycles per "step" and (often) multiple "steps" per instruction.

The "steps" are called machine cycles (M-cycles); they do "high level" tasks such as reading from memory, doing an ALU operation, etc. A complex instruction such as inc (iy+42) takes many machine cycles: decoding the prefix, decoding the main opcode, reading the offset, adding it to iy, doing the increment, and writing back the result. Conditional jumps even have a varying number of M-cycles; they omit the M-cycle that does the actual jumping if the condition is false.

Each M-cycle then takes multiple (3 to 6) clock cycles (aka T-cycles or T-states; this terminology has mostly died out except when referring to old multi-cycle processors). For example, a memory read takes 3 cycles, an opcode fetch (OCF) typically takes 4, some internal operations take 5, and 16-bit increments seem to extend the OCF by another 2 cycles somehow.
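To make the M-cycle/T-state distinction concrete, here is a small sketch tallying the cycles of inc (iy+42). This is not a cycle-accurate emulator; the breakdown and per-cycle counts below are assumptions based on commonly cited Z80 timing tables, and the M-cycle names are informal.

```python
# Illustrative breakdown (assumed, not authoritative) of the Z80
# instruction `inc (iy+42)` into machine cycles and their T-states.
m_cycles = [
    ("fetch FD prefix",        4),  # opcode-fetch M-cycle
    ("fetch main opcode",      4),  # opcode-fetch M-cycle
    ("read displacement +42",  3),  # memory-read M-cycle
    ("add displacement to IY", 5),  # internal-operation M-cycle
    ("read byte, increment",   4),  # memory-read M-cycle (extended)
    ("write result back",      3),  # memory-write M-cycle
]

total_t_states = sum(t for _, t in m_cycles)
print(f"{len(m_cycles)} M-cycles, {total_t_states} T-states")
# -> 6 M-cycles, 23 T-states
```

So one instruction spans several machine cycles, and each machine cycle spans several clock cycles, which is exactly the two-level structure the book is describing.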

That's all quite z80-specific.

Elsewhere the term "machine cycle" has been used to refer to some sort of "complete trip" of an instruction from start to finish. That's not what would be meant in a z80 context.

answered Oct 13 '22 by harold


The four steps which the CPU carries out for each machine language instruction: fetch, decode, execute, and store.

Each of these steps would typically happen in a different cycle. "Execute" is a complicated process involving some sub-steps and plenty of transistors, but for most instructions it can still be done in a single cycle. That's part of the reason for putting the boundaries between parts of the CPU in those places.


the first one is basically saying: A machine cycle is a fetch-decode-execute cycle

No, it isn't saying that. That's completely wrong. Old non-pipelined CPUs usually have a throughput of less than one instruction per cycle, because each instruction takes multiple cycles before the next one can start.
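A quick back-of-the-envelope calculation illustrates that point. The cycle count and clock speed here are hypothetical, chosen only to show why a multi-cycle CPU's throughput ends up well below one instruction per cycle:

```python
# Throughput of a hypothetical non-pipelined CPU where every instruction
# takes the same number of clock cycles from start to finish.
cycles_per_instruction = 4     # e.g. fetch, decode, execute, store
clock_hz = 4_000_000           # a 4 MHz clock, roughly Z80-era

throughput_ipc = 1 / cycles_per_instruction
instructions_per_second = clock_hz / cycles_per_instruction

print(f"{throughput_ipc} instructions per cycle")        # -> 0.25
print(f"{instructions_per_second:.0f} instructions/s")   # -> 1000000
```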


A CPU clock cycle, or machine cycle, is a cycle of low voltage to high voltage and back. https://en.wikipedia.org/wiki/Clock_signal. The CPU uses this clock input to synchronize the various steps.

The max clock speed for a CPU is limited by the slowest thing that needs to happen in a single clock cycle. e.g. maybe the decode stage has a lot of gate delays, so if the clock ran any faster the result wouldn't be latched into the latch between that stage and the next stage when the clock transitioned from high to low.
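The constraint above can be written down directly: the clock period must be at least as long as the slowest stage, so the maximum frequency is the inverse of the longest stage delay. The stage delays here are made-up numbers for illustration:

```python
# Max clock frequency from per-stage delays: the period has to cover
# the slowest stage, so f_max = 1 / max(stage delay).
stage_delay_ns = {"fetch": 2.0, "decode": 3.5, "execute": 3.0, "store": 2.5}

slowest_ns = max(stage_delay_ns.values())   # 3.5 ns (the decode stage)
max_clock_ghz = 1 / slowest_ns              # period in ns -> freq in GHz

print(f"max clock ~ {max_clock_ghz:.3f} GHz")
```

Note that making the other stages faster would not help at all; only shortening the decode stage (or splitting it into two pipeline stages) raises the ceiling.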

A pipelined CPU will try to keep its various stages occupied at the same time, instead of waiting until one instruction is finished executing before decoding (or maybe even fetching) the next one. See https://en.wikipedia.org/wiki/Classic_RISC_pipeline
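A toy cycle count shows why keeping the stages occupied pays off. For n instructions on a k-stage machine, a fully serialized (non-pipelined) CPU takes n*k cycles, while an ideal pipeline with no stalls takes k + n - 1: k cycles to fill the pipeline, then one instruction finishes per cycle. The numbers below are hypothetical:

```python
# Ideal-case cycle counts: non-pipelined vs. pipelined execution.
def non_pipelined_cycles(n: int, k: int) -> int:
    return n * k                # each instruction runs start-to-finish alone

def pipelined_cycles(n: int, k: int) -> int:
    return k + n - 1            # fill once, then one completion per cycle

n, k = 100, 5                   # 100 instructions, classic 5-stage pipeline
print(non_pipelined_cycles(n, k))   # -> 500
print(pipelined_cycles(n, k))       # -> 104
```

Real pipelines fall short of this ideal because of hazards and stalls, but the asymptotic throughput still approaches one instruction per cycle.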

answered Oct 13 '22 by Peter Cordes