There seem to be many opinions on what machine code actually is. I've heard some say it's assembly, or binary, or hex.
Is it correct to say that machine code is essentially a set of instructions for a particular processor? If so, I imagine these can be represented in binary or hexadecimal notation, or assembly. But what does the non-translated "actual" machine code look like? Is it based on the word size of the architecture? Or is hexadecimal for all intents and purposes the default representation?
What does it look like when sitting on a hard drive? What does it look like when sitting in a register? How about when it's being processed, is it simply a set of voltage changes at that point?
Machine code, also known as machine language, is the elemental language of computers. It is read by the computer's central processing unit (CPU), is composed of digital binary numbers and looks like a very long sequence of zeros and ones.
For example, on the Zilog Z80 processor, the machine code 00000101 (0x05), which causes the CPU to decrement the B processor register, would be represented in assembly language as DEC B.
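To see that binary, hex, and decimal are just different renderings of the same instruction byte, here is a small Python sketch. The one-entry mnemonic table is hypothetical and only for illustration; a real Z80 disassembler table covers the full opcode space.

```python
# The single Z80 instruction DEC B is the byte 0b00000101 (0x05).
opcode = 0b00000101

# The same value, viewed three ways:
print(f"binary:  {opcode:08b}")   # 00000101
print(f"hex:     {opcode:02X}")   # 05
print(f"decimal: {opcode}")       # 5

# A disassembler is, at its core, a table from opcode bytes to mnemonics.
# (Hypothetical one-entry table; a real Z80 table has hundreds of entries.)
MNEMONICS = {0x05: "DEC B"}
print(MNEMONICS[opcode])          # DEC B
```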
If you compare C with Java, C is closer to the hardware, because Java does not operate on the system directly. Java is executed on the Java virtual machine, which in turn operates on the system. C has no such layer between it and the system and is therefore closer to the hardware.
Machine code is a computer program written in machine language. It uses the instruction set of a particular computer architecture. It is usually written in binary.
Machine code is simply binary data that corresponds to CPU instructions for a specific processor architecture.
I won't go into how it is stored too much, because that depends on where it is stored. On disk, for example, it is generally stored as a sequence of magnetized regions. Machine code is no different from other binary data in the storage aspect. If your question is more about how data is stored on a computer, you should research the various data-storage devices in a computer, like HDD, RAM, and registers, to name a few.
The easiest way to visualize how machine code is stored is to look at some in a hex editor. This shows you the binary data represented by hex numbers. For example, take the instruction:
0xEB 0xFE
This could just as easily be written 1110101111111110, or 60414; it depends on how you want to convert the binary into a human-readable form.
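Those conversions can be checked directly in Python; the hex, decimal, and binary forms are all renderings of the same two bytes:

```python
# The two instruction bytes from the hex editor:
code = bytes([0xEB, 0xFE])

as_hex = code.hex()                    # 'ebfe'
as_int = int.from_bytes(code, "big")   # 60414
as_bin = format(as_int, "016b")        # '1110101111111110'

print(as_hex, as_int, as_bin)
```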
This instruction represents an infinite loop. (This assumes it is being run on an x86 CPU; other CPUs might interpret it differently.) It can be coded in assembly like this:
j:
jmp j
When you run the assembler, it takes the above code and turns it into the binary machine code above.
The instruction really has two parts. The first, 0xEB, is the opcode. When the CPU decodes it, it means: read the next byte from the program and jump that many bytes relative to the following instruction. The CPU then reads the byte 0xFE. Since it expects a signed integer, it interprets that byte as the number -2. The instruction has now been fully read, so the instruction pointer has moved forward 2 bytes. The instruction is then executed, causing the instruction pointer to move forward -2 (0xFE) bytes, which effectively sets the instruction pointer back to the value it had when the instruction started.
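The decode-and-execute steps just described can be sketched as a simplified model in Python. This is an illustration of the arithmetic only, not of real x86 decoding, which is far more involved:

```python
# Simplified model of an x86 CPU executing the short jump EB FE.
program = bytes([0xEB, 0xFE])   # jmp -2 (an infinite loop)

ip = 0                           # instruction pointer at the jump
opcode = program[ip]             # 0xEB: short jump with a rel8 operand
rel8 = program[ip + 1]           # 0xFE read as an unsigned byte (254)
if rel8 >= 0x80:                 # reinterpret as a signed 8-bit value
    rel8 -= 0x100                # 254 -> -2

ip += 2        # the 2-byte instruction has been read
ip += rel8     # the displacement is applied: 2 + (-2) = 0

# The assembler produced 0xFE in the first place from:
# target - (address_of_jump + 2) = 0 - 2 = -2 = 0xFE in two's complement.
print(ip)      # back where we started: 0
```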
I hope this answers your question. If you are wondering about the internal workings of CPUs, read up on microcode and electronic logic gates. Basically, it's a bunch of voltage differences, such as a 1 bit being a 5 volt charge and a 0 bit being a 0 volt charge.
Like me, you seem to be curious about how computers work under the hood. I don't know enough to answer your questions well (and it's a large topic anyway), but I highly recommend Steve Gibson's "Let's Design a Computer" podcast series. Here's an excerpt from the "Machine language" transcript, to give you a flavor of it. . .
And all skipping means is, instead of adding one to the program counter, we add two, or we add one twice, which is actually how these machines worked back then. And that just causes us to skip over a jump. So essentially that means we can branch to anywhere we want to in memory or continue on our way, which gives us, even though that's very simple, that gives us enough power to allow machines to make decisions. And we've got input/output; we've got math; we've got the ability to transfer data from one location in memory to another. Those are all the essentials of the way a machine functions. That is machine language.
Now, the one layer of humanity that's put on top of that is what's called "assembly language," which is nothing but naming things. For example, you create sort of a so-called mnemonic for the different instructions. So, for example, load the accumulator would be LDA. Store the accumulator, STA. You want them to be short because you're going to be typing them a lot. Remember that you end up using lots of little instructions in order to get something done. And then the only other thing really that assembly language does, it allows you to name locations in memory.
So, for example, you might say LDA, for load accumulator, current score. And current score would simply refer to a, like a variable essentially, a location in memory that you had labeled "current score." And then if you did STA, store accumulator, new score, well, it would first load the current score into the accumulator, and then store that into a different location called new score. So really that's all we're talking about is some simple abbreviations for helping sort of remember and use these individual instructions and convenient labels for locations in memory so that you're not having to remember, oh, that's in location 329627. I mean, who can do that? So instead you just, you label that location with an English, an alphanumeric phrase of some sort, and then you refer to that location by the phrase rather than by its actual number.
And in fact you don't care what the number is. That's one of the things that the assembler will do for you is you just say I need memory called these things. And it worries about where they go because it doesn't really matter to you as long as they're consistently referred to. And that's the whole process. That's machine language and assembly language. And that's the way it was 50 years ago, and more or less that's the way it is now.
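The "naming things" described in the transcript can be sketched as a toy two-pass assembler. The opcode values, instruction format, and starting address here are all invented for illustration; the point is only that mnemonics map to opcodes and labels map to memory addresses the programmer never has to see:

```python
# A toy assembler for the hypothetical accumulator machine in the transcript.
OPCODES = {"LDA": 0x01, "STA": 0x02}   # invented opcode numbers

source = [
    ("LDA", "current_score"),   # load the accumulator from 'current_score'
    ("STA", "new_score"),       # store the accumulator into 'new_score'
]

# Pass 1: assign each named variable a memory address. The programmer
# never needs to know the number, only the label.
addresses = {}
next_addr = 100                 # arbitrary start of the data area
for _, label in source:
    if label not in addresses:
        addresses[label] = next_addr
        next_addr += 1

# Pass 2: replace mnemonics with opcodes and labels with addresses.
machine_code = []
for mnemonic, label in source:
    machine_code.append(OPCODES[mnemonic])
    machine_code.append(addresses[label])

print(machine_code)   # [1, 100, 2, 101]
```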
. . . but he backs up even further than this and starts with transistors and logic gates. From what I can tell, here's the complete series (and the listening audience has contributed helpful diagrams in the wiki):
If anyone takes issue with anything Steve says in these episodes, the best places to provide feedback are http://www.grc.com/feedback.htm or http://www.grc.com/discussions.htm or https://twitter.com/SGgrc