I understand how code is compiled to assembly, and that assembly maps roughly 1:1 onto binary machine codes. Can somebody help me understand how that binary is connected to the hardware? How is the binary physically read and run? How does an if statement work in the hardware?
From google searches I'm thinking that maybe my question title should be "how is binary data put on a line of a bus" but I wasn't sure.
Thanks.
A transistor is a tiny switch that is activated by the electronic signals it receives. The digits 1 and 0 used in binary reflect the on and off states of a transistor. Computer programs are sets of instructions. Each instruction is translated into machine code: simple binary codes that activate the CPU.
Computers can represent numbers using binary code in the form of digital 1s and 0s inside the central processing unit (CPU) and RAM. These digital numbers are electrical signals that are either on or off inside the CPU or RAM.
Computers use binary, or base 2, to store data. The binary system has only two possible digits, the 0 and the 1. To understand binary, we will start with something everyone is familiar with: decimal, or base 10. Base 10 has ten digits, from 0 through 9.
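To see the parallel between the two bases, here's a quick Python sketch of place values; the numbers chosen are arbitrary examples:

```python
# In base 10, each digit position weights a power of 10.
decimal = 3 * 100 + 0 * 10 + 7 * 1        # the decimal number 307

# In base 2, each digit position weights a power of 2.
binary = 1 * 8 + 1 * 4 + 0 * 2 + 1 * 1    # the binary number 1101

print(decimal)   # 307
print(binary)    # 13 (what 1101 in binary means in decimal)
```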
When the computer needs to convert the binary data back to human-readable text, it simply reverses the process. For example, a computer may convert the binary 01101000 to the decimal value 104, which it knows is the letter h using the ASCII standard conversion.
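That round trip is easy to try in Python, where `int` and `chr` perform the binary-to-number and number-to-character steps:

```python
bits = "01101000"          # the binary pattern from the example above
value = int(bits, 2)       # interpret the string as a base-2 number
char = chr(value)          # look up that number in the character table (ASCII range)

print(value)   # 104
print(char)    # h
```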
I think this is actually a fun question. I would say "here's how to build a computer in a few easy steps".
Start with some simple logic circuits, such as AND, OR, NOT, and a flip-flop. A flip-flop is a pair of transistors arranged so that if one is ON, the other is OFF, or vice-versa. That way it can "remember" one bit of information, so you can think of it as storing a single binary digit. Some input lines can put it in one state or the other, and thus "write" to it.
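Here's a rough Python simulation of that idea, modeling the flip-flop as two cross-coupled NOR gates (a common textbook arrangement) rather than bare transistors; the settle loop stands in for the electrical feedback between the two gates:

```python
def nor(a, b):
    """NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

class SRLatch:
    """Set-reset latch: S=1 writes a 1, R=1 writes a 0, S=R=0 just remembers."""
    def __init__(self):
        self.q, self.q_bar = 0, 1

    def update(self, s, r):
        # Iterate a few times so the feedback loop between the gates settles.
        for _ in range(4):
            q = nor(r, self.q_bar)
            q_bar = nor(s, q)
            self.q, self.q_bar = q, q_bar
        return self.q

latch = SRLatch()
latch.update(s=1, r=0)         # "write" a 1
print(latch.update(0, 0))      # 1: with no inputs, the latch remembers
latch.update(s=0, r=1)         # reset it to 0
print(latch.update(0, 0))      # 0
```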
You can store a bigger number by having a bunch of flip-flops, and call it a "register". For example, if you have four flip-flops in a register, there are 16 possible combinations, so you can think of it as holding a number from 0 to 15.
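In Python, reading such a register is just weighting each flip-flop's bit by a power of two; the bit values here are an arbitrary example:

```python
# Outputs of four flip-flops, most significant bit first.
flip_flops = [1, 0, 1, 1]

# Combine them into a single number from 0 to 15.
value = 0
for bit in flip_flops:
    value = value * 2 + bit

print(value)   # 11 (binary 1011)
```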
Skipping ahead a little bit, you can buy a "memory chip", which is essentially a bank of registers, say 16 of them. It has 4 wires coming in (the "address" wires) and 4 wires coming out (the "data" wires). So a number from 0 to 15 can come in as an address, which selects one of the 16 registers, whose value is presented on the output data wires (thus "reading" it). A few more wires can cause data to come IN on the data wires, so a number can be put into ("written" to) the selected register.
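A minimal Python sketch of such a chip; the class and method names here are mine, chosen just for illustration:

```python
class MemoryChip:
    """16 four-bit registers: 4 address wires select one, 4 data wires carry its value."""
    def __init__(self):
        self.registers = [0] * 16

    def read(self, address):
        # The address wires select a register; its value appears on the data wires.
        return self.registers[address & 0xF]

    def write(self, address, data):
        # The write-enable path: put a 4-bit value into the selected register.
        self.registers[address & 0xF] = data & 0xF

mem = MemoryChip()
mem.write(3, 9)
print(mem.read(3))   # 9
```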
Now suppose you have an external 4-bit register (call it R), and a bit of circuitry, so that on every clock tick it presents R's value on the memory's address wires, reads the data that comes out, and loads that data back into R as the next address.
Depending on the numbers that have been pre-loaded into the memory, you can see that this thing will cycle around through a series of numeric addresses, because the number at each address determines what the next address will be.
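You can simulate that cycling in a few lines of Python; the pre-loaded addresses 0, 5, and 9 are an arbitrary example:

```python
# Pre-load the memory so each cell holds the address of the next one.
memory = [0] * 16
memory[0], memory[5], memory[9] = 5, 9, 0   # 0 -> 5 -> 9 -> 0 -> ...

r = 0                     # the external register R
for _ in range(6):        # six clock ticks
    print(r, end=" ")
    r = memory[r]         # the value read at this address becomes the next address
# prints: 0 5 9 0 5 9
```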
Now, you can embellish this thing in a lot of ways. You can expand the memory to have 5 bits of address (32 registers). Then if one of the address lines is connected to the outside world, it will do different things depending on the outside world. That's a "finite-state machine".
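A toy sketch of such a machine in Python, with a made-up transition table, where the input bit stands in for the address line connected to the outside world:

```python
# For each state, the next state depends on the current state AND an outside
# input bit: table[state] = (next state if input is 0, next state if input is 1).
# This particular table is an arbitrary example.
table = {0: (0, 1), 1: (2, 1), 2: (0, 3), 3: (3, 0)}

state = 0
for bit in [1, 1, 0, 1, 0]:   # a stream of inputs from the outside world
    state = table[state][bit]

print(state)   # 3
```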
You can replace the R register with a simple counter, and call it a "program counter". You can take the data coming out of the memory and call it an "instruction", and use some of its bits to read other memory addresses and load a set of arithmetic registers. You can use some to control whether the R register simply increments, or maybe gets a new address stored in it. That's called "jumping".
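Here's a toy fetch-execute loop in that spirit; the two opcodes and their meanings are invented purely for illustration:

```python
# A made-up 2-instruction machine. Each memory word is (opcode, operand):
#   opcode 0 = load the operand into the accumulator
#   opcode 1 = jump to the operand address
#   anything else = halt
program = {
    0: (0, 7),   # ACC <- 7
    1: (1, 4),   # jump to address 4
    4: (0, 2),   # ACC <- 2
    5: (9, 0),   # halt
}

pc, acc = 0, 0               # program counter and accumulator
while True:
    opcode, operand = program[pc]   # fetch the instruction at the counter
    if opcode == 0:
        acc = operand
        pc += 1              # normal case: the counter simply increments
    elif opcode == 1:
        pc = operand         # "jumping": a new address is stored into the counter
    else:
        break

print(acc)   # 2
```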
Of course, this is a very simple computer, but that's roughly how they started out.
A complete answer to your question would encompass a book, and a fairly thick one at that.
When you say "code" I'm assuming you're referring to a high level compiled language, like C++. Usually, a compiler translates this code into machine language, or binary as you state in your question. We'll neatly avoid all discussion of managed vs. unmanaged code, p-code, etc. That is, we're just talking about compilers that target specific processors/operating systems. Java, for example, compiles into a pseudo-code called bytecode. We're also going to avoid the whole matter of link editing, or linking, which is how multiple source modules get compiled into machine language then bound together into a single executable program.
Okay, now that we've covered most of what we're not going to cover, here's what usually happens. And by "usually", I mean most compiled languages in a DOS, Linux or Windows environment. The source code is translated into machine language, which is written out to an executable file. This executable file contains, more or less, an image of what the program should look like in memory. When you tell the operating system to run your program, the OS's equivalent of a "Load and Go" executes. What that means is that the memory image in the executable file is loaded into memory, then the operating system does a machine language JUMP to the first instruction in the program. The CPU then blindly follows the instructions from there on, until an EXIT is encountered.
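A toy "Load and Go" in Python, with a completely made-up executable format, just to show the shape of the loader's job:

```python
# A pretend executable file: an entry point plus a memory image of instructions.
executable = {
    "entry": 0,
    "image": [("PRINT", "hello"), ("PRINT", "world"), ("EXIT", None)],
}

ram = list(executable["image"])   # the loader copies the image into memory
pc = executable["entry"]          # ...then "jumps" to the first instruction

while True:                       # the CPU blindly follows instructions
    op, arg = ram[pc]
    pc += 1
    if op == "PRINT":
        print(arg)
    elif op == "EXIT":
        break
```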
This whole JUMP... EXIT nastiness is a drastic oversimplification for modern OS's. As you can imagine, if the CPU were to follow, with blind obedience, the instructions in a program that's gone astray, the computer would crash... or worse. Such was the fate of many an errant program in the early days, and a prime contributor to many a BSOD.