I know how computers translate numbers to binary. But what I don't understand is this: I've heard that computers translate everything (words, instructions, ...) to binary, not just numbers. How is this possible?
Could you show me some examples? Like how does a computer translate the letter "A" to binary?
And when computers see a binary code, how can they know if that long string of 0s and 1s represents a number or a word or an instruction?
Example:
Let's say that a computer programmer encoded the letter "Z" so that it translates to this binary string: 11011001111011010111
So whenever the computer encounters this binary string, it will translate it to the letter "Z".
But what happens when we ask this computer, "What is the product of 709 and 1259?"
The computer would answer "892631". But that number, when translated to binary, is also 11011001111011010111.
So how would it tell the difference between "Z" and "892631"?
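To make sure I'm not mixing up the numbers, here is a quick check in Python (Python is just what I had at hand; any language would do):

```python
# 709 multiplied by 1259 really is 892631 ...
print(709 * 1259)                       # 892631

# ... and 892631 written in binary really is that same bit string.
print(bin(892631))                      # 0b11011001111011010111
print(int("11011001111011010111", 2))   # 892631
```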
Please note that I don't know much about computer science, so please explain everything in simple terms.
Computers convert text and other data into binary using an assigned ASCII (American Standard Code for Information Interchange) value. Once the ASCII value is known, that value can be converted to binary.
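As a rough sketch, here is how that two-step conversion (character to ASCII value, ASCII value to bits) might look in Python (Python chosen only for readability):

```python
letter = "A"

# Step 1: look up the character's ASCII value.
ascii_value = ord(letter)                 # 65

# Step 2: write that value out as a pattern of 8 bits.
binary_form = format(ascii_value, "08b")

print(ascii_value)   # 65
print(binary_form)   # 01000001

# The reverse trip: bits -> number -> character.
print(chr(int("01000001", 2)))            # A
```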
Computers use binary to store data, not only because it is a reliable way of storing data, but because computers only understand 1s and 0s, that is, binary. A computer's main memory consists of transistors that switch between high and low voltage levels, sometimes 5V, sometimes 0V.
A binary code represents text, computer processor instructions, or any other data using a two-symbol system. The two-symbol system used is often "0" and "1" from the binary number system. The binary code assigns a pattern of binary digits, also known as bits, to each character, instruction, etc.
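To see that "pattern of bits per character" idea concretely, here is a small Python sketch that spells out a short word one character at a time (the word "Hi" is just an arbitrary example):

```python
word = "Hi"

# Each character gets its own ASCII value and its own 8-bit pattern.
for ch in word:
    value = ord(ch)
    print(ch, value, format(value, "08b"))

# Prints:
# H 72 01001000
# i 105 01101001
```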
That is an excellent question, and one which would take years and several PhDs to fully explain. I can offer you a simplistic answer, but to fully understand you will have to do MUCH more research. Might I suggest some free online classes from MIT on the subject here.
At the lowest level, the letter A and the number 65 are in fact stored using the same sequence of 0s and 1s: 1000001, if I'm not mistaken.
The computer then decides what it is when it grabs it from memory. This means that letters can be displayed as numbers, and vice versa.
The way the computer knows what it's looking for is that the programmer tells it what it's looking for. The programmer says, "I want a number stored at such-and-such a location," and the computer goes and looks for it.
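Here is a minimal Python sketch of that idea, assuming plain ASCII text: one byte sits in memory, and it is up to the program whether to read it back as a number or as a letter.

```python
# One byte sitting in memory: the bit pattern 01000001.
data = bytes([0b01000001])

# Ask for it as a number and you get 65.
print(data[0])               # 65

# Ask for the very same byte as text and you get the letter "A".
print(data.decode("ascii"))  # A
```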
Let's step up a level, because programmers rarely program at such a low level any more. Other programs (usually compilers, which take code like C++ and turn it into something the computer can understand) ensure that the location we are accessing is in fact what we said it is. They have extra information that tells them that this particular set of 1s and 0s is actually a floating-point type (has a decimal point), whereas this other set is an integer (no decimal point).
Then other types build on those types: bigger integers, floating-point numbers, or strings of characters, and again the compilers enforce the types.
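Here is a rough Python sketch of that, using the standard struct module to read one fixed set of 32 bits first as an integer and then as a 32-bit floating-point number (the particular value 1078530011 is just a convenient example whose bit pattern also happens to spell out pi as a float):

```python
import struct

# Lay a 32-bit integer out in memory as 4 raw bytes (little-endian).
raw = struct.pack("<i", 1078530011)
print(raw.hex())             # db0f4940  (just bits, no meaning attached yet)

# Read the very same 4 bytes back under two different "type labels".
as_int = struct.unpack("<i", raw)[0]    # treat the bits as an integer
as_float = struct.unpack("<f", raw)[0]  # treat the same bits as a float

print(as_int)     # 1078530011
print(as_float)   # roughly 3.1415927 (pi): same bits, different meaning
```

The bits themselves never change; only the type the program attaches to them does, and keeping track of those type labels is exactly the bookkeeping the compiler does for you.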
This is an oversimplification, and I realize that everything here isn't exactly correct, but it'll get you on the right path. You might check out some of these topics to get a much better idea:
How are instructions differentiated from data?
http://en.wikipedia.org/wiki/Computer_data_storage
How are data, addresses, and instructions differentiated in the processor/registers/memory?
http://en.wikipedia.org/wiki/Reference_(computer_science)
Hope this clears things up a little. Feel free to ask for clarification!