Why do most computer programming languages not allow binary numbers to be used like decimal or hexadecimal?
Why not allow binary numbers?
Bonus Points!... What languages do allow binary numbers?
Edit
Wow! So the majority think it's because of brevity, and poor old "waves" thinks it's due to the technical aspects of the binary representation.
Of course it's possible. Machine language is a programming language, like any other. Any Turing-complete language can be translated into any other Turing-complete language, given enough time and effort. For instruction sets, the process is called Binary Translation.
The short answer is no: computer languages are not written in binary. However, the compiler or interpreter eventually translates the language into binary through a number of steps.
A computer (the machine) operates on electricity, so it can understand only electrical signals with two states: on and off, or high voltage and low voltage. It therefore needs a language that uses just two symbols to represent these two states.
Binary is still the primary language for computers, used throughout electronics and computer hardware, for the following reasons: it is a simple and elegant design, and its 0-and-1 scheme makes it quick to detect whether an electrical signal is off (false) or on (true).
Because hexadecimal (and, more rarely, octal) literals are more compact, and people who use them can usually convert between hexadecimal and binary faster than they can decipher a raw binary number.
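To illustrate that claim, here is a small Python sketch (not part of the original answer): each hexadecimal digit corresponds to exactly four binary digits, so hex-to-binary conversion is a digit-by-digit lookup rather than real arithmetic.

```python
# Each hex digit expands to exactly four bits, so conversion is per-digit.
for digit in "1b2a":
    print(digit, "->", format(int(digit, 16), "04b"))
# 1 -> 0001
# b -> 1011
# 2 -> 0010
# a -> 1010
```

This per-digit correspondence is exactly why experienced programmers read 0x1b2a more easily than the equivalent 16-bit binary string.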
Python 2.6+ allows binary literals, and so do Ruby and Java 7, where you can use underscores to make byte boundaries obvious. For example, the hexadecimal value 0x1b2a can now be written as 0b00011011_00101010.
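To make that concrete, a short Python sketch (note: 0b literals exist since Python 2.6, but the underscore separators require Python 3.6+):

```python
x = 0b00011011_00101010   # underscore groups the two bytes (Python 3.6+)
print(x == 0x1b2a)        # True: same value as the hex literal
print(bin(x))             # '0b1101100101010' (bin() drops leading zeros)
```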
In C++ (what was then called C++0x, standardized as C++11), user-defined literals let you define a binary suffix yourself, and C++14 later added built-in 0b literals to the standard. A minimal user-defined version (requires C++14 for the constexpr loop):

constexpr int operator"" _B(const char* s)
{ int v = 0; for (; *s; ++s) v = v * 2 + (*s - '0'); return v; }
static_assert(1010_B == 10, "1010 in binary is 10");