I've always wondered why leading zeroes (0) are used to represent octal numbers, instead of, for example, 0o. The use of 0o would be just as helpful, but would not cause as many problems as a leading 0 (e.g. parseInt('08') in JavaScript). What are the reasons behind this design choice?
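For what it's worth, the same ambiguity exists in C's standard library: strtol called with base 0 auto-detects the base, and a leading 0 selects octal. A minimal sketch of that pitfall (variable names are just for illustration):

```
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *end;

    /* With base 0, strtol auto-detects the base: a leading "0" selects
     * octal, so parsing stops at the invalid octal digit '8'. */
    long n = strtol("08", &end, 0);
    printf("value: %ld, stopped at: \"%s\"\n", n, end);   /* value: 0, stopped at: "8" */

    /* Passing an explicit base avoids the surprise, much like passing
     * a radix to parseInt does in JavaScript. */
    long m = strtol("08", &end, 10);
    printf("value: %ld\n", m);                             /* value: 8 */
    return 0;
}
```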
Leading zeros are used to make ascending order of numbers correspond with alphabetical order: e.g., 11 comes alphabetically before 2, but after 02. (See, e.g., ISO 8601.)
Octal digits map directly onto groups of three binary bits. For example, the octal number 12 (base 8) is 001010 in binary, where the octal digit 1 corresponds to 001 and 2 corresponds to 010.
An integer literal that starts with 0 is an octal number, much like a number starting with 0x is a hexadecimal number.
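A short illustration of how a C compiler reads these prefixes (the variable names here are only for the example):

```
#include <stdio.h>

int main(void)
{
    int dec = 10;    /* plain decimal literal        */
    int oct = 012;   /* leading 0: octal, equals 10  */
    int hex = 0x0A;  /* 0x prefix: hexadecimal, = 10 */

    printf("%d %d %d\n", dec, oct, hex);   /* prints: 10 10 10 */
    return 0;
}
```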
The main advantage of octal is that each digit maps to exactly three bits, so numbers are far more compact than raw binary while conversion between octal and binary remains trivial in both directions (see the sketch below), which keeps hand conversions quick and less error-prone.
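As a rough sketch of how mechanical that conversion is, each octal digit can simply be replaced by its 3-bit pattern. The helper name octal_to_binary below is mine, not part of any standard API:

```
#include <stdio.h>
#include <string.h>

/* Expand each octal digit into its 3-bit binary group.
 * 'out' must have room for 3 * strlen(oct) + 1 characters. */
static void octal_to_binary(const char *oct, char *out)
{
    static const char *bits[8] = {
        "000", "001", "010", "011", "100", "101", "110", "111"
    };
    out[0] = '\0';
    for (; *oct; ++oct)
        strcat(out, bits[*oct - '0']);
}

int main(void)
{
    char buf[64];
    octal_to_binary("12", buf);   /* octal 12 */
    printf("%s\n", buf);          /* prints 001010 */
    return 0;
}
```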
Most modern languages inherit this convention from C, which inherited it from B, which inherited it from BCPL.
Except that BCPL used #1234 for octal and #x1234 for hexadecimal. B departed from this convention because # was a unary operator in B (integer to floating point conversion), so #1234 could not be used, and # as a base indicator was replaced with 0.
The designers of B tried to make the syntax very compact. I guess this is the reason they did not use a two-character prefix.