 

Why does a byte only have 0 to 255?

Tags: binary, byte

Why does a byte only range from 0 to 255?

asked Feb 13 '11 by Strawberry


People also ask

Why is 1 byte 255 and not 256?

A byte is a group of 8 bits. A bit is the most basic unit and can be either 1 or 0. A byte is not just 8 values between 0 and 1, but 256 (2⁸) different combinations (rather, permutations) ranging from 00000000 through, e.g., 01010101 to 11111111. Thus, one byte can represent a decimal number between 0 and 255.

Why can the byte store to 256?

A byte contains 8 bits. Each bit is either 0 or 1 and they can be combined in 256 different ways, so one byte has 256 possible values.

What is the importance of 255 in programming?

This is the maximum value representable by an eight-digit binary number, and therefore the maximum representable by an unsigned 8-bit byte (the most common size of byte, also called an octet), the smallest common variable size used in high level programming languages (bit being smaller, but rarely used for value ...

Why do 8 bits make a byte?

The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures.


3 Answers

Strictly speaking, the term "byte" can refer to a unit with a size other than 8 bits, and hence a range other than 256 values. It's just that 8 bits is the almost universal size. From Wikipedia:

Historically, a byte was the number of bits used to encode a single character of text in a computer and it is for this reason the basic addressable element in many computer architectures.

The size of the byte has historically been hardware dependent and no definitive standards exist that mandate the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. Many types of applications use variables representable in eight or fewer bits, and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size. The term octet was defined to explicitly denote a sequence of 8 bits because of the ambiguity associated with the term byte.

Ironically, these days the size of "a single character" is no longer considered to be a single byte in most cases... most commonly, the idea of a "character" is associated with Unicode, where characters can be represented in a number of different formats, but are typically either 16 or 32 bits.

It would be amusing for a system which used UCS-4/UTF-32 (the direct 32-bit representation of Unicode) to designate 32 bits as a byte. The confusion caused would be spectacular.
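To put a number on how far "one character" has drifted from "one byte", here is a minimal C11 sketch (C is used purely for illustration); it measures how many bytes the single character U+00E9 (é) occupies in each Unicode encoding form:

```c
/* Minimal C11 sketch: one Unicode character no longer fits the old
 * "one character = one byte" idea.  sizeof includes the terminating NUL. */
#include <stdio.h>

int main(void) {
    printf("UTF-8:  %zu bytes (incl. NUL)\n", sizeof(u8"\u00e9")); /* 3: 2 + 1       */
    printf("UTF-16: %zu bytes (incl. NUL)\n", sizeof(u"\u00e9"));  /* 4: 2 + 2       */
    printf("UTF-32: %zu bytes (incl. NUL)\n", sizeof(U"\u00e9"));  /* 8: 4 + 4       */
    return 0;
}
```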

However, assuming we take "byte" as synonymous with "octet", there are eight independent bits, each of which can be either on or off, true or false, 1 or 0, however you wish to think of it. That leads to 256 possible values, which are typically numbered 0 to 255. (That's not always the case though. For example, the designers of Java unfortunately decided to treat bytes as signed integers in the range -128 to 127.)
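A small C sketch of that last point, assuming an ordinary machine with 8-bit, two's-complement chars (C rather than Java purely for illustration; Java's byte corresponds to the signed view):

```c
/* Eight bits give 256 patterns; whether they are read as 0..255 or as
 * -128..127 is only a choice of interpretation. */
#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("unsigned 8-bit range: 0 .. %d\n", UCHAR_MAX);             /* 0 .. 255    */
    printf("signed   8-bit range: %d .. %d\n", SCHAR_MIN, SCHAR_MAX); /* -128 .. 127 */

    unsigned char u = 255;            /* bit pattern 11111111                     */
    signed char   s = (signed char)u; /* same bits; typically -1 on such machines */
    printf("0xFF read two ways: %d and %d\n", u, s);
    return 0;
}
```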

answered Oct 19 '22 by Jon Skeet


Because a byte, by its standard definition, is 8 bits which can represent 256 values (0 through 255).
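As a quick sanity check on that arithmetic, a tiny C sketch: each additional bit doubles the number of representable values, so 8 bits give 2⁸ = 256 of them.

```c
/* Count the values 8 bits can hold by doubling once per bit. */
#include <stdio.h>

int main(void) {
    int values = 1;
    for (int bit = 0; bit < 8; bit++) {
        values *= 2;                   /* one more bit, twice the values */
    }
    printf("8 bits -> %d values (0 .. %d)\n", values, values - 1); /* 256, 0..255 */
    return 0;
}
```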

answered Oct 19 '22 by Daniel A. White


Byte ≠ Octet

Why does a byte only range from 0 to 255?

It doesn’t.

An octet has 8 bits, thus allowing for 2⁸ possibilities. A byte is ill‐defined. One should not equate the two terms, as they are not completely interchangeable. Also, wicked programming languages that support only signed characters (ʏᴏᴜ ᴋɴᴏw ᴡʜᴏ ʏᴏᴜ ᴀʀᴇ﹗) can only represent the values −128 to 127, not 0 to 255.
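C makes the "ill-defined" part explicit: a byte is whatever char is, and the standard only guarantees CHAR_BIT ≥ 8. A minimal sketch (a typical desktop prints 8 and 255, but nothing requires that elsewhere):

```c
/* Report this implementation's byte width and largest unsigned byte value. */
#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("bits per byte (CHAR_BIT): %d\n", CHAR_BIT);
    printf("largest unsigned byte value: %d\n", UCHAR_MAX);
    return 0;
}
```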

Big Iron takes a long time to rust.

Most, but not all, modern machines have 8‑bit bytes, but that is a relatively recent phenomenon. It certainly has not always been that way. Many very early computers had 4‑bit bytes, and 6‑bit bytes were once common even comparatively recently. Both of those byte sizes top out well below 255.

Those 6‑bit bytes could be quite convenient, since with a word size of 36 bits, six such bytes fit cleanly into one of those 36‑bit words without any jiggering. That made it very useful for holding Fieldata, used by the very popular Sperry ᴜɴɪᴠᴀᴄ computers. You can only fit 4 ᴀsᴄɪɪ characters into a 36‑bit word, not 6 as with Fieldata. We had 1100 series at the computing center when I was an undergraduate, but this remains true even with the modern 2200 series.
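A rough sketch of that packing arithmetic, with a uint64_t standing in for the 36‑bit word (the codes below are made up, not real Fieldata):

```c
/* Six 6-bit codes tile a 36-bit word exactly (6 * 6 = 36). */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    unsigned char codes[6] = {01, 012, 023, 034, 045, 056}; /* six hypothetical 6-bit codes */
    uint64_t word = 0;
    for (int i = 0; i < 6; i++) {
        word = (word << 6) | (codes[i] & 077);   /* shift in 6 bits at a time */
    }
    printf("packed 36-bit word: %012llo (octal)\n", (unsigned long long)word);
    return 0;
}
```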

Enter ASCII

ᴀsᴄɪɪ — which was and is only a 7‑ not an 8‑bit code — paved the way for breaking out of that world. The importance of the ɪʙᴍ 360, which had 8‑bit bytes whether they held ᴀsᴄɪɪ or not, should not be understated.

Nevertheless, many machines long supported ᴅᴇᴄ’s Radix‑50. This was a 40‑character repertoire wherein three of its characters could be efficiently packed into a single 16‑bit word under two distinct encoding schemes. I used plenty of ᴅᴇᴄ ᴘᴅᴘ‑11s and Vaxen during my university days, and Rad‑50 was simply a fact of life, a reality that had to be accommodated.
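The arithmetic behind that trick is easy to sketch: 40³ = 64000 ≤ 65536 = 2¹⁶, so three base‑40 "digits" fit in one 16‑bit word (the index values below are illustrative, not ᴅᴇᴄ's actual Rad‑50 table):

```c
/* Pack three character indices (each 0..39) base-40 into one 16-bit word. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    printf("40^3 = %d, 2^16 = %d\n", 40 * 40 * 40, 1 << 16);   /* 64000 <= 65536 */

    unsigned a = 1, b = 2, c = 3;                       /* three indices, each 0..39 */
    uint16_t word = (uint16_t)((a * 40 + b) * 40 + c);  /* base-40 packing           */
    printf("packed word: %u\n", word);                  /* (1*40 + 2)*40 + 3 = 1683  */
    return 0;
}
```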

answered Oct 19 '22 by tchrist