Is this definition of an octal byte correct?

Tags: binary, byte, octal

My instructor stated that "an octal byte consists of 6 bits". I am having difficulty understanding why this is, as an octal digit consists of 3 binary bits. I also do not understand the significance of an octal byte being defined as '6 bits' as opposed to some other number.

Can anyone explain why this is, if it is in fact true, or point me to a useful explanation?

asked Feb 23 '23 by user1092697

1 Answer

This is all speculation and guesswork, since none of this is in any way standard terminology.

An 8-bit byte can be written as two hexadecimal digits, because each digit expresses 4 bits. The largest such byte value is 0xFF.
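For instance, a minimal C sketch verifying this (the variable name `byte` is just illustrative):

```c
#include <stdio.h>

int main(void) {
    unsigned char byte = 0xFF;  /* two hex digits, 4 bits each = 8 bits */
    printf("%d\n", byte);       /* prints 255, i.e. 2^8 - 1 */
    return 0;
}
```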

By analogy, two octal digits can express 2 × 3 = 6 bits; the largest such value is 077. So if you like, you can call a pair of octal digits an "octal byte", but only if you are also willing to call an 8-bit byte a "hexadecimal byte".
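And a matching sketch for the octal pair (again just illustrative; note that in C a leading 0 marks an octal literal):

```c
#include <stdio.h>

int main(void) {
    unsigned char pair = 077;   /* two octal digits, 3 bits each = 6 bits */
    printf("%d\n", pair);       /* prints 63, i.e. 2^6 - 1 */
    return 0;
}
```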

In my personal opinion, neither notion is helpful or useful, and you'd be best off just saying how many bits your byte has.

answered Feb 24 '23 by Kerrek SB