
Why do we sometimes use hexadecimal format over decimal?

I have read the explanation about

int a = 0x1; //hexadecimal format

But still, I can't see why a programmer would use 0x1 or 0x2 instead of the plain integers 1 or 2...

Can someone please explain this?

Thank you.

Asked by felixwcf, Sep 03 '14


3 Answers

There are a number of reasons one would prefer a hexadecimal representation over a decimal one. The most common in computing is bit fields. A few people have already mentioned color codes, e.g.:

red   = 0xFF0000 // 16711680 in decimal
green = 0x00FF00 // 65280 in decimal
blue  = 0x0000FF // 255 in decimal

Note that this representation of color is not only more intuitive than trying to figure out what color a random integer like 213545 might be, but it also takes up less space than a 3-tuple like (125, 255, 0) representing (R,G,B). The hex representation is an easy way to abstract the same idea as the 3-tuple with a lot less overhead.
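
For instance, here's a small sketch (the color value is arbitrary): pulling the channels back out is just shifts and masks, and the hex literal lines up visually with each mask:

int color = 0xFF8040;          // R = 0xFF, G = 0x80, B = 0x40
int r = (color >> 16) & 0xFF;  // 255
int g = (color >> 8)  & 0xFF;  // 128
int b =  color        & 0xFF;  // 64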

Keep in mind that bit fields have many applications; consider a spacetime bit field:

  Represents x coordinate
  |  Represents y coordinate
  |  |  Represents z coordinate
  |  |  |  Represents t
  |  |  |  |
  1A 2B 3C 4D
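
A rough sketch of what packing and unpacking such a field could look like in Java (the class and layout here are made up for illustration, one byte per coordinate):

public class SpacetimeField {
    // Pack four 8-bit values into one 32-bit int, highest byte first.
    public static int pack(int x, int y, int z, int t) {
        return (x & 0xFF) << 24 | (y & 0xFF) << 16 | (z & 0xFF) << 8 | (t & 0xFF);
    }

    public static void main(String[] args) {
        int field = pack(0x1A, 0x2B, 0x3C, 0x4D); // 0x1A2B3C4D
        int y = (field >> 16) & 0xFF;             // pull the y coordinate back out: 0x2B
        System.out.printf("field=0x%08X, y=0x%02X%n", field, y);
    }
}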

Another reason someone might use hex values is that it's sometimes easier to remember (and represent) a byte as two hex digits rather than three decimal digits. Consider the x86 Instruction Reference. I know off the top of my head that 0xC3 is ret; I find it easier to memorize the hex range 00-FF than the decimal range 0-255 (I looked it up, and ret turns out to be 195), but your mileage may vary. For example, this is some code from a project I've been working on:

public class x64OpcodeMapping {
    public static final Object[][] map = new Object[][] {
            { "ret",   0xC3 },
            { "iret",  0xCF },
            { "iretd", 0xCF },
            { "iretq", 0xCF },
            { "nop"  , 0x90 },
            { "inc"  , 0xFF },
    };
}
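
A lookup against that table could then be as simple as the following (a hypothetical helper, not part of the original project):

public static Integer opcodeFor(String mnemonic) {
    for (Object[] entry : x64OpcodeMapping.map) {
        if (entry[0].equals(mnemonic)) {
            return (Integer) entry[1]; // e.g. opcodeFor("ret") -> 0xC3
        }
    }
    return null; // unknown mnemonic
}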

There are clear benefits (not to mention consistency) when using hexadecimal notation here. Finally, and as Obicere mentions, hex codes are often used as error codes. Sometimes they are grouped in a bit field-like manner. For example:

0x0X = fatal errors
0x1X = user errors
0x2X = transaction errors
// ...
// X is a wildcard

Under such a schema, a minimal error list would look like:

0x00 = reserved
0x01 = hash mismatch
0x02 = broken pipe
0x10 = user not found
0x11 = user password invalid
0x20 = payment method invalid
// ...

Note that this also allows us to add new errors under 0x0X should such a need arise. This answer ended up being a lot longer than I expected, but hopefully I shed some light.
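
Assuming that 0xYX scheme, classifying a code is a single mask on the high nibble (the names below are just illustrative):

static String categorize(int code) {
    switch (code & 0xF0) {        // keep only the "class" nibble
        case 0x00: return "fatal error";
        case 0x10: return "user error";
        case 0x20: return "transaction error";
        default:   return "unknown class";
    }
}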

Answered by David Titarenco


One reason I've used hex myself is that it helps organizationally and visually (i.e. for the humans laying the values out in code) when it comes to flags.

i.e.

int a = 0x1;
int b = 0x2;
int c = 0x4;
int d = 0x8;
int e = 0x10;

and so on. Those can then be bitwise-OR'ed together more neatly.

For example, all of the above bitwise-OR'ed together give 0x1F, which is 11111 in binary: five separate flag bits, each set.

Then, if I want to remove a flag that's currently set, I bitwise-XOR it out.

i.e.

0x1F XOR 0x8 = 0x17 (10111 in binary)
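
In code, the idiom looks roughly like this (note that XOR toggles a bit, while AND-NOT clears it whether or not it was set):

int flags = a | b | c | d | e; // 0x1F, i.e. 11111 in binary
flags ^= d;                    // toggle d off (it was set): 0x17, i.e. 10111
flags &= ~d;                   // or clear d unconditionally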

Answered by khampson


There's no functional difference, actually. I think it comes down to consistency. For example, when you want to specify some colors, like 0xffffff, 0xab32, 0x13 and 0x1, the hex forms are consistent and easy to read.

Answered by Landys