
Why are DWORD values commonly represented in Hexadecimal?

I am trying to understand why a DWORD value is often described in Hexadecimal on MSDN.

The reason why I am analyzing this is because I am trying to understand fundamentally why all these different number data types exist. A local mentor alluded to me that the creation of DWORD and other Microsoft types had something to do with the evolution of processors. This gives meaning and context to my understanding of these data types. I would like more context and background.

Either way, I could use some explanation or some resources on how to remember the difference between DWORD, unsigned integers, bytes, bits, WORD, etc.

In summary, my questions are: 1) Why are DWORDs represented in Hex? 2) Can you provide resources on the differences between numerical data types and why they were created?

Asked Apr 13 '11 by GreeneCreations




1 Answer

This is a very interesting and tricky question.

In short, two drivers led to the existence of two competing type families, the DWORD-based and the int-based:

1) The desire for cross-platform portability on the one hand, and for strictly sized types on the other.

2) People's conservatism.

In any case, to give a fully detailed answer to your question and a good enough background on this field, we must dig into computing history, starting from the early days of computing.

First, there is the notion of a machine word. A machine word is a strictly sized chunk of binary data that is natural for a particular processor to operate on. So the size of the machine word is strongly processor dependent and, in general, equal to the size of the processor's general-purpose internal registers. Usually it can be subdivided into two equal parts that the processor can also access as independent chunks of data. For example, on 32-bit x86 processors the machine word size is 32 bits. That means all the general registers (eax, ebx, ecx, edx, esi, edi, ebp, esp and eip) have the same size, 32 bits, but many of them can also be accessed as a part of the register. For example, you can access eax as a 32-bit data chunk, ax as a 16-bit data chunk, or even al as an 8-bit data chunk. Note, though, that physically this is all one 32-bit register. You can find a very good background on this field on Wikipedia (http://en.wikipedia.org/wiki/Word_(computer_architecture)). In short, the machine word is the size of data chunk that can be used as an integer operand for a single instruction. Even today, different processor architectures have different machine word sizes.

OK, now that we have some understanding of the machine word, it is time to go back to the history of computing. The first Intel x86 processor to become popular, the 8086, had a 16-bit word size. It came to market in 1978. At that time assembler was highly popular, if not the primary programming language. As you know, assembler is just a very thin wrapper over the native processor instruction set, and because of this it is entirely hardware dependent. When Intel pushed its new 8086 processor onto the market, the first thing it needed for success was to push an assembler for the new processor onto the market too. Nobody wants a processor that nobody knows how to program. And when Intel named the data types in the 8086 assembler, it made the obvious choice and called a 16-bit data chunk a word, because the machine word of the 8086 is 16 bits. Half of a machine word was called a byte (8 bits), and two words used as one operand were called a double word (32 bits). Intel used these terms in the processor manuals and in the assembler mnemonics (db, dw and dd for static allocation of a byte, word and double word).

Years passed, and in 1985 Intel moved from the 16-bit architecture to a 32-bit one with the introduction of the 80386 processor. But by that time there was a huge number of developers accustomed to the word being a 16-bit value, and a huge amount of software had been written in the firm belief that a word is 16 bits; much of the already written code relied on that fact. So, despite the fact that the machine word size actually changed, the notation stayed the same, except that a new data type arrived in the assembler: the quad word (64 bits), because the instructions that operated on two machine words stayed the same while the machine word itself was extended. In the same way, the double quad word (128 bits) arrived with the 64-bit AMD64 architecture. As a result we have

byte    =   8 bit
word    =  16 bit
dword   =  32 bit
qword   =  64 bit
dqword  = 128 bit

The main thing to note about this type family is that it is a strongly sized family. It comes from, and is heavily used in, assembler, which requires data types with constant sizes. As the years pass one by one, the types in this family keep the same constant sizes, despite the fact that their names have long since lost their original meaning.

On the other hand, over those same years, high-level languages became more and more popular. Because those languages were developed with cross-platform applications in mind, they looked at the sizes of their internal data types from an entirely different point of view. If I understand correctly, no high-level language clearly claims that any of its internal data types has a fixed constant size that will never change in the future. Let's take C++ as an example. The C++ standard says:

"The fundamental storage unit in the C++ memory model is the byte. A byte is at least large enough to contain any member of the basic execution character set and is composed of a contiguous sequence of bits, the number of which is implementation-defined. The least significant bit is called the low-order bit; the most significant bit is called the high-order bit. The memory available to a C++ program consists of one or more sequences of contiguous bytes. Every byte has a unique address."

So we can see something surprising: in C++ even the byte doesn't have a fixed size. Although we are accustomed to thinking that a byte is 8 bits, according to C++ it can be not only 8 but also 9, 10, 11, 12 and so on bits in size (the limits in <climits> do guarantee at least 8, since CHAR_BIT >= 8).

"There are five signed integer types: “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs."

That quote makes two main claims:

1) sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)

2) Plain ints have the natural size suggested by the architecture of the execution environment. That means int should have the machine word size of the target processor architecture.

You can go through the entire C++ standard text, but you will fail to find anything like "the size of int is 4 bytes" or "the length of long is 64 bits". The size of a particular C++ integer type can change when moving from one processor architecture to another, and when moving from one compiler to another. Yet even when you write programs in C++, you will periodically face the requirement to use data types with a well-known constant size.

At least the earlier compiler developers followed those claims of the standard. But now we can see people's conservatism come into play one more time. People are used to thinking that int is 32 bits and can store values in the range from -2,147,483,648 to 2,147,483,647. Earlier, when the industry crossed the border between the 16-bit and 32-bit architectures, the second claim was strictly enforced: when you used a C++ compiler to create a 16-bit program, the compiler used an int with a 16-bit size, the "natural size" for 16-bit processors; in contrast, when you used another C++ compiler to create a 32-bit program from the same source code, the compiler used an int with a 32-bit size, the "natural size" for 32-bit processors. Nowadays, if you look, for example, at the Microsoft C++ compiler, you will find that it uses a 32-bit int regardless of the target processor architecture (32-bit or 64-bit), simply because people are used to thinking that int is 32 bits!

In summary, we can see that there are two data-type families: DWORD-based and int-based. The motivation for the second is obvious: cross-platform application development. The motivation for the first is all the cases where taking the sizes of variables into account makes sense. Among others, we can mention the following cases:

1) You need to keep some value in a predefined, well-known range, and you need to use it in a class or another data structure that will have a huge number of instances at run time. In that case, if you use int-based types to store that value, it will have the drawback of a huge memory overhead on some architectures and can potentially break the logic on others. For example, suppose you need to manipulate values in the range from 0 to 1,000,000. If you use int to store them, your program will behave correctly if int is 32-bit, will have a 4-byte memory overhead per value instance if int is 64-bit, and won't work correctly if int is 16-bit.

2) Data involved in networking. To handle your networking protocol correctly on different PCs, you need to specify it in a plain size-based format that describes all packets and headers bit by bit. Your network communication will be completely broken if on one PC your protocol header is 20 bytes long with a 32-bit int, while on another PC it is 28 bytes long with a 64-bit int.

3) Your program needs to store values used by special processor instructions, or it communicates with modules or chunks of code written in assembler.

4) You need to store values used to communicate with devices. Each device has its own specification that describes what sort of input the device requires and in what form it provides its output. If a device requires a 16-bit value as input, it must receive exactly a 16-bit value, regardless of the int size and even regardless of the machine word size of the processor on the system where the device is installed.

5) Your algorithm relies on integer overflow logic. For example, you have an array of 2^16 entries, and you want to go through it infinitely and sequentially, refreshing the entry values. If you use a 16-bit unsigned integer as the index, your program works perfectly, but the moment you move to a 32-bit index the wrap-around disappears and you get out-of-range array accesses.

Because of this, Microsoft uses both families of data types: int-based types in the cases where the actual data size doesn't matter much, and DWORD-based types in the cases where it does. Even then, Microsoft defines both families as typedefs, to make it quick and easy to adapt the virtual type system used by Microsoft to a particular processor architecture and/or compiler by assigning the correct C++ equivalent to each type.

I hope that I have covered the question about the origin of the data types and their differences well enough.

So, we can switch to the second question: why hexadecimal digits are used to denote DWORD-based values. There are actually a few reasons:

1) If we use strictly sized binary data types, it is natural enough to want to look at them in a form close to binary.

2) It is much easier to understand bit-mask values when they are encoded in a binary-aligned form. Agree that it is much easier to understand which bits are set and which are clear when the value is in this form

1100010001011001

than when it is encoded in this form

50265

3) A DWORD-based value encoded in hexadecimal has a constant length, whereas the same value encoded in decimal has a variable length. Note that even when a small number is encoded in hexadecimal, the full-width description is provided:

0x00000100

instead of

0x100

This property of the encoding is very attractive when the analysis of a huge amount of binary data is required, for example in a hex editor, or when inspecting your program's raw memory in a debugger after a breakpoint is hit. Agree that it is much more comfortable to look at neat columns of values than at a heap of weakly aligned variable-size values.

So, we have decided that we want an encoding aligned with the binary representation. We have three choices: plain binary, octal, and hexadecimal. People prefer hexadecimal because it is the shortest of the available encodings. Just compare

10010001101000101011001111000

and

0x12345678

Can you quickly find the number of the bit that is set in this value?

00000000100000000000000000000

And in this one?

0x00100000

In the second case you can quickly divide the number into four separate bytes

0x00 0x10 0x00 0x00
   3    2    1    0

in each of which the first digit denotes the 4 most significant bits and the second the 4 least significant bits. After you spend some time working with hex values, you will remember the plain binary analog of each hexadecimal digit and will translate one into the other in your head without any problems:

0 - 0000  4 - 0100  8 - 1000  C - 1100
1 - 0001  5 - 0101  9 - 1001  D - 1101
2 - 0010  6 - 0110  A - 1010  E - 1110
3 - 0011  7 - 0111  B - 1011  F - 1111

So we need only a second or two to find that bit number 20 is set!

People use hex because it is the shortest and most comfortable form of binary data encoding to understand and use.

Answered Oct 12 '22 by ZarathustrA