Could someone please explain exactly why the following typedefs/#defines have been defined? What value do they have, compared to the originals?
typedef char CHAR;
#define CONST const
typedef float FLOAT;
typedef unsigned __int64 DWORD64; //A 64-bit "double"-word?!
typedef ULONGLONG DWORDLONG; //What's the difference?
typedef ULONG_PTR DWORD_PTR; //What's the difference?
typedef long LONG_PTR; //Wasn't INT_PTR enough?
typedef signed int LONG32; //Why not "signed long"?
typedef unsigned int UINT; //Wait.. UINT is "int", "LONG" is also int?
typedef unsigned long ULONG; //ULONG is "long", but LONG32 is "int"? what?
typedef void *PVOID; //Why not just say void*?
typedef void *LPVOID; //What?!
typedef ULONG_PTR SIZE_T; //Why not just size_t?
And, best of all:
#define VOID void //Assuming this is useful (?), why not typedef?
What's the reasoning behind these? Is it some sort of abstraction I'm not understanding?
Edit:
For those people mentioning compiler cross-compatibility:
My question is not about why they didn't use unsigned long long instead of, say, DWORD64. My question is about why anyone would use DWORD64 instead of ULONG64 (or vice versa). Aren't both of those typedef'd to be 64 bits wide?
Or, as another example: even in a "hypothetical" compiler that was meant to deceive us in every respect, what would be the difference between ULONG_PTR and UINT_PTR and DWORD_PTR? Aren't those all abstract data types just meaning the same thing -- SIZE_T?
However, I am asking why they used ULONGLONG instead of long long -- is there any potential difference in meaning that is covered by neither long long nor DWORDLONG?
Most of these redundant names exist mainly for historical and compatibility reasons. Taking a few of them in turn:
typedef char CHAR;
The signedness of char can vary across platforms and compilers, so that's one reason. The original developers might have also kept this open for future changes in character encodings, but of course this is no longer relevant since we use TCHAR now for that purpose.
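To make the signedness point concrete, here is a minimal, self-contained sketch (the CHAR typedef is the one from the question; the program itself is just an illustration, not Windows code):
#include <limits.h>
#include <stdio.h>

/* Plain char may be signed or unsigned depending on the compiler and platform;
   a project-wide typedef like CHAR gives the headers a single place to pin that down. */
typedef char CHAR;

int main(void)
{
    /* CHAR_MIN is negative where plain char is signed, and 0 where it is unsigned. */
    printf("CHAR is %s on this compiler\n", (CHAR_MIN < 0) ? "signed" : "unsigned");
    return 0;
}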
typedef unsigned __int64 DWORD64; //A 64-bit "double"-word?!
During the move to 64-bit, they probably discovered that some of their DWORD arguments really needed to be 64 bits wide, and presumably named the wider type DWORD64 so that existing users of those APIs weren't confused.
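As a hedged illustration of that renaming idea (the function names below are made up, not real Win32 APIs, and the typedefs are local stand-ins so the snippet compiles on its own):
/* Local stand-ins for the Windows typedefs, so this sketch is self-contained. */
typedef unsigned long      DWORD;    /* always 32 bits with Windows compilers */
typedef unsigned long long DWORD64;  /* always 64 bits */
typedef int                BOOL;

/* Hypothetical APIs: the old entry point keeps its 32-bit DWORD mask, while the
   64-bit-aware successor takes a DWORD64, so callers can see the argument widened. */
BOOL SetWidgetMask(DWORD mask);
BOOL SetWidgetMaskEx(DWORD64 mask);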
typedef void *PVOID; //Why not just say void*?
typedef void *LPVOID; //What?!
This one dates back to the 16-bit days, when there were regular "near" pointers, which were 16-bit, and "far" pointers, which were 32-bit. The L prefix on types stands for "long" or "far", which is meaningless now, but back in those days these were probably defined like this:
typedef void near *PVOID;
typedef void far *LPVOID;
Update: As for FLOAT, UINT and ULONG, these are just examples of "more abstraction is good", in view of future changes. Keep in mind that Windows also runs on platforms other than x86 -- you could think of an architecture where floating-point numbers were represented in a non-standard format and the API functions were optimized to make use of this representation. That could then be in conflict with C's float data type.
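A sketch of what that insurance could look like in practice -- the platform macro and the fixed-point fallback are purely hypothetical, but they show how the FLOAT name would let a header change the representation without touching any API signature:
/* Hypothetical: FLOAT keeps the same name in every declaration, while the header
   decides what a FLOAT actually is on each target. */
#if defined(HYPOTHETICAL_NO_FPU_TARGET)
typedef long FLOAT;    /* e.g., a 16.16 fixed-point value on an FPU-less platform */
#else
typedef float FLOAT;   /* the usual case: IEEE-754 single precision */
#endif

/* API declarations never need to change: */
FLOAT ScaleBrightness(FLOAT level, FLOAT factor);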
When the Windows API header files were first built 25 years ago, an int was 16 bits and a long was 32 bits. The header files have evolved over time to reflect changes in compilers and in hardware.
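That is exactly the situation the fixed-width names were meant to paper over. The definitions below mirror the classic windef.h ones, though treat them as an illustration rather than a verbatim copy of any particular SDK:
/* WORD and DWORD promise a width, not a particular built-in type. When int grew
   from 16 to 32 bits, code written against WORD and DWORD kept meaning the same thing. */
typedef unsigned short WORD;   /* 16 bits */
typedef unsigned long  DWORD;  /* 32 bits with Windows compilers */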
Also, Microsoft C++ isn't the only C++ compiler that works with the Windows header files. When size_t first appeared, not every compiler supported it, but Microsoft could easily define their own SIZE_T to express the same idea.
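A minimal sketch of that trick, based on the definitions quoted in the question (the _WIN64 branching shown here is my assumption about the surrounding header, not a verbatim copy): SIZE_T is spelled out in terms the compiler already understands, so it works even where size_t itself is unavailable.
/* SIZE_T is built from ULONG_PTR, which is built from plain integer types,
   so no library or compiler support for size_t is required. */
#ifdef _WIN64
typedef unsigned __int64 ULONG_PTR;   /* pointer-sized on 64-bit Windows */
#else
typedef unsigned long    ULONG_PTR;   /* pointer-sized on 32-bit Windows */
#endif
typedef ULONG_PTR SIZE_T;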
Also, there are (or were) automated tools that convert the API header files from C/C++ to other languages. Many of those tools were originally written to work with the current (at the time) header definitions. If Microsoft were to just change the header files to streamline them as you suggest, many of those tools would stop working.
Basically, the header files map Windows types to a least common denominator so that multiple tools can work with them. It does seem to be something of a mess at times, and I suspect that if Microsoft were willing to throw out any semblance of backward compatibility, they could reduce a large part of the mess. But doing so would break a lot of tools (not to mention a lot of documentation).
So, yes, the Windows header files are sometimes a mess. That's the price we pay for evolution, backward compatibility, and the ability to work with multiple languages.
Additional info:
I'll agree that at first glance all those definitions seem crazy. But as one who has seen the Windows header files evolve over time, I understand how they came about. Most of those definitions made perfect sense when they were introduced, even if now they look crazy. As for the specific case of ULONGLONG and DWORD64, I imagine that they were added for consistency: the old header files had ULONG and DWORD, so programmers would expect the other two. As for why ULONG and DWORD were both defined when they are the same thing, I can think of several possibilities, two of which are:
- One group used ULONG and another used DWORD, and when header files were consolidated they just kept both rather than breaking code by converting to one or the other.
- Some consider ULONG more descriptive than DWORD. ULONG implies an integer type that you can do math on, whereas DWORD just implies a generic 32-bit value of some sort, typically something that is a key, handle, or other value that you wouldn't want to modify.
Your initial question was whether there was some reasoning behind the seemingly crazy definitions, or if there's an abstraction you're missing. The simple answer is that the definitions evolved, with the changes making sense at the time. There's no particular abstraction, although the intent is that if you write your code to use the types that are defined in the headers, then you should be able to port your code from 32-bit to 64-bit without trouble. That is, DWORD will be the same in both environments. But if you use DWORD for a return value when the API says that the return value is HANDLE, you're going to have trouble.
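To make that last pitfall concrete, here is a small sketch (CreateEventW and CloseHandle are real APIs; the variable names are just illustrative). A HANDLE is pointer-sized, so on 64-bit Windows squeezing it into a DWORD silently discards half the bits:
#include <windows.h>

int main(void)
{
    HANDLE h = CreateEventW(NULL, TRUE, FALSE, NULL);

    /* Wrong: HANDLE is 64 bits wide on x64, DWORD is always 32 bits,
       so this cast can truncate the handle value. */
    DWORD truncated = (DWORD)(ULONG_PTR)h;
    (void)truncated;

    /* Right: keep the value in the type the API documents. */
    HANDLE stored = h;

    if (stored != NULL)
        CloseHandle(stored);
    return 0;
}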