 

Why are the standard datatypes not used in Win32 API? [duplicate]

Tags:

c++

c

types

winapi

I have been learning Visual C++ Win32 programming for some time now. Why are datatypes like DWORD, WCHAR, UINT, etc. used instead of, say, unsigned long, char, unsigned int, and so on?

I have to remember when to use WCHAR instead of const char *, and it is really annoying me. Why aren't the standard datatypes used in the first place? Will it help if I memorize the Win32 equivalents and use them for my own variables as well?

asked Feb 20 '13 by Abdullah Leghari

3 Answers

Yes, you should use the correct data types for function arguments, or you are likely to find yourself in trouble.

And the reason these types are defined the way they are, rather than using int, char and so on, is that it removes "whatever the compiler thinks an int should be sized as" from the interface of the OS. Which is a very good thing, because whether you use compiler A, compiler B, or compiler C, they will all use the same types - only the library interface header file needs to do the right thing when defining the types.
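To illustrate, here is a simplified sketch of the kind of typedefs the Windows headers use (not the literal contents of <windows.h>, which differ in detail and vary between SDK versions):

    /* Simplified sketch of the Win32-style typedefs; the real definitions
     * in <windows.h> are more involved. Each name is pinned to whatever
     * built-in type has the right width on the current compiler. */
    typedef unsigned long  DWORD;   /* always 32 bits in the Win32 ABI     */
    typedef unsigned short WORD;    /* always 16 bits                      */
    typedef unsigned int   UINT;    /* the platform's natural unsigned int */
    typedef wchar_t        WCHAR;   /* wide character used by the "W" APIs */

If a compiler with differently sized built-in types comes along, only these typedefs need to change; the thousands of API declarations written in terms of them stay the same.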

By defining types that are not the standard types, it's easy to change int from 16 to 32 bits, for example. The first C/C++ compilers for Windows used 16-bit integers. It was only in the mid-to-late 1990s that Windows got a 32-bit API, and up until that point you were using an int that was 16 bits. Imagine that you have a well-working program that uses several hundred int variables, and all of a sudden you have to change ALL of those variables to something else... Wouldn't be very nice, right - especially as SOME of those variables DON'T need changing, because moving to a 32-bit int for some of your code won't make any difference, so there's no point in changing those bits.

It should be noted that WCHAR is NOT the same as const char - WCHAR is a "wide char", so wchar_t is the comparable type.
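For example, the wide ("W") flavour of an API call expects wide strings throughout, so buffers and literals must use WCHAR and L"..." rather than char and "..." (a minimal sketch; MessageBoxW is a real Win32 function, error handling omitted):

    #include <windows.h>

    int main(void)
    {
        const WCHAR *title = L"Demo";   /* wide string literal, not "Demo" */
        MessageBoxW(NULL, L"WCHAR is the same width as wchar_t", title, MB_OK);
        return 0;
    }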

So, basically, the "define our own types" approach is a way to guarantee that it's possible to change the underlying compiler architecture without having to change (much of) the source code. All larger projects that do machine-dependent coding do this sort of thing.

answered Oct 09 '22 by Mats Petersson

The sizes and other characteristics of the built-in types such as int and long can vary from one compiler to another, usually depending on the underlying architecture of the system on which the code is running.

For example, on the 16-bit systems on which Windows was originally implemented, int was just 16 bits. On more modern systems, int is 32 bits.

Microsoft gets to define types like DWORD so that their sizes remain the same across different versions of their compiler, or of other compilers used to compile Windows code.

And the names are intended to reflect concepts on the underlying system, as defined by Microsoft. A DWORD is a "double word": 32 bits on Windows, since a Win32 WORD is 16 bits, even though a machine word is probably 32 or even 64 bits on modern systems.

It might have been better to use the fixed-width types defined in <stdint.h>, such as uint16_t and uint32_t -- but those were only introduced to the C language by the 1999 ISO C standard (which Microsoft's compiler doesn't fully support even today).
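If you do have a C11 compiler and the Windows headers available, the correspondence is easy to check (a sanity-check sketch, assuming C11's static_assert macro from <assert.h>; nothing you would normally need to ship):

    #include <windows.h>
    #include <stdint.h>
    #include <assert.h>   /* provides the static_assert macro in C11 */

    static_assert(sizeof(WORD)  == sizeof(uint16_t), "a Win32 WORD is 16 bits");
    static_assert(sizeof(DWORD) == sizeof(uint32_t), "a Win32 DWORD is 32 bits");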

If you're writing code that interacts with the Win32 API, you should definitely use the types defined by that API. For code that doesn't interact with Win32, use whatever types you like, or whatever types are suggested by the interface you're using.
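Concretely, let the API's own signatures dictate your local variable types. A small sketch (GetCurrentDirectoryW and GetLastError are real Win32 calls; the printing is only for illustration):

    #include <windows.h>
    #include <stdio.h>
    #include <wchar.h>

    int main(void)
    {
        WCHAR buffer[MAX_PATH];

        /* The length parameter and the return value are both DWORDs,
         * so hold them in DWORD locals rather than int. */
        DWORD len = GetCurrentDirectoryW(MAX_PATH, buffer);
        if (len == 0) {
            DWORD err = GetLastError();   /* Win32 error codes are DWORDs too */
            fwprintf(stderr, L"GetCurrentDirectoryW failed: %lu\n",
                     (unsigned long)err);
            return 1;
        }
        wprintf(L"Current directory: %ls\n", buffer);
        return 0;
    }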

answered Oct 09 '22 by Keith Thompson


I think that it is a historical accident.

My theory is that the original Windows developers knew that the standard C type sizes depend on the compiler; that is, one compiler may have a 16-bit int and another a 32-bit int. So they decided to make the Windows API portable between different compilers using a series of typedefs: a DWORD is a 32-bit unsigned integer, no matter what compiler/architecture you are using. Naturally, nowadays you would use uint32_t from <stdint.h>, but that wasn't available at the time.

Then, with the UNICODE thing, they got the TCHAR vs. CHAR vs. WCHAR issue, but that's another story.
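Roughly, the TCHAR machinery works like this (a simplified sketch of what the headers do, not their literal contents): the character type and the string-literal macro both flip depending on whether UNICODE is defined at compile time.

    /* Simplified sketch of the TCHAR mechanism; the real headers are messier. */
    #ifdef UNICODE
    typedef wchar_t TCHAR;
    #define TEXT(s) L##s      /* TEXT("hi") becomes L"hi" */
    #else
    typedef char TCHAR;
    #define TEXT(s) s         /* TEXT("hi") stays "hi"    */
    #endif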

And then it grew out of control, and you get such nice things as typedef void VOID, *PVOID; which are utter nonsense.

answered Oct 09 '22 by rodrigo