What is the purpose of API-specific typedefs such as GLsizei, GLint, GLvoid?
I see this everywhere in C and C++ code. Basic types are often typedef'd with the library's prefix or suffix. What's the reasoning behind this? Is it good practice? Should my programs be doing something similar themselves?
At first glance it seems to make the code a little less readable: you have to take a moment to translate GLint into int in your head, and that's an easy example.
Something like UINT makes more sense to me; at least that's shortening unsigned int into four letters.
It's not about shortening the names, but about portability. Different platforms need to typedef these things differently. In standard C, long may be 32 or 64 bits depending on your compiler and target, so it can't be safely assumed to be a certain size. A library author will therefore typedef their own type, guaranteeing a certain size on each platform they know about.
E.g.
#ifdef _WIN32
typedef __int64 INT64;  // long will not be 64-bit on Windows/VC; use MSVC's __int64.
#elif defined(__GNUC__)
typedef long INT64;     // gcc typically uses 64-bit longs on 64-bit targets.
#else
// ... other platforms ...
#endif
And if compilers change type properties in future versions, the types can be edited in one place.
In the past you also had the typical case where int might be 16 or 32 bits in size, so you couldn't simply use the raw int type in code where you needed a DWORD-sized argument. Hence why you have things like LPARAM and WPARAM.
It's also used as a form of abstraction, which is why you see typedefs like
typedef int Handle;
Because while it's an int now, the library author reserves the ability to change it later down the track to anything else, say a void *, or any other type they deem necessary. But the client code doesn't need to know it's an int specifically, since that's just what it currently happens to be. All the client needs to know is to pass it along to functions accepting a Handle type.
Typedefs also allow configuration at compile time. E.g. some libraries may have a Real type for real numbers. It could be defined as
#ifdef USE_DOUBLE_PREC
typedef double Real;
#else
typedef float Real;
#endif
And the user of the library can optionally set /DUSE_DOUBLE_PREC (or -DUSE_DOUBLE_PREC with gcc/clang) when compiling to get double-precision support. The important thing is that no library code needs to change for this to work, since it's been abstracted.
For the most part, when a library defines basic types with no guaranteed properties beyond similarly named types in the standard (think INT, GLint, gint, LPSTR, u32, u_int, etc.), the purpose is either:
1. to tie your code to the library, making it harder to reuse or port without it, or
2. habit and imitation, wrapping everything in library-specific names with no actual portability need behind it.
Based on one of my favorite principles "Never attribute to malice what can adequately be explained by stupidity", you might go with #2, but it's really up to you.
Personally, whenever coding to such an API, I throw out the library-specific types and use the correct natural types (int, char *, uint32_t, etc.) in their place. Then it's easy to adapt my code for use without the library should I ever need to, and the code is more readable to people not familiar with the library.
It gives the ability to change the typedef in one place rather than searching-and-replacing all over the codebase if the need somehow arose to change the underlying type. However, I also find it more "noise" than anything and have rarely seen it ever needed in a real-life scenario.
The only place I've seen a decent use is for floats, if you happen to be working in games and might need your game ported from/to the Nintendo DS, since the DS natively works with fixed-point numbers. In that case, you have a special typedef for the float so that it really is typedef'd to a float on most platforms and to a special fixed-point class on the DS.