I have started using the OpenCL library lately and I've noticed that it uses its own integer types, like cl_int and cl_uint, instead of int and unsigned int.
Why is that? Why doesn't it use the types that the language provides by default? Is it a good practice, or are there practical reasons for it (e.g., more readable code)?
The reason this has been done in the past is portability. C and C++ do not make specific guarantees about the sizes of int, long, and short, while library designers often require them.
A common solution is to define their own aliases for the data types and to change the definitions based on the specific platform, making sure that a type of the appropriate size gets used.
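As a minimal sketch of that approach (the alias names here are hypothetical, not taken from the actual OpenCL headers), such definitions might look like this:

/* Hypothetical platform-switched aliases; real headers cover many
   more compilers and architectures. */
#if defined(_MSC_VER)
typedef __int32          lib_int32;   /* MSVC-specific 32-bit type */
typedef unsigned __int32 lib_uint32;
#else
typedef int              lib_int32;   /* assumes int is 32 bits here */
typedef unsigned int     lib_uint32;
#endif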
This problem originated in C and has been addressed by the introduction of the stdint.h header file (available as cstdint in C++). Including this header lets you use types such as int32_t, int16_t, etc. However, libraries developed prior to the introduction of stdint.h, and libraries that are required to compile on platforms lacking this header, still use the old workaround.
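For instance, with the standard header the fixed-width types can be used directly:

#include <cstdint>

std::int32_t counter = 0;   // exactly 32 bits wherever the type is provided
std::int16_t flags   = 0;   // exactly 16 bits
std::uint64_t total  = 0;   // exactly 64 bits, unsigned

(The exact-width types are technically optional in the standard, but they exist on every mainstream platform.)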
Often it is done so that different configurations can be set up and propagated throughout the code.
For example, I know that Bullet 3 defines btScalar, which (among other things) is basically:
#if defined(BT_USE_DOUBLE_PRECISION)
typedef double btScalar;
#else
typedef float btScalar;
#endif
So throughout the code you could use btScalar instead of float or double, and all instances of it could be switched by defining or not defining that macro.
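For illustration, a function written against the alias compiles unchanged under either precision (this helper is made up for the example, not part of Bullet's API):

// Assumes the btScalar typedef shown above is in scope.
// Works whether btScalar is float or double.
btScalar lengthSquared(btScalar x, btScalar y, btScalar z)
{
    return x * x + y * y + z * z;
}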
There are other analogs of this that handle the width of integer types, different character sets, and other platform-dependent scenarios.
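A sketch of the character-set analog, similar in spirit to the Windows TCHAR mechanism (the alias name here is made up):

#if defined(UNICODE)
typedef wchar_t lib_char;   /* wide characters when UNICODE is defined */
#else
typedef char    lib_char;   /* narrow characters otherwise */
#endif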