
Why do C++ libraries often define their own primitive types?

I have started using the OpenCL library lately, and I've noticed that it uses its own integer types, like cl_int and cl_uint, instead of int and unsigned int.

Why is that? Why don't they use the types the language provides by default? Is it good practice, or are there practical reasons for it (e.g. more readable code)?

Addy asked Jun 12 '15 11:06

2 Answers

The reason this has been done in the past is portability. C and C++ do not guarantee exact sizes for int, long, and short, while library designers often require them.

A common solution is for the library to define its own aliases for data types and to change those definitions per platform, making sure a type of the appropriate size is used on each one.

This problem originated in C and was addressed by the introduction of the stdint.h header (available as cstdint in C++). Including this header gives you fixed-width types such as int32_t and int16_t. However, libraries developed before stdint.h existed, or required to compile on platforms lacking it, still use the old workaround.
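As a rough sketch of that workaround, here is what such an alias layer can look like. This is not OpenCL's actual header; the names mylib_int/mylib_uint and the MYLIB_HAVE_CSTDINT macro are hypothetical, chosen only to illustrate the pattern:

```cpp
#include <cstdint>

// Modern path: build the library's aliases on the fixed-width types.
#if defined(MYLIB_HAVE_CSTDINT)
typedef std::int32_t  mylib_int;
typedef std::uint32_t mylib_uint;
#else
// Legacy fallback: pick built-in types known to be 32 bits wide
// on the platforms this branch is compiled for.
typedef int          mylib_int;
typedef unsigned int mylib_uint;
#endif

// Catch a wrong per-platform choice at compile time rather than at run time.
static_assert(sizeof(mylib_int) == 4, "mylib_int must be 32 bits");
static_assert(sizeof(mylib_uint) == 4, "mylib_uint must be 32 bits");
```

Client code then uses mylib_int everywhere, and only this one header has to know which underlying type is correct on each platform.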

Sergey Kalinichenko answered Sep 30 '22 04:09

Oftentimes it is so that different configurations can be set up and propagated throughout the code.

For example, I know that Bullet 3 defines btScalar, which (among other things) boils down to:

#if defined(BT_USE_DOUBLE_PRECISION)
typedef double btScalar;
#else
typedef float btScalar;
#endif

So throughout the code you use btScalar instead of float or double, and every instance of it can be switched at once by defining or undefining that macro.
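To see how that plays out, here is a self-contained sketch (it reproduces the Bullet-style switch locally rather than including Bullet's actual headers):

```cpp
#include <cmath>

// Local reproduction of the Bullet-style precision switch shown above.
#if defined(BT_USE_DOUBLE_PRECISION)
typedef double btScalar;
#else
typedef float btScalar;
#endif

// Math written against btScalar changes precision with a single macro,
// with no edits to the function bodies themselves.
btScalar length2d(btScalar x, btScalar y) {
    return std::sqrt(x * x + y * y);
}
```

Rebuilding with -DBT_USE_DOUBLE_PRECISION switches every such function to double precision in one stroke.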

There are other analogs of this that handle the width of integer types, different character sets, and other platform-dependent scenarios.

Cory Kramer answered Sep 30 '22 05:09