 

Motivation for using size_t, uint32, uint64, etc.

Tags:

c

When I read some code, I see a bunch of different integer types being used, such as size_t, uint32, uint64, etc. What is the motivation or purpose of doing this? Why not just use int? Is it related to cross-platform portability, or to something low-level?

Sometimes the code makes sense to me because they just want a 32-bit int or something. But what is size_t? Please help me make this clear.

— asked by Vent Nos, Aug 26 '11


3 Answers

These are for platform-independence.

size_t is, by definition, the type returned by sizeof. It is large enough to represent the size of the largest possible object on the target system.
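For instance, here is a minimal sketch of how that plays out in practice (the printed value depends on the platform, which is the point):

    #include <stdio.h>
    #include <stddef.h>   /* size_t */

    int main(void)
    {
        int arr[10];
        size_t n = sizeof arr;                    /* sizeof yields a size_t */
        printf("arr occupies %zu bytes\n", n);    /* %zu is the printf specifier for size_t */
        return 0;
    }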

Not so many years ago, 32 bits would have been enough for any platform. 64 bits is enough today. But who knows how many bits will be needed 5, 10, or 50 years from now?

By writing your code not to care -- i.e., always use size_t when you mean "size of an object" -- you can write code that will actually compile and run 5, 10, or 50 years from now. Or at least have a fighting chance.

Use the types to say what you mean. If for some reason you require a specific number of bits (probably only when dealing with an externally-defined format), use a size-specific type. If you want something that is "the natural word size of the machine" -- i.e., fast -- use int.

If you are dealing with a programmatic interface like sizeof or strlen, use the data type appropriate for that interface, like size_t.
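For example, matching the index type to strlen's return type keeps the loop correct no matter how wide size_t is (a minimal sketch):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *s = "hello";
        size_t len = strlen(s);            /* strlen returns size_t */
        for (size_t i = 0; i < len; ++i)   /* index type matches the interface */
            putchar(s[i]);
        putchar('\n');
        return 0;
    }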

And never try to assign one type to another unless it is large enough to hold the value by definition.
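To illustrate the pitfall being warned about (a sketch; the widths mentioned are typical, not guaranteed):

    #include <stddef.h>
    #include <stdint.h>

    void example(size_t n)
    {
        int narrow = (int)n;   /* risky: on common 64-bit platforms size_t is 64 bits
                                  and int is 32, so large values silently truncate */
        uint64_t wide = n;     /* safe on those platforms: uint64_t can hold the value */
        (void)narrow;
        (void)wide;
    }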

— answered by Nemo


The motivation for using them is that you can't rely on int, short, or long to have any particular size -- a mistake made by too many programmers far too many times in the past. If you look not too far back in history, there was a transition from 16-bit to 32-bit processors, which broke lots of code because people had wrongly relied on int being 16 bits. The same mistake was made again when people relied on int being 32 bits, and some still make it to this day.

Not to mention the terms int, short, and long have been truly nuked by language designers who all decided to make them mean something different. A Java programmer reading some C will naively expect long to mean 64 bits. These terms are truly meaningless -- they don't specify anything about a type, and I facepalm every time I see a new language released that still uses them.

The standard fixed-width integer types were a necessity so you can actually say which type you mean. They should've deprecated int, short, and long decades ago.
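For reference, the fixed-width types referred to here come from C99's <stdint.h>; a quick sketch of how they pin the width down regardless of platform:

    #include <stdint.h>
    #include <inttypes.h>   /* PRId32, PRIu64 format macros */
    #include <stdio.h>

    int main(void)
    {
        int32_t  a = 2000000000;    /* exactly 32 bits, signed, on any platform that provides it */
        uint64_t b = 5000000000u;   /* exactly 64 bits, unsigned -- too big for 32 bits */
        printf("%" PRId32 " %" PRIu64 "\n", a, b);
        return 0;
    }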

— answered by Mark H


For info on size_t, see the Stack Overflow question: What is size_t in C?

You're right about uint32 and uint64: they're just being specific about the number of bits they want, and telling the compiler to interpret the values as unsigned.
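Note that uint32 and uint64 are not standard C names themselves; codebases typically define them as shorthand aliases for the C99 <stdint.h> types, roughly like this (a sketch, not any particular project's header):

    #include <stdint.h>

    /* hypothetical project-local shorthands for the standard fixed-width types */
    typedef uint32_t uint32;   /* exactly 32 bits, unsigned */
    typedef uint64_t uint64;   /* exactly 64 bits, unsigned */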

— answered by sblom