For example, consider SendInput. The signature looks like this:
UINT WINAPI SendInput(
_In_ UINT nInputs,
_In_ LPINPUT pInputs,
_In_ int cbSize
);
The documentation says:
cbSize [in] (Type: int): The size, in bytes, of an INPUT structure. If cbSize is not the size of an INPUT structure, the function fails.
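For context, a typical call passes sizeof(INPUT) as that last argument. A minimal sketch (synthesizing a keyboard press) might look like this:

#include <windows.h>

int main(void)
{
    /* Synthesize a press and release of the 'A' key. */
    INPUT inputs[2] = { 0 };

    inputs[0].type = INPUT_KEYBOARD;
    inputs[0].ki.wVk = 'A';

    inputs[1].type = INPUT_KEYBOARD;
    inputs[1].ki.wVk = 'A';
    inputs[1].ki.dwFlags = KEYEVENTF_KEYUP;

    /* The caller tells the API how big it thinks an INPUT structure is. */
    UINT sent = SendInput(2, inputs, sizeof(INPUT));
    return sent == 2 ? 0 : 1;
}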
Since the function already uses the INPUT structure (and probably does something with its various fields), shouldn't it be aware of the structure's size beforehand?
The only reason I can imagine is that this is some kind of odd backwards-compatibility trick to make older library binaries compatible with newer header files that may have introduced new fields at the end of the struct.
It's a simple form of versioning for the structures.
A later version of the API could add more fields to the end of the structure, which would change its size. Programs written against the older version won't set the newer fields, and the cbSize they pass will reflect the older, smaller size. The API can check cbSize to tell which version of the structure it actually received and, if necessary, supply default values for the new fields, as in the sketch below.
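Here is a minimal sketch of how an implementation might do that size check. The structure and function names (WIDGETPARAMS, DoWidgetThing) are made up for illustration and are not part of any real API:

#include <windows.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical structure: version 1 ended at 'flags'; a later SDK added 'timeout'. */
typedef struct WIDGETPARAMS {
    DWORD flags;
    DWORD timeout;   /* new field, unknown to older callers */
} WIDGETPARAMS;

BOOL DoWidgetThing(const WIDGETPARAMS *params, int cbSize)
{
    WIDGETPARAMS local = { 0 };

    /* Reject sizes that match no known version of the structure. */
    if (cbSize != (int)offsetof(WIDGETPARAMS, timeout) &&
        cbSize != (int)sizeof(WIDGETPARAMS)) {
        return FALSE;
    }

    /* Copy only the bytes the caller's version actually contains... */
    memcpy(&local, params, (size_t)cbSize);

    /* ...and supply defaults for the fields that version didn't have. */
    if (cbSize < (int)sizeof(WIDGETPARAMS)) {
        local.timeout = 1000;
    }

    /* From here on, operate only on 'local'. */
    return TRUE;
}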
The alternative would be to define a new structure that has a lot in common with the old structure, and then make a new API that works a lot like the old one. That's a lot of code duplication, and it makes it harder for older programs to be recompiled using a newer SDK and continue to work.
Using a size field eliminates the need for a bunch of duplicate code. It was a common way to do things in C, but it's less type-safe.
But it's also a little dangerous. If the caller doesn't set the size field correctly, or if the API implementation isn't careful, this scheme can lead to access violations, reads of uninitialized fields, or writes past the end of the structure.
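For instance, here is a hypothetical buggy call: passing the size of the pointer instead of the size of the structure is an easy mistake that the compiler won't catch.

#include <windows.h>

void Example(void)
{
    INPUT input = { 0 };
    input.type = INPUT_KEYBOARD;
    input.ki.wVk = VK_SPACE;

    INPUT *pInput = &input;

    /* Bug: sizeof(pInput) is the size of a pointer (4 or 8 bytes), not of
       INPUT. SendInput notices the mismatch and fails, but an API that
       trusted the value blindly could read or write out of bounds. */
    SendInput(1, pInput, sizeof(pInput));

    /* Correct: the size of the structure itself. */
    SendInput(1, pInput, sizeof(*pInput));
}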