Opposite behavior of Marshal.SizeOf and sizeof operator for boolean and char data types in C#

I was comparing the Marshal.SizeOf API with the sizeof operator in C#. Their outputs for the char and bool data types are a little surprising. Here are the results:

For bool:

Marshal.SizeOf = 4

sizeof = 1

For char:

Marshal.SizeOf = 1

sizeof = 2

On this link from MSDN I found the following text:

For all other types, including structs, the sizeof operator can be used only in unsafe code blocks. Although you can use the Marshal.SizeOf method, the value returned by this method is not always the same as the value returned by sizeof. Marshal.SizeOf returns the size after the type has been marshaled, whereas sizeof returns the size as it has been allocated by the common language runtime, including any padding.

I do not know much about the technicalities of marshaling, but I understand it involves some run-time rules that can change a type's representation. Going by that logic, the size of bool grows from 1 to 4. But for char it is just the reverse (from 2 to 1), which is baffling to me; I expected char to grow the same way bool did. Can someone help me understand these conflicting behaviors?
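A minimal console sketch that reproduces those numbers:

using System;
using System.Runtime.InteropServices;

class SizeComparison
{
    static void Main()
    {
        // sizeof reports the size the CLR uses for the type itself.
        Console.WriteLine(sizeof(bool));                    // 1
        Console.WriteLine(sizeof(char));                    // 2

        // Marshal.SizeOf reports the size after marshaling to native code.
        Console.WriteLine(Marshal.SizeOf(typeof(bool)));    // 4
        Console.WriteLine(Marshal.SizeOf(typeof(char)));    // 1
    }
}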

Asked by RBT on Mar 11 '23
1 Answer

Sorry, you really do have to consider the technicalities to make sense of these choices. The target language for pinvoke is the C language, a very old language by modern standards with a lot of history, used on a lot of different machine architectures. It makes very few assumptions about the size of a type; the notion of a fixed-size byte does not exist. That made the language very easy to port to the kind of machines that were common back when C was invented, and to the unusual architectures used in super-computers and digital signal processors.

C did not originally have a bool type; logical expressions instead use int, where a value of 0 represents false and any other value represents true. That convention carried forward into the winapi, which uses a BOOL type, an alias for int. So 4 was the logical choice. But it is not a universal choice and you have to watch out: many C++ implementations use a single byte, and COM Automation chose two bytes (VARIANT_BOOL).
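If the native side expects one of the other sizes, you can override the default with a [MarshalAs] attribute. A rough sketch (the struct names here are just for illustration) showing all three sizes:

using System;
using System.Runtime.InteropServices;

struct Win32Bool     // default: marshals as the 4-byte Win32 BOOL
{
    public bool Value;
}

struct OneByteBool   // matches a typical single-byte C++ bool
{
    [MarshalAs(UnmanagedType.I1)] public bool Value;
}

struct ComBool       // matches COM Automation's 2-byte VARIANT_BOOL
{
    [MarshalAs(UnmanagedType.VariantBool)] public bool Value;
}

class BoolSizes
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(Win32Bool)));    // 4
        Console.WriteLine(Marshal.SizeOf(typeof(OneByteBool)));  // 1
        Console.WriteLine(Marshal.SizeOf(typeof(ComBool)));      // 2
    }
}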

C does have a char type; the only guarantee is that it has at least 8 bits. Whether it is signed or unsigned is unspecified; most implementations today use signed. Support for an 8-bit byte is universal today on the kind of architectures that can execute managed code, so a native char is always 8 bits in practice. So 1 was the logical choice.

That doesn't make you happy; nobody is happy about it, because you can't support text written in an arbitrary language with an 8-bit character type. Unicode came about to solve the disaster of the many possible 8-bit encodings that were in use, but it did not have much of an effect on the C and C++ languages. Their committees did add wchar_t (wide character) to the standard, but in keeping with old practices they did not nail down its size. Which made it useless, forcing C++ to later add char16_t and char32_t. It is however always 16 bits in compilers that target Windows, since that is the operating system's choice for characters (aka WCHAR). It is not on the various Unix flavors; they favor utf-8.

That works well in C# too; you are not stuck with 1-byte characters. Every single type in the .NET Framework has an implicit [StructLayout] attribute with a CharSet property. The default is CharSet.Ansi, matching the C language default. You can however easily apply your own and pick CharSet.Unicode. You then get two bytes per character, using the utf-16 encoding, and the string is copied as-is, since .NET also uses utf-16. Making sure that the native code expects strings in that encoding is however up to you.
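A rough sketch of that effect (the type names are made up for illustration): the same single-char struct marshals to one byte under the default and to two bytes once CharSet.Unicode is applied.

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]                             // implicit CharSet.Ansi
struct AnsiChar
{
    public char Value;
}

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]  // utf-16, matches .NET's own encoding
struct Utf16Char
{
    public char Value;
}

class CharSizes
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(AnsiChar)));   // 1
        Console.WriteLine(Marshal.SizeOf(typeof(Utf16Char)));  // 2
    }
}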

Answered by Hans Passant on Apr 09 '23