 

How does a value type actually work in .NET?

Tags:

.net

Somewhat academic question, but: How do value types like Int actually work?

I've used Reflector on mscorlib to find out how System.Int32 is implemented, and it's just a struct that inherits from System.ValueType. I was looking for something along the lines of an array of bits holding the value, but all I found was a field declared as int - which seems like a circular reference?

I mean, I can write "int i = 14;", and the number 14 has to be stored somewhere somehow, yet I couldn't find the "array of 32 bits", a pointer, or anything like that.

Is this some magic that the compiler does, and are these magic types part of the specification? (Similar to how System.Attribute or System.Exception are "special" types)

Edit: If I declare my own struct, I add fields to it. Those fields are of a built-in type, for example int, so the CLR knows that I hold an int. But how does it know that an int is 32 bits wide and signed? Is it simply that the specification defines certain base types and thereby makes them "magic", or is there a technical mechanism? Hypothetical example: if I wanted to declare an Int36, that is, an integer with 36 bits, could I create a type that works exactly like an Int32 (apart from the 4 extra bits, of course) by specifying "okay, set aside 36 bits", or are the built-in primitives set in stone, so that I would have to work around them somehow (e.g. by using an Int64 and code that only uses the lower 36 bits)?
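To make that hypothetical workaround concrete, something like the following made-up wrapper struct is roughly what I have in mind (the name Int36 and everything about it are invented purely for illustration; it just masks an ordinary Int64 down to 36 bits):

    // Hypothetical 36-bit signed integer emulated on top of a 64-bit long.
    // Only the low 36 bits are kept; bit 35 is treated as the sign bit.
    public readonly struct Int36
    {
        private const int Bits = 36;
        private const long Mask = (1L << Bits) - 1;     // low 36 bits
        private const long SignBit = 1L << (Bits - 1);  // bit 35

        private readonly long _value;                   // backing storage is a real Int64

        public Int36(long value)
        {
            long truncated = value & Mask;
            // Sign-extend when bit 35 is set so negative values round-trip.
            _value = (truncated & SignBit) != 0 ? truncated | ~Mask : truncated;
        }

        public long ToInt64() => _value;

        // Arithmetic wraps modulo 2^36, mimicking how Int32 wraps modulo 2^32.
        public static Int36 operator +(Int36 a, Int36 b) => new Int36(a._value + b._value);

        public override string ToString() => _value.ToString();
    }

In other words, the CLR would still only see an ordinary Int64 field; the 36-bit behaviour would live entirely in my own code rather than in some "set aside 36 bits" declaration.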

As I said, this is all very academic and hypothetical, but I've always wondered about it.

asked Dec 21 '09 by Michael Stum

2 Answers

Certain primitive types like integers are part of the CLI specification. For example, there are specific IL instructions like ldc.i4 for loading values of these types, and IL instructions such as add have specific knowledge of these types. (Your example of int i = 14 would get compiled to ldc.i4 14, with the 14 encoded as the instruction's operand in the compiled MSIL.)
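For instance, a simple assignment like the one in the question compiles down to something along these lines (the IL is shown in comments and is only approximate; the exact opcodes depend on the compiler and its settings):

    using System;

    class LiteralDemo
    {
        static void Main()
        {
            int i = 14;

            // Roughly the IL emitted for the line above:
            //
            //   ldc.i4.s  14    // push the constant 14 (short form of ldc.i4)
            //   stloc.0         // store it in local variable 0 (i)
            //
            // The value 14 lives in the instruction stream itself; there is no
            // hidden "array of bits" field inside System.Int32 holding it.
            Console.WriteLine(i);
        }
    }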

For more info, see Partition IIa of the CLI spec, Section 7.2, "Built-in Types." (Can't find a link to the particular section, sorry.) The built-in types are: bool, char, object, string, float32, float64, int[8|16|32|64], unsigned int[8|16|32|64], native int (IntPtr), native unsigned int, and typedref. The spec notes that they "have corresponding value types defined in the Base Class Library", which makes me think that Int32 is actually kind of a metadata wrapper around the "real" int32, which lives down at the VES level.
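As a quick illustration of that correspondence, the C# keyword int is simply an alias for the BCL value type System.Int32, and its width and signedness are fixed by the spec rather than by anything visible in the struct's source (a small sketch, nothing beyond standard C# assumed):

    using System;

    class AliasDemo
    {
        static void Main()
        {
            // 'int' in C# is just an alias for System.Int32, the BCL counterpart
            // of the built-in CLI type int32.
            Console.WriteLine(typeof(int) == typeof(Int32));  // True
            Console.WriteLine(typeof(int).FullName);          // System.Int32

            // The size (32 bits) and signedness come from the specification.
            Console.WriteLine(sizeof(int));                    // 4 (bytes)
            Console.WriteLine(int.MinValue);                   // -2147483648
        }
    }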

Other value types, like System.Decimal, System.Drawing.Point or any structs you define in your own code, are non-magical.

answered Sep 27 '22 by itowlson


Int32 is a compiler-intrinsic type, meaning the compiler has special logic to deal with it. The implementation of Int32 you can see in the framework source therefore tells you very little about how the value is actually stored.

It's worth noting that Int16, Int32, UInt16, UInt32, Single, Double, etc. roughly correspond to the types that are native to the x86 instruction set (and others). Creating an Int36 type would likewise require building on a native type as a foundation, even in pure assembly code.

answered Sep 27 '22 by joemoe