
native float type usage in CLR

Tags:

.net

cil

I have found that the CIL compiler allows the type native float. However, the CLR doesn't allow it. Does it have any uses? What is its size? Is there a corresponding .NET type? I tried to implement it as a pseudo-primitive type:

.class public sequential ansi serializable sealed beforefieldinit NativeFloat
  extends [mscorlib]System.ValueType
{
  .field assembly native float m_value
}

However, this type isn't supported by CLR. Thank you for your help.

Edit: If you're interested, its CorElementType is 26 (0x1A, ELEMENT_TYPE_R).

IS4 asked Dec 19 '12

2 Answers

From the CIL ECMA spec, I.12.1.1 Native size: native int, native unsigned int, O and &:

The native-size types (native int, native unsigned int, O, and &) are a mechanism in the CLI for deferring the choice of a value’s size. These data types exist as CIL types; however, the CLI maps each to the native size for a specific processor. (For example, data type I would map to int32 on a Pentium processor, but to int64 on an IA64 processor.) So, the choice of size is deferred until JIT compilation or runtime, when the CLI has been initialized and the architecture is known. This implies that field and stack frame offsets are also not known at compile time.

Now, having said that, native float (as opposed to native int) is not mentioned a single time in the ECMA spec. The only evidence I can find of it is in some open source CIL assemblers, where they throw an exception stating that they cannot generate an opcode for native float.

If Microsoft's CIL compiler does in fact accept this type, I would imagine that this was a feature Microsoft intended to implement but never ended up putting into MSIL (Microsoft's original name for CIL). Additionally, if the assembler does in fact produce an opcode instead of an error message, it is conceivable (though again, this is speculation) that there may be variants of Microsoft's CLR (perhaps the .NET Micro Framework or a particular version of Silverlight) that support the opcode.

Also note that in the spec above, the CLI is mentioned. The CLR is merely Microsoft's implementation of the CLI.

The ECMA spec does mention a native floating point type, but it isn't native float:

F, a floating point value (float32, float64, or other representation supported by the underlying hardware)

David Pfeffer answered Oct 04 '22


The CLI spec, ECMA-335, defines three floating point types: float32, float64, and F. The first two are the nominal types; the third is the representational type, the "native float" in your IL.

Section I.12.1.3, "Handling of floating-point data types" gives the rationale:

Storage locations for floating-point numbers (statics, array elements, and fields of classes) are of fixed size. The supported storage sizes are float32 and float64. Everywhere else (on the evaluation stack, as arguments, as return types, and as local variables) floating-point numbers are represented using an internal floating-point type. In each such instance, the nominal type of the variable or expression is either float32 or float64, but its value can be represented internally with additional range and/or precision. The size of the internal floating-point representation is implementation-dependent, can vary, and shall have precision at least as great as that of the variable or expression being represented.

This doesn't actually exist in current jitter implementations; arguments and local variables are in fact either float32 or float64. But there is some precedent for it, and it is the probable reason they considered this in the first place: the internal FPU registers in Intel processors are 80 bits wide. That was a design decision made many moons ago, when Intel designed the 8087 co-processor.

The idea sounds very good on paper: it allows intermediate calculation results to be stored with more precision, so the end result of a calculation can be more accurate. It was however without a doubt Intel's billion dollar mistake; the FPU is impossible to optimize for while still producing consistent floating point results. At issue is that the internal FPU registers are a limited resource, there are only 8 of them, and they are organized as a stack, which is very awkward to deal with. If the calculation gets involved then inevitably an intermediate result needs to be spilled to memory, truncating the 80-bit value to 64 bits. That makes small changes to the code produce large differences in the calculation result if the calculation is apt to lose a lot of significant digits. Or, in general, it upsets a programmer because the 16th digit isn't the same.

Well, big mistake and the source of an enormous number of questions at SO. The idea was scrapped when Intel implemented the next generation of floating point hardware: scalar math in the XMM and YMM registers operates on true 64-bit doubles, with no extended intermediate precision. True registers, not a stack. The x64 jitter uses them, making your program produce different results when it runs in 64-bit mode than when it runs in 32-bit mode. It will take another ten years before that stops hurting.

Hans Passant answered Oct 04 '22