GCHandle.Alloc refuses to pin an array of structs containing the "decimal" data type, but works fine with "double". What's the reason for that, and can I work around it somehow?
I know I can use unsafe/fixed instead to get a pointer to the array, but that won't work with generics. :-(
Full sample code to demonstrate the problem. The first Alloc works, but the second fails with:
"Object contains non-primitive or non-blittable data."
public struct X1
{
    public double X;
}

public struct X2
{
    public decimal X;
}
Now try this:
var x1 = new[] {new X1 {X = 42}};
var handle1 = GCHandle.Alloc(x1, GCHandleType.Pinned); // Works
var x2 = new[] { new X2 { X = 42 } };
var handle2 = GCHandle.Alloc(x2, GCHandleType.Pinned); // Fails
The runtime makes a hard assumption that you are going to call handle.AddrOfPinnedObject(). Surely you are, there is very little reason to allocate a pinning handle otherwise. That returns an unmanaged pointer, an IntPtr in C#, distinct from a managed pointer, the kind you get with the fixed keyword.
It furthermore assumes that you are going to pass this pointer to code that cares about the value's size and representation. Since the runtime has no way to inject a conversion, that code is going to party on the IntPtr directly. This requires the value type to be blittable, a geeky word that means the bytes in the value can simply be interpreted or copied directly, without any conversion, and with decent odds that the code using the IntPtr will recognize the value correctly.
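For context, a minimal sketch of the pattern the runtime is protecting; the type and variable names here are mine, not part of the original sample:

using System;
using System.Runtime.InteropServices;

public struct Blittable   // contains only a double, so pinning it is fine
{
    public double X;
}

class PinDemo
{
    static void Main()
    {
        var data = new[] { new Blittable { X = 42 } };
        // Pin the array so the garbage collector cannot move it.
        var handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            // The raw, stable address of the first element. This is what
            // gets handed to native code; no conversion is injected.
            IntPtr address = handle.AddrOfPinnedObject();
            Console.WriteLine(address);
        }
        finally
        {
            handle.Free();   // always release the pin
        }
    }
}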
That's a problem with some .NET types; the bool type is a notorious example. Just try this same code with bool instead of decimal and notice that you'll get the exact same exception. System.Boolean is a very difficult interop type; there is no dominant standard that describes what it should look like. It is 4 bytes in the C language and the Winapi, 2 bytes in COM Automation, and 1 byte in C++ and several other languages. In other words, the odds that the "other code" will manage to interpret the 1-byte .NET value are rather slim. The unpredictable size is especially nasty, since it throws off the offsets of all subsequent members.
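If you want to see it for yourself, a small sketch (the X3 name is mine) that reproduces the same exception with bool:

using System;
using System.Runtime.InteropServices;

public struct X3
{
    public bool X;   // bool is not blittable either
}

class BoolDemo
{
    static void Main()
    {
        var x3 = new[] { new X3 { X = true } };
        try
        {
            GCHandle.Alloc(x3, GCHandleType.Pinned);
        }
        catch (ArgumentException ex)
        {
            // Prints: Object contains non-primitive or non-blittable data.
            Console.WriteLine(ex.Message);
        }
    }
}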
Much the same goes for System.Decimal: there is no widely adopted standard that nails down its internal format. Many languages have no support for it at all, notably C and C++; if you write code in such a language then you need to use a library. That library might use IEEE 754-2008 decimals, but that standard is a johnny-come-lately and suffers from the "too many standards" problem. At the time the CLI spec was written, the IEEE 854-1987 standard existed but was widely ignored. It is still a problem today; very few processor designs support decimals, PowerPC is the only one I know of.
Long story short, you need to create your own blittable type to store decimals. The .NET designers decided to use the COM Automation Currency type to implement System.Decimal, the dominant implementation back then thanks to Visual Basic. That is extremely unlikely to change, since way too much code takes a dependency on the internal format, which makes this version the most likely to be compatible and fast:
public struct X2 {
    // The decimal stored as an OLE Automation Currency value, a blittable long.
    private long nativeDecimal;
    public decimal X {
        get { return decimal.FromOACurrency(nativeDecimal); }
        set { nativeDecimal = decimal.ToOACurrency(value); }
    }
}
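With that change in place, the failing call from the question succeeds. A short usage sketch, assuming the X2 struct above and the same usings as before:

var x2 = new[] { new X2 { X = 42m } };
var handle2 = GCHandle.Alloc(x2, GCHandleType.Pinned);   // no longer throws
Console.WriteLine(handle2.AddrOfPinnedObject());
handle2.Free();

One caveat worth knowing: OA Currency is a fixed-point format with four decimal places, so a decimal with more scale is rounded on the way in.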
You could also consider storing the four ints returned by decimal.GetBits() and reconstructing the value with the decimal(int[]) constructor, but I think it is unlikely to be faster; you'd have to try.
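For reference, a sketch of that alternative (the X2Bits name is mine), which keeps the four 32-bit words of the decimal in a blittable layout:

using System;

public struct X2Bits
{
    // The four words returned by decimal.GetBits(): the low, middle and
    // high 32 bits of the 96-bit integer, plus the scale/sign flags.
    private int lo, mid, hi, flags;

    public decimal X
    {
        get { return new decimal(new[] { lo, mid, hi, flags }); }
        set
        {
            int[] bits = decimal.GetBits(value);
            lo = bits[0];
            mid = bits[1];
            hi = bits[2];
            flags = bits[3];
        }
    }
}

Unlike the OACurrency version, this round-trips the full decimal range and scale, but every property access allocates a temporary array.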