Decimal byte array constructor in BinaryFormatter serialization

I am facing a very nasty problem that I cannot pin down.
I am running a very large ASP.NET business application containing many thousands of objects. It uses in-memory serialization/deserialization with MemoryStream to clone the state of the application (insurance contracts) and pass it on to other modules. It worked fine for years. Now, sometimes but not systematically, serialization throws the exception

Decimal byte array constructor requires an array of length four containing valid decimal bytes.

Running the same application with the same data, it works 3 times out of 5. I enabled all the CLR exceptions (Debug - Exceptions - CLR Exceptions - Enabled), so I would expect the program to stop as soon as a wrong initialization or assignment to a decimal field occurs. That doesn't happen.
I tried to split the serialization into more elementary objects to identify the field causing the problem, but it is very difficult. Between the working version in production and this one I moved from .NET 3.5 to .NET 4.0, and substantial changes were made to the UI part, not the business part. I will patiently go through all the changes.

It looks like an old-fashioned C problem, where some char *p writes where it shouldn't, and the damage only surfaces during serialization, when all the data is examined.

Is something like this possible in the managed environment of .NET? The application is huge, but I cannot see any abnormal memory growth. What would be a good way to debug and track down the problem?
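For reference, the clone helper boils down to something like this (simplified sketch, not the actual BinaryUtilities.SerializeCompressObject shown in the stack trace below, which also compresses the output):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class CloneHelper
{
    // Simplified version of the in-memory clone: serialize to a MemoryStream,
    // rewind, deserialize a copy. The exception is thrown in the Serialize call.
    public static T DeepClone<T>(T source)
    {
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream())
        {
            formatter.Serialize(stream, source);
            stream.Position = 0;
            return (T)formatter.Deserialize(stream);
        }
    }
}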

Below is part of the stack trace:

[ArgumentException: Decimal byte array constructor requires an array of length four containing valid decimal bytes.]
   System.Decimal.OnSerializing(StreamingContext ctx) +260

[SerializationException: Value was either too large or too small for a Decimal.]
   System.Decimal.OnSerializing(StreamingContext ctx) +6108865
   System.Runtime.Serialization.SerializationEvents.InvokeOnSerializing(Object obj, StreamingContext context) +341
   System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter, SerializationBinder binder) +448
   System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Write(WriteObjectInfo objectInfo, NameInfo memberNameInfo, NameInfo typeNameInfo) +969
   System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Serialize(Object graph, Header[] inHeaders, __BinaryWriter serWriter, Boolean fCheck) +1016
   System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph, Header[] headers, Boolean fCheck) +319
   System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph) +17
   Allianz.Framework.Helpers.BinaryUtilities.SerializeCompressObject(Object obj) in D:\SVN\SUV\branches\SUVKendo\DotNet\Framework\Allianz.Framework.Helpers\BinaryUtilities.cs:98
   Allianz.Framework.Session.State.BusinessLayer.BLState.SaveNewState(State state) in 

Sorry for the long story and the rather open-ended question; I'd really appreciate any help.

asked Aug 09 '13 by Marco Furlan
1 Answer

That is.... very interesting; that is not actually reading or writing data at that time - it is calling the before-serialization callback, aka [OnSerializing], which here maps to decimal.OnSerializing. What that does is attempt to sanity-check the bits - but it looks like there is simply a bug in the BCL. Here's the implementation in 4.5 (cough "reflector" cough):

[OnSerializing]
private void OnSerializing(StreamingContext ctx)
{
    try
    {
        this.SetBits(GetBits(this));
    }
    catch (ArgumentException exception)
    {
        throw new SerializationException(Environment.GetResourceString("Overflow_Decimal"), exception);
    }
}

The GetBits call returns the lo/mid/hi/flags array, so we can be pretty sure that the array passed to SetBits is non-null and of the right length. So for this to fail, the check that must be failing is inside SetBits, here:

private void SetBits(int[] bits)
{
    ....

    int num = bits[3];
    if (((num & 0x7f00ffff) == 0) && ((num & 0xff0000) <= 0x1c0000))
    {
        this.lo = bits[0];
        this.mid = bits[1];
        this.hi = bits[2];
        this.flags = num;
        return;
    }
    throw new ArgumentException(Environment.GetResourceString("Arg_DecBitCtor"));
}

Basically, if the if test passes we get in, assign the values, and exit successfully; if the if test fails, it ends up throwing an exception. bits[3] is the flags chunk, which holds the sign and scale, IIRC. So the question here is: how have you gotten hold of an invalid decimal with a broken flags chunk?
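To make that concrete, here is a small illustrative snippet (mine, not BCL code) that decodes the flags element returned by decimal.GetBits:

using System;

static class DecimalBitsDemo
{
    static void Main()
    {
        // decimal.GetBits returns { lo, mid, hi, flags }
        int[] bits = decimal.GetBits(12.345m);
        int flags = bits[3];

        int scale = (flags >> 16) & 0xFF;                          // bits 16-23: power of ten (0..28)
        bool negative = (flags & unchecked((int)0x80000000)) != 0; // bit 31: sign

        Console.WriteLine("lo={0} mid={1} hi={2} scale={3} negative={4}",
            bits[0], bits[1], bits[2], scale, negative);
        // For 12.345m this prints: lo=12345 mid=0 hi=0 scale=3 negative=False
    }
}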

To quote from MSDN:

The fourth element of the returned array contains the scale factor and sign. It consists of the following parts: Bits 0 to 15, the lower word, are unused and must be zero. Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number. Bits 24 to 30 are unused and must be zero. Bit 31 contains the sign: 0 means positive, and 1 means negative.

So for this test to fail, at least one of the following must be true:

  • the exponent is invalid (outside 0-28)
  • the lower word is non-zero
  • the upper byte (excluding the MSB) is non-zero
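
Which means you could test the flags chunk yourself with the same check SetBits applies - quick sketch, helper name is mine:

static bool HasValidFlags(decimal d)
{
    // Mirrors the test the BCL's SetBits performs on the flags element.
    int flags = decimal.GetBits(d)[3];
    return (flags & 0x7F00FFFF) == 0 && (flags & 0x00FF0000) <= 0x001C0000;
}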

Unfortunately, I have no magic way of finding which decimal is invalid...

The only ways I can think of looking here are:

  • scatter GetBits / new decimal(bits) checks throughout your code - perhaps as a void SanityCheck(this decimal) extension method (maybe with [Conditional("DEBUG")] or something); sketched below
  • add [OnSerializing] methods to your main domain model that log somewhere (console, maybe) so you can see which object the formatter was working on when it exploded; also sketched below
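
Rough sketch of both ideas (all the names here are illustrative, not anything from your codebase):

using System;
using System.Diagnostics;
using System.Runtime.Serialization;

public static class DecimalSanityChecks
{
    // Round-trips the bits through the decimal(int[]) constructor, which throws
    // the same ArgumentException the formatter is hitting on a corrupt value.
    [Conditional("DEBUG")]
    public static void SanityCheck(this decimal value, string label)
    {
        try
        {
            new decimal(decimal.GetBits(value));
        }
        catch (ArgumentException ex)
        {
            throw new InvalidOperationException("Corrupt decimal at: " + label, ex);
        }
    }
}

[Serializable]
public class Contract // stand-in for one of your domain objects
{
    public decimal Premium;

    [OnSerializing]
    private void OnSerializing(StreamingContext ctx)
    {
        // Log which instance the formatter is working on, so the console shows
        // the last object touched before the exception.
        Console.WriteLine("Serializing Contract; Premium bits: " +
            string.Join(",", decimal.GetBits(Premium)));
    }
}

Usage would then be something like contract.Premium.SanityCheck("Contract.Premium") sprinkled at the points where the values get assigned.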
answered Sep 27 '22 by Marc Gravell