Why are certain implicit type conversions safe on one machine but not on another? How can I prevent these cross-platform issues?

I recently found a bug in my code that took me a few hours to debug.

The problem was in a function defined as:

unsigned int foo(unsigned int i) {
    long int v[] = {i - 1, i, i + 1};
    // ...
    return x; // computed by the function; how is not essential to this problem
}

The definition of v didn't cause any issue on my development machine (Ubuntu 12.04 32-bit, g++ compiler), where the unsigned int values were implicitly converted to long int and the negative values were handled correctly.

On a different machine (Ubuntu 12.04 64-bit, g++ compiler), however, this operation was not safe. When i = 0, v[0] was not set to -1 but to some weird big value (as often happens when trying to make an unsigned int negative).

I could solve the issue by casting the value of i to long int:

long int v[] = {(long int)i - 1, (long int)i, (long int)i + 1};

and everything worked fine on both machines.
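For reference, here is a self-contained program that reproduces the difference (a hypothetical demo.cpp; the assumption is that it is compiled as C++98/C++03, g++'s default on Ubuntu 12.04 — in C++11 the braced initializer would additionally be rejected as narrowing where long is 32 bits):

#include <iostream>

unsigned int foo(unsigned int i) {
    long int v[] = {i - 1, i, i + 1}; // i - 1 wraps in unsigned arithmetic first
    std::cout << "v[0] = " << v[0] << '\n';
    return i;
}

int main() {
    foo(0); // prints -1 where long is 32 bits, 4294967295 where long is 64 bits
}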

I can't figure out why the first version works on one machine but not on the other.

Can you help me understand this, so that I can avoid this and similar issues in the future?

asked Sep 18 '12 by lucacerone

1 Answer

For unsigned values, addition and subtraction are well-defined as modulo arithmetic, so 0U - 1 wraps around to std::numeric_limits<unsigned>::max().
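A quick way to see this wraparound (a minimal sketch; the printed value assumes a 32-bit unsigned int):

#include <iostream>
#include <limits>

int main() {
    unsigned int u = 0;
    // unsigned arithmetic is defined modulo 2^N, so 0 - 1 wraps to the maximum
    std::cout << (u - 1) << '\n';                                  // e.g. 4294967295
    std::cout << std::numeric_limits<unsigned int>::max() << '\n'; // the same value
}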

When converting from unsigned to signed, if the destination type is large enough to hold every value of the source's unsigned type, the value is simply copied into the destination unchanged. If the destination type is not large enough to hold all the unsigned values, I believe the result is implementation-defined (will try to find the standard reference).

So when long is 64 bits (presumably the case on your 64-bit machine), the unsigned value fits and is copied straight through, which is why you saw the large positive value.

When long is 32 bits, as on your 32-bit machine, the implementation most likely just reinterprets the bit pattern as a signed value, which is -1 in this case. The sketch below models both cases with fixed-width types.
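Here is a minimal sketch of both conversions using the fixed-width types from <cstdint>, so the two behaviors can be demonstrated on a single machine (the exact value 4294967295 assumes a 32-bit unsigned int):

#include <cstdint>
#include <iostream>

int main() {
    unsigned int wrapped = 0u - 1u; // UINT_MAX, e.g. 4294967295

    std::int64_t wide = wrapped;    // destination holds every unsigned int value:
                                    // 4294967295 is preserved (the 64-bit-long case)

    std::int32_t narrow = static_cast<std::int32_t>(wrapped);
    // out of range: implementation-defined result; g++ on two's-complement
    // hardware reinterprets the bit pattern, giving -1 (the 32-bit-long case)

    std::cout << wide << '\n' << narrow << '\n';
}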

EDIT: The simplest way to avoid these problems is to avoid mixing signed and unsigned types. What does it mean to subtract one from a value whose type doesn't allow for negative numbers? I'm going to argue that the function parameter should be a signed value in your example, along the lines of the sketch below.
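One possible rewrite, assuming foo's callers can pass a signed value (the body is a placeholder, since the original elides what the function actually computes):

long int foo(long int i) {
    long int v[] = {i - 1, i, i + 1}; // all-signed arithmetic: i == 0 gives v[0] == -1
    // ...
    return v[1]; // placeholder return
}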

That said, g++ (at least since version 4.5) provides a handy -Wsign-conversion flag that detects this issue in your particular code.
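For example, compiling a file like the following with g++ -Wsign-conversion should produce a diagnostic on the marked line (the file and function names are hypothetical, and the exact wording of the warning varies by g++ version):

// demo.cpp -- compile with: g++ -Wsign-conversion -c demo.cpp
long int bad(unsigned int i) {
    int x = i;  // implicit unsigned-to-signed conversion: g++ warns that
                // the sign of the result may change
    return x - 1;
}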

answered by Mark B