The following function (extracted from a .cpp file) gives two different results (that is, the output buffer int_image differs) when it is executed on a PC with Visual Studio (Intel i7 CPU running Windows 7) and on my Android phone (P880). The two input buffers im1 and im2, of type int8 (a synonym for char), are exactly the same (checked), as are the parameters w and h. I can't understand why this is happening:
void Compute(int8* im1,
             int8* im2,
             int w,
             int h,
             int* int_image)
{
    int index = 0;
    int sum;
    for (int i = 0; i < h; i++)
    {
        // reset this column sum
        sum = 0;
        for (int j = 0; j < w; j++)
        {
            int pn;
            int8 v1, v2;
            v1 = im1[index];
            v2 = im2[index];
            pn = v1 * v2;
            //pn = ((int)im1[index]) * ((int)im2[index]);
            sum += pn;
            if (i == 0)
                int_image[index] = sum;
            else
                int_image[index] = int_image[index - w] + sum;
            index++;
        }
    }
}
Note: the char images im1 and im2 can be large enough that an integer overflow can happen (I assumed that this kind of situation is handled identically by the two compilers, but at this point I am not so sure).
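Signed integer overflow is undefined behaviour in C++, so if it does occur the two compilers are free to diverge. One quick way to rule it out (a diagnostic sketch only, not part of the original code; the helper name Overflows32 is hypothetical, and it assumes the pixels are meant to be signed) is to repeat the same computation with 64-bit arithmetic and check whether any value falls outside the int range:

#include <cstdint>
#include <climits>
#include <vector>

// Re-runs the same integral-image accumulation as Compute, but in 64 bits.
// Returns true if any intermediate value would not fit in a 32-bit int,
// i.e. if the original int-based version overflows.
bool Overflows32(const signed char* im1, const signed char* im2, int w, int h)
{
    std::vector<int64_t> wide(static_cast<size_t>(w) * h);
    int index = 0;
    for (int i = 0; i < h; i++)
    {
        int64_t sum = 0;
        for (int j = 0; j < w; j++)
        {
            sum += static_cast<int64_t>(im1[index]) * im2[index];
            wide[index] = (i == 0) ? sum : wide[index - w] + sum;
            if (wide[index] > INT_MAX || wide[index] < INT_MIN)
                return true;   // the int version overflows at this pixel
            index++;
        }
    }
    return false;
}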
Found the source of the error. I had defined int8 as char, believing that char is signed by default. Instead, it is unsigned with gcc (on ARM), while signed with Visual C++; the standard leaves the signedness of plain char implementation-defined. So I suggest that all programmers out there use signed char and unsigned char explicitly when defining their own macro or typedef for int8 and uint8.
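For illustration, here is a minimal sketch of the fix and of why plain char gives different numbers on the two platforms (the typedef names mirror the ones used in the question, and the byte value 0xFF is just an example pixel):

#include <cstdio>

typedef signed char   int8;    // explicit signedness: never rely on plain char
typedef unsigned char uint8;

int main()
{
    char raw = static_cast<char>(0xFF);     // the byte 0xFF, e.g. a bright pixel
    int as_plain  = raw;                    // 255 with gcc on ARM, -1 with Visual C++
    int as_signed = static_cast<int8>(raw); // -1 on both compilers
    std::printf("plain char: %d, signed char: %d\n", as_plain, as_signed);
    return 0;
}

Note that the standard header <cstdint> already provides int8_t and uint8_t, which avoid the problem entirely.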
There is an interesting discussion about this on Stack Overflow.