
Why does Int32.MaxValue * Int32.MaxValue == 1?

Tags: c#, int, math

I know that Int32.MaxValue * Int32.MaxValue will yield a number larger than an Int32 can hold; but shouldn't this statement raise some kind of exception?

I ran across this when doing something like if (X * Y > Z), where all three values are Int32. If X and Y are sufficiently large, you get a bogus value from X * Y.

Why is this so, and how do I get around it, besides casting everything to Int64?
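For concreteness, here is a minimal sketch of that scenario; x, y and z are made-up values, chosen only so that the product overflows:

int x = 50000;
int y = 50000;
int z = 2000000000;

// Mathematically x * y is 2,500,000,000, which does not fit in an Int32,
// so the 32-bit product wraps around to -1,794,967,296 and the comparison quietly lies.
Console.WriteLine(x * y > z);        // False
Console.WriteLine((long)x * y > z);  // True: widened to 64 bits before multiplying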

asked Jun 08 '10 by Greg Balajewicz

2 Answers

Because Int32 arithmetic confines the result to 32 bits.

Have a look at the math at the byte level. Int32.MaxValue is 0x7FFFFFFF, and

7FFFFFFF * 7FFFFFFF = 3FFFFFFF00000001

As you can see, the lowest 4 bytes are 00000001, i.e. 1; everything above them is discarded.
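The same thing can be seen from C#; this is just an illustrative sketch:

// The full 64-bit product of Int32.MaxValue with itself is 0x3FFFFFFF00000001.
long full = (long)int.MaxValue * int.MaxValue;
Console.WriteLine(full.ToString("X"));   // 3FFFFFFF00000001

// A 32-bit multiply keeps only the low 4 bytes, which are 00000001.
// (unchecked is needed here only because constant expressions are checked at compile time.)
int truncated = unchecked(int.MaxValue * int.MaxValue);
Console.WriteLine(truncated);            // 1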

answered by Dan McGrath


By default, C# arithmetic is done in an unchecked context, meaning values will roll over.

You can use the checked and unchecked keywords to control that behavior.
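A minimal sketch of both behaviours (the variable names are arbitrary):

int x = int.MaxValue;
int y = int.MaxValue;

// unchecked is the default for non-constant expressions: the product silently wraps to 1
Console.WriteLine(unchecked(x * y));     // 1

// checked: overflow throws instead of wrapping
try
{
    Console.WriteLine(checked(x * y));
}
catch (OverflowException)
{
    Console.WriteLine("Overflow detected");
}

If you want overflow checking on by default for a whole project, the compiler also has a -checked option (the CheckForOverflowUnderflow build property); you can then use unchecked to opt out where wrapping is intended.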

answered by Bryan Watts