 

Can integer division ever over/underflow, assuming the denominator <>0? [duplicate]

Can (true) integer division ever over/underflow (with the assumption that the denominator is not 0)?

Since the value always either stays the same or gets smaller (in integer division, the smallest non-zero absolute value of the denominator is 1, so the result can never be larger in magnitude than the numerator), I would assume not.

I'm asking more or less in the context of C/C++ standards, and I'm interested in how the various modern CPU architectures might handle integer division differently when it comes to defined/undefined behavior.

asked Nov 15 '19 by Qix - MONICA WAS MISTREATED

People also ask

Is integer underflow possible?

"Integer underflow" is sometimes used to identify signedness errors in which an originally positive number becomes negative as a result of subtraction. However, there are cases of bad subtraction involving unsigned integers, so it is not always a signedness issue.

Can division overflow happen?

Yes. Overflow can occur during two's complement signed integer division when the dividend is equal to the minimum (negative) value for the signed integer type and the divisor is equal to −1.

How do you know if it's overflow or underflow?

Simply put, overflow and underflow happen when we assign a value that is out of range of the declared data type of the variable. If the (absolute) value is too big, we call it overflow; if the value is too small, we call it underflow.
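
For example, in C, unsigned arithmetic wraps modulo 2^N, which makes both effects easy to demonstrate. A minimal sketch, assuming an 8-bit char (CHAR_BIT == 8, as on typical machines):

    #include <stdio.h>

    int main(void)
    {
        unsigned char u = 255;   /* maximum value of an 8-bit unsigned char */
        u = u + 1;               /* wraps around to 0: overflow of the range */

        unsigned char v = 0;     /* minimum value */
        v = v - 1;               /* wraps around to 255: underflow of the range */

        printf("%u %u\n", u, v); /* prints "0 255" */
        return 0;
    }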

What is integer overflow and underflow?

Integer overflows and underflows occur when input values fall outside the range that an integer variable can represent. While integer overflows themselves are not dangerous, they can lead to other vulnerabilities when exploited.


1 Answer

Since the value is always either staying the same or getting smaller...

That's what I used to think, too, but when we say that, we're quietly assuming the denominator is positive.

And since the denominator can be negative, there's one obscure but ruinous case: under 2's complement arithmetic, the mathematical result of INT_MIN / -1 is a number that's one more than INT_MAX.

That is, on a 16-bit 2's complement machine, INT_MIN is −32768, which is perfectly representable, but −32768 ÷ −1 wants to be +32768, and INT_MAX is just 32767. Similarly, in 32 bits, −2147483648 is representable but can't be divided by −1, either.
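
To see the mismatch concretely without invoking undefined behavior, here's a minimal C sketch that computes the mathematical quotient in a wider type (assuming long long is wider than int, as on typical machines):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Evaluating INT_MIN / -1 directly is undefined behavior in C,
           so compute the mathematical quotient in a wider type instead.
           (Assumes long long is wider than int.) */
        long long true_quotient = -(long long)INT_MIN;  /* |INT_MIN| */
        printf("INT_MAX             = %d\n", INT_MAX);
        printf("INT_MIN / -1 (math) = %lld\n", true_quotient);
        /* With 32-bit int: INT_MAX = 2147483647, quotient = 2147483648,
           exactly one more than INT_MAX. */
        return 0;
    }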

This is a peculiar quirk of 2's complement arithmetic, arising because the magnitude of INT_MIN is one greater than the magnitude of INT_MAX. Under one's complement or sign/magnitude arithmetic, on the other hand, INT_MIN is exactly the negative of INT_MAX, so there's no "ruinous case", and as far as I know division is well defined for all inputs (except, of course, for zero in the denominator).
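
If the operands aren't known in advance, the usual defense is to test for the two bad cases before dividing. A minimal sketch, using a hypothetical helper safe_div (not a standard library function):

    #include <limits.h>
    #include <stdio.h>

    /* Hypothetical helper: returns 1 and stores a / b in *result when the
       division is well defined, 0 on division by zero or on the one
       overflowing case INT_MIN / -1. */
    static int safe_div(int a, int b, int *result)
    {
        if (b == 0)
            return 0;                   /* division by zero */
        if (a == INT_MIN && b == -1)
            return 0;                   /* quotient would exceed INT_MAX */
        *result = a / b;
        return 1;
    }

    int main(void)
    {
        int r;
        if (safe_div(INT_MIN, -1, &r))
            printf("quotient: %d\n", r);
        else
            printf("division not representable\n");
        return 0;
    }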

answered Oct 05 '22 by Steve Summit