Why don't languages raise errors on integer overflow by default?

Several modern programming languages (including C++, Java, and C#) allow integer overflow to occur at runtime without raising any kind of error condition.

For example, consider this (contrived) C# method, which does not account for the possibility of overflow/underflow. (For brevity, the method also doesn't handle the case where the specified list is a null reference.)

//Returns the sum of the values in the specified list.
private static int sumList(List<int> list)
{
    int sum = 0;
    foreach (int listItem in list)
    {
        sum += listItem;
    }
    return sum;
}

If this method is called as follows:

List<int> list = new List<int>();
list.Add(2000000000);
list.Add(2000000000);
int sum = sumList(list);

An overflow will occur in the sumList() method, because the int type in C# is a 32-bit signed integer and the sum of the values in the list (4,000,000,000) exceeds the maximum 32-bit signed integer (2,147,483,647). The result wraps around modulo 2^32, so the sum variable ends up with a value of -294967296 rather than 4000000000; this is most likely not what the (hypothetical) developer of the sumList method intended.

Obviously, there are various techniques that can be used by developers to avoid the possibility of integer overflow, such as using a type like Java's BigInteger, or the checked keyword and /checked compiler switch in C#.
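
For illustration, here's a minimal C# sketch of opting in with the checked keyword; the same addition that silently wraps by default raises an OverflowException inside a checked expression:

using System;

class CheckedDemo
{
    static void Main()
    {
        int a = 2000000000;
        int b = 2000000000;

        try
        {
            // Inside a checked block/expression, overflow raises
            // System.OverflowException instead of silently wrapping.
            int sum = checked(a + b);
            Console.WriteLine(sum);
        }
        catch (OverflowException)
        {
            Console.WriteLine("Overflow detected!");
        }

        // Outside checked (or inside an explicit unchecked block), the same
        // addition wraps around to -294967296 without any error.
        Console.WriteLine(unchecked(a + b));
    }
}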

However, the question I'm interested in is why these languages were designed to allow integer overflow by default, instead of, for example, raising an exception when an operation is performed at runtime that would result in an overflow. It seems like such behavior would help avoid bugs in cases where a developer neglects to account for the possibility of overflow when writing code that performs arithmetic that could overflow. (These languages could have included something like an "unchecked" keyword to designate a block where integer overflow is permitted to occur without an exception being raised, for cases where that behavior is explicitly intended by the developer; C# actually does have this.)

Does the answer simply boil down to performance: the language designers didn't want their respective languages to default to "slow" integer arithmetic, where the runtime would need to do extra work to check for overflow on every applicable arithmetic operation, and this performance consideration outweighed the value of avoiding "silent" failures when an inadvertent overflow occurs?

Are there other reasons for this language design decision as well, other than performance considerations?

asked Sep 19 '08 by Jon Schneider




8 Answers

In C#, it was a question of performance. Specifically, out-of-the-box benchmarking.

When C# was new, Microsoft was hoping a lot of C++ developers would switch to it. They knew that many C++ folks thought of C++ as being fast, especially faster than languages that "wasted" time on automatic memory management and the like.

Both potential adopters and magazine reviewers were likely to get a copy of the new C#, install it, build a trivial app that no one would ever write in the real world, run it in a tight loop, and measure how long it took. Then they'd make a decision for their company, or publish an article, based on that result.

If that test showed C# to be slower than natively compiled C++, it's the kind of thing that would turn people off C# quickly. The fact that their C# app would catch overflow/underflow automatically is the kind of thing they might miss. So, it's off by default.

I think it's obvious that 99% of the time we want /checked to be on. It's an unfortunate compromise.
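
To see what that default means in practice, here's a sketch where the same code compiles either way and only its runtime behavior changes with the compiler switch (/checked is the csc switch mentioned in the question; CheckForOverflowUnderflow is the equivalent project property):

using System;

class CheckedSwitchDemo
{
    static void Main()
    {
        int a = 2000000000;
        int b = 2000000000;

        // Compiled with csc /checked+ (or CheckForOverflowUnderflow=true),
        // this line throws System.OverflowException at runtime.
        // Compiled with the default /checked-, it silently wraps to -294967296.
        int sum = a + b;
        Console.WriteLine(sum);
    }
}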

answered by Jay Bazuzi


I think performance is a pretty good reason. Consider every instruction in a typical program that increments an integer: if, instead of the simple op to add 1, it had to check every time whether adding 1 would overflow the type, the cost in extra cycles would be pretty severe.
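
A rough micro-benchmark sketch of that cost might look like this (the exact numbers depend heavily on the JIT and hardware, so treat it only as a way to measure, not as a result):

using System;
using System.Diagnostics;

class OverflowCheckCost
{
    static void Main()
    {
        const int iterations = 100_000_000;

        var sw = Stopwatch.StartNew();
        int a = 0;
        for (int i = 0; i < iterations; i++)
        {
            a = unchecked(a + 1);   // plain add, wraps silently if it ever overflows
        }
        sw.Stop();
        Console.WriteLine($"unchecked: {sw.ElapsedMilliseconds} ms (result {a})");

        sw.Restart();
        int b = 0;
        for (int i = 0; i < iterations; i++)
        {
            b = checked(b + 1);     // add plus an overflow test on every iteration
        }
        sw.Stop();
        Console.WriteLine($"checked:   {sw.ElapsedMilliseconds} ms (result {b})");
    }
}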

answered by David Hill


You work under the assumption that integer overflow is always undesired behavior.

Sometimes integer overflow is desired behavior. One example I've seen is representing an absolute heading value as a fixed-point number. Given a 32-bit unsigned int, 0 represents 0 (or 360) degrees, and the maximum value (0xffffffff) represents the largest angle just below 360 degrees.

#include <cstdint>
#include <iostream>

int main()
{
    uint32_t shipsHeadingInDegrees = 0;

    // Rotate by a bunch of degrees
    shipsHeadingInDegrees += 0x80000000; // 180 degrees
    shipsHeadingInDegrees += 0x80000000; // another 180 degrees, wraps around
    shipsHeadingInDegrees += 0x80000000; // another 180 degrees

    // Ship's heading is now 180 degrees
    std::cout << "Ships Heading Is "
              << (double(shipsHeadingInDegrees) / double(0xffffffff)) * 360.0
              << std::endl;

    return 0;
}

There are probably other situations where overflow is acceptable, similar to this example.

answered by Doug T.


It is likely 99% performance. On x86, the generated code would have to check the overflow flag after every operation, which would be a huge performance hit.

The other 1% would cover those cases where people are doing fancy bit manipulations or being 'imprecise' in mixing signed and unsigned operations and want the overflow semantics.
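
Hashing code is a common example of wanting the overflow semantics. A hypothetical combiner like the sketch below (the constants and names are just illustrative) deliberately relies on wrap-around, and an unchecked block makes that intent explicit:

static class HashCombiner
{
    // Deliberately relies on wrap-around multiplication and addition;
    // the unchecked block keeps it working even if the project is
    // compiled with /checked.
    public static int Combine(int h1, int h2)
    {
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + h1;   // may overflow; that is fine here
            hash = hash * 31 + h2;
            return hash;
        }
    }
}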

answered by Rob Walker


C/C++ never mandate trap behaviour. Even the obvious division by 0 is undefined behaviour in C++, not a specified kind of trap.

The C language doesn't have any concept of trapping, unless you count signals.

C++ has a design principle that it doesn't introduce overhead not present in C unless you ask for it. So Stroustrup would not have wanted to mandate that integers behave in a way which requires any explicit checking.

Some early compilers, and lightweight implementations for restricted hardware, don't support exceptions at all, and exceptions can often be disabled with compiler options. Mandating exceptions for language built-ins would be problematic.

Even if C++ had made integers checked, 99% of programmers in the early days would have turned it off for the performance boost...

answered by Steve Jessop


Because checking for overflow takes time. Each primitive mathematical operation, which normally translates into a single assembly instruction, would have to include a check for overflow, turning it into multiple assembly instructions and potentially making the program several times slower.

answered by Dima


Backwards compatibility is a big one. With C, it was assumed that you were paying enough attention to the size of your datatypes that if an over/underflow occurred, that was what you wanted. Then with C++, C# and Java, very little changed with how the "built-in" data types worked.

answered by Eclipse


If integer overflow is defined as immediately raising a signal, throwing an exception, or otherwise deflecting program execution, then any computations which might overflow will need to be performed in the specified sequence. Even on platforms where integer overflow checking wouldn't cost anything directly, the requirement that integer overflow be trapped at exactly the right point in a program's execution sequence would severely impede many useful optimizations.

If a language were to specify that integer overflows would instead set a latching error flag, were to limit how actions on that flag within a function could affect its value within calling code, and were to provide that the flag need not be set in circumstances where an overflow could not result in erroneous output or behavior, then compilers could generate more efficient code than any kind of manual overflow-checking programmers could use. As a simple example, if one had a function in C that would multiply two numbers and return a result, setting an error flag in case of overflow, a compiler would be required to perform the multiplication whether or not the caller would ever use the result. In a language with looser rules like I described, however, a compiler that determined that nothing ever uses the result of the multiply could infer that overflow could not affect a program's output, and skip the multiply altogether.

From a practical standpoint, most programs don't care about precisely when overflows occur, so much as they need to guarantee that they don't produce erroneous results as a consequence of overflow. Unfortunately, programming languages' integer-overflow-detection semantics have not caught up with what would be necessary to let compilers produce efficient code.
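
As a rough sketch of the kind of manual, latching-flag checking described above (a hypothetical helper, not a feature of any of these languages): each operation latches an error flag on overflow and keeps going, and the caller tests the flag once at the end.

using System;

class LatchingOverflow
{
    // Latches to true on the first overflow and stays set.
    public bool Overflowed { get; private set; }

    public int Multiply(int x, int y)
    {
        long wide = (long)x * y;        // do the multiply in a wider type
        if (wide > int.MaxValue || wide < int.MinValue)
        {
            Overflowed = true;          // latch the flag, but keep computing
        }
        return unchecked((int)wide);    // the wrapped result, as the hardware would produce
    }
}

class Program
{
    static void Main()
    {
        var ops = new LatchingOverflow();
        int r = ops.Multiply(100_000, 100_000);   // overflows a 32-bit int
        r = ops.Multiply(r, 3);

        // One check at the end instead of a check per operation.
        Console.WriteLine(ops.Overflowed
            ? "result is unreliable (an overflow occurred)"
            : $"result = {r}");
    }
}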

answered by supercat