Why does the C# specification leave (int.MinValue / -1) implementation defined?

The expression int.MinValue / -1 results in implementation-defined behavior according to the C# specification:

7.8.2 Division operator

If the left operand is the smallest representable int or long value and the right operand is –1, an overflow occurs. In a checked context, this causes a System.ArithmeticException (or a subclass thereof) to be thrown. In an unchecked context, it is implementation-defined as to whether a System.ArithmeticException (or a subclass thereof) is thrown or the overflow goes unreported with the resulting value being that of the left operand.

Test program:

var x = int.MinValue;
var y = -1;
Console.WriteLine(unchecked(x / y));

This throws an OverflowException on .NET 4.5 32-bit, but it does not have to.
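
To see which of the two permitted behaviours a given runtime actually picks, the test can be wrapped in a try/catch (a quick sketch of my own, not part of the original test):

var x = int.MinValue;
var y = -1;
try
{
    // The spec allows either outcome here in an unchecked context.
    Console.WriteLine(unchecked(x / y));
}
catch (ArithmeticException ex) // OverflowException derives from ArithmeticException
{
    Console.WriteLine("Threw " + ex.GetType().Name);
}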

Why does the specification leave the outcome implementation-defined? Here's the case against doing that:

  1. The x86 idiv instruction always results in an exception in this case.
  2. On other platforms a runtime check might be necessary to emulate this (see the sketch after this list). But the cost of that check would be low compared to the cost of the division itself; integer division is already extremely expensive (15-30 cycles).
  3. This opens compatibility risks ("write once run nowhere").
  4. Developer surprise.
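
Regarding point 2, the guard a runtime would need on such platforms is tiny. Here is a hypothetical sketch in C# (the helper name DividePortable is mine; this is not what any JIT actually emits):

static int DividePortable(int dividend, int divisor)
{
    // One extra pair of comparisons before the hardware divide; cheap
    // next to the 15-30 cycles the division itself already costs.
    if (dividend == int.MinValue && divisor == -1)
        throw new OverflowException();
    return dividend / divisor; // can no longer overflow
}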

Also interesting is the fact that if x / y is a compile-time constant, we indeed get unchecked(int.MinValue / -1) == int.MinValue:

Console.WriteLine(unchecked(int.MinValue / -1)); // -2147483648

This means that x / y can have different behaviors depending on the syntactic form being used (and not only depending on the values of x and y). This is allowed by the specification but it seems like an unwise choice. Why was C# designed like this?
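
To make the difference concrete, here are both syntactic forms next to each other (my own sketch):

int x = int.MinValue, y = -1;

// Constant operands: folded by the compiler, always yields the wrapped value.
Console.WriteLine(unchecked(int.MinValue / -1)); // -2147483648

// Same values computed at runtime: implementation-defined,
// may print -2147483648 or throw an OverflowException instead.
Console.WriteLine(unchecked(x / y));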

A similar question points out where in the specification this exact behavior is prescribed but it does not (sufficiently) answer why the language was designed this way. Alternative choices are not discussed.

asked Aug 02 '15 18:08 by usr


2 Answers

This is a side-effect of the C# Language Specification's bigger brother, ECMA-335, the Common Language Infrastructure specification. Section III, chapter 3.31 describes what the div opcode does. It is a spec that the C# spec very often has to defer to, pretty inevitably, and it specifies that the instruction may throw but does not demand it.

Otherwise it is a realistic assessment of what real processors do, and the one that everybody uses is the weird one. Intel processors are excessively quirky about overflow behavior; they were designed back in the 1970s with the assumption that everybody would use the INTO instruction. Nobody does, but that is a story for another day. The processor does not ignore overflow on an IDIV, however; it raises the #DE trap, and you can't ignore that loud bang.

It is pretty tough to write a language spec on top of a woolly runtime spec on top of inconsistent processor behavior. There was little the C# team could do with that but forward the imprecise language. They already went beyond the spec by documenting OverflowException instead of ArithmeticException. Very naughty. They had a peek.

A peek that revealed the practice. It is very unlikely to be a problem: the jitter decides whether or not to inline, the non-inlined version throws, and the expectation is that the inlined version does as well. Nobody has been disappointed yet.

answered Sep 22 '22 20:09 by Hans Passant


A principal design goal of C# is reputedly the "Law of Minimum Surprise". According to this guideline the compiler should not attempt to guess the programmer's intent, but rather should signal to the programmer that additional guidance is needed to properly specify intent. This applies to the case of interest because, within the limitations of two's-complement arithmetic, the operation produces a very surprising result: Int32.MinValue / -1 evaluates to Int32.MinValue. An overflow has occurred, and an unavailable 33rd bit of 0 would be required to properly represent the correct value of Int32.MaxValue + 1.
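
The same missing 33rd bit can be seen with plain negation, which simply wraps at runtime (a small illustration of my own):

int min = int.MinValue;
// -min would be 2147483648, one more than Int32.MaxValue, so the result
// wraps back to Int32.MinValue: the bit pattern 0x80000000 is its own
// two's-complement negation.
Console.WriteLine(unchecked(-min)); // -2147483648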

As expected, and as noted in your quote, in a checked context an exception is raised to alert the programmer to the failure to properly specify intent. In an unchecked context the implementation is allowed either to behave as in the checked context, or to allow the overflow and return the surprising result. There are certain contexts, such as bit-twiddling, in which it is convenient to work with signed ints but where the overflow behaviour is actually expected and desired. By checking the implementation notes, the programmer can determine whether this behaviour is actually as expected.
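
As an illustration of the bit-twiddling style mentioned above (a generic example of my own, not tied to the division case), unchecked is often used precisely because wraparound is the intended behaviour:

// Typical hash-combining pattern: overflow of the multiplication is
// expected and harmless, so the arithmetic is deliberately left unchecked.
static int CombineHash(int h1, int h2)
{
    unchecked
    {
        return h1 * 31 + h2;
    }
}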

answered Sep 25 '22 20:09 by Pieter Geerkens