 

Why does C# allow an *implicit* conversion from Long to Float, when this could lose precision?

A similar question, Long in Float, why?, does not answer what I am searching for.

The C# standard allows an implicit conversion from long to float. But any long greater than 2^24, when represented as a float, is bound to lose its 'value'. The C# standard clearly states that a long-to-float conversion may lose 'precision' but will never lose 'magnitude'.
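For example (a minimal illustration; the specific numbers are only there to show the effect), the following compiles with no cast and no warning, yet the float cannot hold the original value:

```csharp
using System;

long big = (1L << 24) + 1;      // 16,777,217 -- one more than 2^24
float f = big;                  // implicit long -> float, no cast, no warning

Console.WriteLine(big);         // 16777217
Console.WriteLine((long)f);     // 16777216 -- the nearest value a float can hold
```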

My questions are:
  1. In reference to integral types, what is meant by 'precision' and 'magnitude'? Isn't number n totally different from number n+1, unlike real numbers, where 3.333333 and 3.333329 may be considered close enough for a calculation (i.e. depending on what precision the programmer wants)?
  2. Isn't allowing an implicit conversion from long to float an invitation to subtle bugs, since it can lead a long to 'silently' lose value? (As a C# programmer, I am accustomed to the compiler doing an excellent job of guarding me against such issues.)

So what could have been the rationale of the C# language design team in making this conversion implicit? What am I missing here that justifies an implicit conversion from long to float?

Asked by Amit Mittal on Jun 25 '12


3 Answers

This is a good question. Actually, it can be generalized, since the same issue exists for the implicit conversions of:

  • int to float
  • uint to float
  • long to float (which you're asking about)
  • ulong to float
  • long to double
  • ulong to double.

In fact, all integral types (and even char!!) have an implicit conversion to float and double; however, only the conversions listed above cause a loss of precision. Another interesting thing to note is that the C# language spec has a self-conflicting argument when explaining "why there is no implicit conversion from decimal to double":

The decimal type has greater precision but smaller range than the floating-point types. Thus, conversions from the floating-point types to decimal might produce overflow exceptions, and conversions from decimal to the floating-point types might cause loss of precision. For these reasons, no implicit conversions exist between the floating-point types and decimal, and without explicit casts, it is not possible to mix floating-point and decimal operands in the same expression.
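To make that inconsistency concrete, here is a small sketch of the two behaviours side by side (the values are arbitrary): mixing decimal and double is a compile-time error, while the precision-losing long-to-float conversion needs no cast at all:

```csharp
using System;

decimal m = 1.1m;
double  d = 2.2;

// double bad = m + d;          // compile-time error: decimal and double cannot be mixed implicitly
double sum = (double)m + d;     // an explicit cast is required in either direction

long  big = 123456789012L;
float f   = big;                // yet this precision-losing conversion compiles implicitly
Console.WriteLine($"{sum}, {(long)f}");   // the second number is only close to big, not equal to it
```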

The question of "why this decision was made" could best be answered by someone like Eric Lippert, I think. My best guess: this was one of those things where the language designers didn't have any strong arguments for going one way or the other, so they picked what they thought was the better of the alternatives, although that is arguable. In their defense, when you convert a large long to float, you do lose precision, but you still get the best representation of that number in the floating-point world. It is nothing like converting, say, an int to byte, where there could be an overflow (the integer value may be outside the bounds of what a byte can represent) and you get an unrelated/wrong number. Still, in my opinion, it would have been more consistent with not having implicit conversions from decimal to the floating-point types if these other precision-losing conversions weren't implicit either.
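To illustrate that distinction (a rough sketch; the particular values are arbitrary): the long-to-float result is the nearest representable value, whereas a narrowing integer cast can hand you a number that looks unrelated:

```csharp
using System;

long l = 10_000_000_019L;
float f = l;                    // implicit; rounds to the nearest representable float
Console.WriteLine((long)f);     // 10000000000 -- precision lost, magnitude preserved

int  i = 300;
byte b = (byte)i;               // explicit cast required; the value wraps around
Console.WriteLine(b);           // 44 -- looks unrelated to 300
```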

Answered by Eren Ersönmez on Oct 03 '22


In general, floating point numbers don't represent many numbers exactly. By their nature they are inexact and subject to precision errors. It really doesn't add value to warn you about what is always the case with floating point.
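A tiny sketch of that inherent inexactness: even a literal as simple as 0.1 has no exact binary floating-point representation:

```csharp
using System;

float tenth = 0.1f;
// Printing 9 significant digits reveals the nearest value the float actually stores.
Console.WriteLine(tenth.ToString("G9"));   // 0.100000001
```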

Answered by kenny on Oct 03 '22


> 1. In reference to integral types, what is meant by 'precision' and 'magnitude'? Isn't number n totally different from number n+1, unlike real numbers, where 3.333333 and 3.333329 may be considered close enough for a calculation (i.e. depending on what precision the programmer wants)?

'Precision' defines how many digits a number can carry. One byte can only carry 2 decimal digits if you (for simplicity) encode them in BCD. Let's say you have 2 bytes available. You can use them to encode the numbers 0-9999 in an integer format, or you can define a format where the last digit is a decimal exponent.

You can then encode 0-999 multiplied by anything from 10^0 to 10^9.

Instead of numbers from 0-9999, you can now encode numbers up to 999 000 000 000. But if you cast 9999 from the integer format to the new format, you get only 9990. You have gained a larger span of possible numbers (your magnitude), but you have lost precision.
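A hedged sketch of that hypothetical two-byte format (the Encode/Decode helpers and their names are purely illustrative, not any real API):

```csharp
using System;

// Hypothetical format: value = mantissa (0-999) * 10^exponent (0-9).
static (int Mantissa, int Exponent) Encode(int value)
{
    int exponent = 0;
    while (value > 999)   // keep at most 3 significant decimal digits
    {
        value /= 10;      // dropping the lowest digit is the precision loss
        exponent++;
    }
    return (value, exponent);
}

static long Decode((int Mantissa, int Exponent) v) =>
    v.Mantissa * (long)Math.Pow(10, v.Exponent);

Console.WriteLine(Decode(Encode(9999)));   // 9990 -- last digit lost
Console.WriteLine(Decode(Encode(999)));    // 999  -- small values survive exactly
```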

With float and double, the following values can be represented exactly (int = 32 bits, long = 64 bits, both signed):

  • int -> float: -2^24 to 2^24
  • int -> double: all values
  • long -> float: -2^24 to 2^24
  • long -> double: -2^53 to 2^53
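A quick round-trip sketch to check those boundaries from C#:

```csharp
using System;

long f24 = 1L << 24;                                    // 16,777,216
Console.WriteLine((long)(float)f24 == f24);             // True  -- still exact
Console.WriteLine((long)(float)(f24 + 1) == f24 + 1);   // False -- rounded away

long d53 = 1L << 53;                                    // 9,007,199,254,740,992
Console.WriteLine((long)(double)d53 == d53);            // True  -- still exact
Console.WriteLine((long)(double)(d53 + 1) == d53 + 1);  // False -- rounded away
```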

> 2. Isn't allowing an implicit conversion from long to float an invitation to subtle bugs, since it can lead a long to 'silently' lose value? (As a C# programmer, I am accustomed to the compiler doing an excellent job of guarding me against such issues.)

Yes, it introduces silent bugs. If you expect the compiler to give you any help against these issues, forget it: you are on your own. I don't know of any language that warns about losing precision.

One such bug: Ariane rocket...
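If you want the guard the compiler doesn't give you, one option is a small helper that round-trips the value and throws when the conversion would be lossy. ToFloatChecked below is a hypothetical name and only a sketch (values at the very top of the long range would need extra care):

```csharp
using System;

// Hypothetical guard: performs the long -> float conversion,
// but refuses to lose precision silently.
static float ToFloatChecked(long value)
{
    float result = value;          // the implicit conversion under discussion
    if ((long)result != value)     // round-trip check
        throw new OverflowException($"{value} is not exactly representable as a float.");
    return result;
}

Console.WriteLine(ToFloatChecked(1L << 24));   // succeeds: 2^24 is exactly representable
// ToFloatChecked((1L << 24) + 1);             // would throw: precision would be lost
```

Note that a checked block does not help here; checked arithmetic guards against integral overflow, not against floating-point precision loss.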

Answered by Thorsten S. on Oct 03 '22