Take a look at the following rules. Any arithmetic operation between two integers produces an integer result. Any arithmetic operation between an integer and a double produces a double.
The result of dividing a double by a double is a double. However, if both expressions have type int, then the result is an int.
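A quick sketch of those rules in action (the variable names and values are just illustrative):

int a = 5, b = 2;
System.out.println(a / b);     // 2   -> int / int is an int (truncated)
System.out.println(a / 2.0);   // 2.5 -> int / double is a double
System.out.println(5.0 / 2.0); // 2.5 -> double / double is a double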
double num = 5;
That avoids a cast. But you'll find that the cast conversions are well-defined. You don't have to guess; just check the JLS. int to double is a widening conversion. From §5.1.2:
Widening primitive conversions do not lose information about the overall magnitude of a numeric value.
[...]
Conversion of an int or a long value to float, or of a long value to double, may result in loss of precision; that is, the result may lose some of the least significant bits of the value. In this case, the resulting floating-point value will be a correctly rounded version of the integer value, using IEEE 754 round-to-nearest mode (§4.2.4).
5 can be expressed exactly as a double.
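A small sketch of the distinction (values are illustrative): every int fits exactly in a double's 53-bit significand, but a large long may not:

int i = 5;
double d = i;                // widening int -> double is always exact
long big = (1L << 53) + 1;   // 9007199254740993 needs 54 significant bits
double lossy = big;          // widening long -> double may round
System.out.println(d);                   // 5.0
System.out.println((long) lossy == big); // false: the last bit was rounded away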
What's wrong with casting primitives?
If you don't want to cast for some reason, you could do
double d = num * 1.0 / denom;
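This works because * and / have equal precedence and are evaluated left to right, so num is promoted to double before the division happens. A short sketch, assuming num = 5 and denom = 20:

int num = 5, denom = 20;
double d = num * 1.0 / denom;   // evaluated as (num * 1.0) / denom -> 5.0 / 20 -> 0.25
double bad = num / denom * 1.0; // evaluated as (num / denom) * 1.0 -> 0 * 1.0 -> 0.0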
I don't like casting primitives; who knows what may happen?
Why do you have an irrational fear of casting primitives? Nothing bad will happen when you cast an int to a double. If you're just not sure how it works, look it up in the Java Language Specification. Casting an int to double is a widening primitive conversion.
You can get rid of the extra pair of parentheses by casting the denominator instead of the numerator:
double d = num / (double) denom;
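For comparison, both casts give the same result; a short sketch with the values assumed to be 5 and 20:

int num = 5, denom = 20;
double viaNumerator = (double) num / denom;   // 0.25: the cast binds tighter than /
double viaDenominator = num / (double) denom; // 0.25: same result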
If you change the type of one of the variables, you have to remember to sneak in a double again if your formula changes, because if that variable stops being part of the calculation, the result is messed up. So I make a habit of casting within the calculation, and I add a comment next to it.
double d = 5 / (double) 20; //cast to double, to do floating point calculations
Note that casting the result won't do it:
double d = (double)(5 / 20); //produces 0.0
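Breaking that down step by step (with the same values) shows why the cast comes too late:

int num = 5, denom = 20;
int truncated = num / denom;   // integer division happens first: 5 / 20 -> 0
double d = (double) truncated; // the cast merely widens the already-truncated 0 -> 0.0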
Maybe you will not do the cast explicitly, but it will happen all the same.
Now, there are several ways we can try to get a precise double value (where num and denom are of type int, and of course some cast is involved, explicit or implicit); a verification sketch follows the list:
double d = (double) num / denom;
double d = ((double) num) / denom;
double d = num / (double) denom;
double d = (double) num / (double) denom;
but not double d = (double) (num / denom);
double d = num * 1.0 / denom;
double d = num / 1d / denom;
double d = (num + 0.0) / denom;
double d = num; d /= denom;
but not double d = num / denom * 1.0;
and not double d = 0.0 + ( num / denom );
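A runnable sketch verifying the list above, assuming num = 5 and denom = 20:

int num = 5, denom = 20;
System.out.println((double) num / denom);          // 0.25
System.out.println(num / (double) denom);          // 0.25
System.out.println((double) num / (double) denom); // 0.25
System.out.println(num * 1.0 / denom);             // 0.25
System.out.println(num / 1d / denom);              // 0.25
System.out.println((num + 0.0) / denom);           // 0.25
double d = num; d /= denom;
System.out.println(d);                             // 0.25
System.out.println((double) (num / denom));        // 0.0 -- truncated before the cast
System.out.println(num / denom * 1.0);             // 0.0 -- int division runs first
System.out.println(0.0 + (num / denom));           // 0.0 -- int division runs first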
Now, if you are asking which one is better, explicit or implicit?
Well, let's not go for a single straight answer here. Simply remember: we programmers don't like surprises or magic in source code, and we really hate Easter eggs.
Also, an extra operation will certainly not make your code more efficient, right?
Cast one of the integers (or both of them) to double to force the operation to be done with floating-point math; otherwise integer math is always used. So:
1. double d = (double)5 / 20;
2. double v = (double)5 / (double) 20;
3. double v = 5 / (double) 20;
Note that casting the result won't do it, because the division inside the parentheses is performed first, as integer division.
double d = (double)(5 / 20); //produces 0.0
I do not think there is any problem with casting of the kind you are worried about.