 

Declaring floats, why default type double?

Tags:

java

I am curious as to why float literals must be declared like so:

float f = 0.1f; 

Instead of

float f = 0.1; 

Why is the default type a double? Why can't the compiler infer that it is a float from looking at the left side of the assignment? Google only turns up explanations of what the default values are, not why they are so.

arynaq asked May 04 '13


1 Answer

Why is the default type a double?

That's a question that would be best asked of the designers of the Java language. They are the only people who know the real reasons why that language design decision was made. But I expect that the reasoning was something along the following lines:

They needed to distinguish between the two types of literals because they do actually mean different values ... from a mathematical perspective.

Supposing they made "float" the default for literals, consider this example:

// (Hypothetical "java" code ...)
double d = 0.1;
double d2 = 0.1d;

In the above, the d and d2 would actually have different values. In the first case, a low precision float value is converted to a higher precision double value at the point of assignment. But you cannot recover precision that isn't there.
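To see how much precision is, and isn't, there, you can print the exact binary values of the two literals with BigDecimal. This is real, current Java (where 0.1f is a float and 0.1 is a double); the class name is just for illustration:

import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // new BigDecimal(double) preserves the exact binary value of its
        // argument instead of rounding it for display.
        System.out.println(new BigDecimal(0.1f));
        // prints 0.100000001490116119384765625
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}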

I posit that a language design where those two statements are both legal and mean different things is a BAD idea ... considering that the actual meaning of the first statement is different from the "natural" meaning.

By doing it the way they've done it:

double d = 0.1f;
double d2 = 0.1;

are both legal, and again mean different things. But in the first statement the programmer's intention is clear, and in the second statement the "natural" meaning is what the programmer gets. And in this case:

float f = 0.1f;
float f2 = 0.1;    // compilation error!

... the compiler picks up the mismatch.
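A minimal, runnable sketch of those three cases (the class name is illustrative; the quoted error message is what recent versions of javac emit):

public class LiteralDemo {
    public static void main(String[] args) {
        double d = 0.1f;   // float literal, widened to double on assignment
        double d2 = 0.1;   // double literal
        System.out.println(d == d2);   // false: the widened float keeps its float-sized error

        float f = 0.1f;    // fine
        // float f2 = 0.1; // rejected by javac:
        //                 // "incompatible types: possible lossy conversion from double to float"
        System.out.println(f);         // 0.1
    }
}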


I am guessing that, with modern hardware, using floats is the exception and not the rule (doubles being used instead), so at some point it would make sense to assume that the user intends 0.1f when he writes float f = 0.1;

They could do that already. But the problem is coming up with a set of type conversion rules that work ... and that are simple enough that you don't need a degree in Java-ology to understand. Having 0.1 mean different things in different contexts would be confusing. And consider this:

void method(float f) { ... }
void method(double d) { ... }

// Which overload is called in the following?
this.method(1.0);

Programming language design is tricky. A change in one area can have consequences in others.


UPDATE to address some points raised by @supercat.

@supercat: Given the above overloads, which method will be invoked for method(16777217)? Is that the best choice?

I incorrectly commented that it would be a compilation error. In fact, the answer is method(float).

The JLS says this:

15.12.2.5. Choosing the Most Specific Method

If more than one member method is both accessible and applicable to a method invocation, it is necessary to choose one to provide the descriptor for the run-time method dispatch. The Java programming language uses the rule that the most specific method is chosen.

...

[The symbols m1 and m2 denote methods that are applicable.]

[If] m2 is not generic, and m1 and m2 are applicable by strict or loose invocation, and where m1 has formal parameter types S1, ..., Sn and m2 has formal parameter types T1, ..., Tn, the type Si is more specific than Ti for argument ei for all i (1 ≤ i ≤ n, n = k).

...

The above conditions are the only circumstances under which one method may be more specific than another.

A type S is more specific than a type T for any expression if S <: T (§4.10).

In this case, we are comparing method(float) and method(double) which are both applicable to the call. Since float <: double, it is more specific, and therefore method(float) will be selected.
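Putting that resolution rule into a runnable sketch (the class and method names are illustrative; 16777217 is 2^24 + 1, the first int that a float cannot represent exactly):

public class OverloadDemo {
    static void method(float f)  { System.out.println("float:  " + f); }
    static void method(double d) { System.out.println("double: " + d); }

    public static void main(String[] args) {
        method(1.0);       // double literal: only method(double) is applicable
        method(1.0f);      // float literal: method(float) is more specific
        method(16777217);  // int literal: both overloads are applicable via
                           // widening; float <: double, so method(float) wins
                           // and prints 1.6777216E7, silently losing the 1
    }
}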

@supercat: Such behavior may cause problems if e.g. an expression like int2 = (int) Math.Round(int1 * 3.5) or long2 = Math.Round(long1 * 3.5) gets replaced with int1 = (int) Math.Round(int2 * 3) or long2 = Math.Round(long1 * 3)

The change would look harmless, but the first two expressions are correct up to 613566756 or 2573485501354568 and the latter two fail above 5592405 [the last being completely bogus above 715827882].

If you are talking about a person making that change ... well yes.

However, the compiler won't make that change behind your back. For example, int1 * 3.5 has type double (the int is converted to a double), so you end up calling Math.round(double).
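A sketch of that difference (the constant 800_000_000 is arbitrary, chosen so that tripling it overflows int):

public class PromotionDemo {
    public static void main(String[] args) {
        int int1 = 800_000_000;

        // int1 * 3.5: int1 is promoted to double before the multiply, so the
        // product is a double and Math.round(double), returning long, is chosen.
        long ok = Math.round(int1 * 3.5);
        System.out.println(ok);    // 2800000000

        // int1 * 3 is pure int arithmetic: it overflows before Math.round is
        // even called, and the int result then selects Math.round(float).
        int bad = Math.round(int1 * 3);
        System.out.println(bad);   // -1894967296
    }
}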

As a general rule, Java arithmetic will implicitly convert from "smaller" to "larger" numeric types, but not from "larger" to "smaller".

However, you do still need to be careful since (in your rounding example):

  • the product of an integer and a floating-point number may not be representable with sufficient precision, because (say) a float has fewer bits of precision than an int.

  • Math.round itself saturates: an argument outside the range of the result type is clamped to the smallest / largest value of that type (int for round(float), long for round(double)). Both points are illustrated in the sketch below this list.
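Both gotchas in a short, runnable sketch (the constants are chosen purely to trigger each effect; Math.round(float) returns int and Math.round(double) returns long):

public class RoundingGotchas {
    public static void main(String[] args) {
        // (1) float has a 24-bit significand, fewer than int's 31 magnitude
        // bits, so a long argument squeezed through Math.round(float) loses
        // precision. Overload resolution picks round(float) for a long, since
        // float <: double makes it the more specific applicable overload.
        long product = 16_777_217L * 3;            // 50331651, exact as a long
        System.out.println(Math.round(product));   // 50331652: widened to float first

        // (2) Math.round saturates instead of wrapping.
        System.out.println(Math.round(1e30));      // 9223372036854775807 (Long.MAX_VALUE)
        System.out.println(Math.round(-1e30f));    // -2147483648 (Integer.MIN_VALUE)
    }
}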

But all of this illustrates that arithmetic support in a programming language is tricky, and there are inevitable gotchas for a new or unwary programmer.

Stephen C answered Sep 16 '22