celsius = (5.0/9.0) * (fahr-32.0);
Is it just a design choice that the C developers settled on, or is there a reason for it? I believe a float is smaller than a double, so it might be to prevent overflow or loss of precision caused by not knowing which decimal format to use. Is that the reason, or am I overlooking something?
double is mostly used for calculations in programming to reduce errors when decimal values are rounded off. float can still be used, but only in cases where a few digits of precision are enough.
double is more precise than float: it stores 64 bits, double the number of bits float can store. We prefer double over float when we need precision up to 15 or 16 significant decimal digits; otherwise we can stick to float in most applications, as double is more expensive in memory and computation.
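A quick way to see that difference is to store the same constant in both types and print it with more digits than either can hold (a minimal sketch; the value of pi here is just an arbitrary example constant):

#include <stdio.h>

int main(void)
{
    /* Same constant stored in both types; pi is only an example value. */
    float  f = 3.14159265358979323846f;
    double d = 3.14159265358979323846;

    printf("float : %.20f\n", f);   /* accurate to roughly 7 digits */
    printf("double: %.20f\n", d);   /* accurate to roughly 15-16 digits */
    return 0;
}

The float output diverges from the true value after about the seventh significant digit, while the double stays correct to about the fifteenth or sixteenth.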
The double in C is a data type used to store high-precision floating-point numbers (up to 15 to 17 significant digits). It is used to store large decimal values, and a double occupies twice the storage of a float.
A double is named as such because it has double the "precision" of a float. In practice this means it uses twice the space of a float: if your float is 32 bits, then your double will be 64 bits.
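You can confirm the sizes and the guaranteed decimal precision on your own platform with sizeof and the macros in <float.h> (a minimal sketch; the exact numbers are implementation-defined, though 32-bit float and 64-bit double are typical):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Sizes are implementation-defined, but 4 and 8 bytes are typical. */
    printf("sizeof(float)  = %zu bytes\n", sizeof(float));
    printf("sizeof(double) = %zu bytes\n", sizeof(double));

    /* FLT_DIG and DBL_DIG report how many decimal digits each type
       can round-trip reliably. */
    printf("FLT_DIG = %d\n", FLT_DIG);   /* typically 6 */
    printf("DBL_DIG = %d\n", DBL_DIG);   /* typically 15 */
    return 0;
}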
celsius = (5.0/9.0) * (fahr-32.0);
In this expression, 5.0, 9.0, and 32.0 are doubles. That's the default type for a floating-point constant - if you wanted them to be floats, then you would use the F suffix:

celsius = (5.0F/9.0F) * (fahr-32.0F);

Note that if fahr was a double, then the result of this last expression would still be a double: as Vaibhav noted, types are promoted in such a way as to avoid potentially losing precision.
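To illustrate the suffix and the promotion rule, here is a small sketch (the variable names are mine, not from the book): with a float fahr and unsuffixed constants, the whole expression is evaluated in double; with F-suffixed constants it stays in float.

#include <stdio.h>

int main(void)
{
    float fahr = 100.0F;

    /* Unsuffixed constants are doubles, so fahr is promoted and the
       arithmetic is carried out in double precision. */
    double celsius_d = (5.0 / 9.0) * (fahr - 32.0);

    /* With the F suffix everything stays float. */
    float celsius_f = (5.0F / 9.0F) * (fahr - 32.0F);

    printf("double result: %.17f\n", celsius_d);
    printf("float  result: %.17f\n", celsius_f);
    return 0;
}

Both print roughly 37.78, but the float result carries fewer correct digits. Conversely, if fahr were declared as a double, the F-suffixed constants would be promoted to double and the second expression would also yield a double.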
I think the reason is to ensure that any result can be represented, so the natural choice is double, as it is the larger of the two floating-point types.