I'm learning C from K&R (Second Edition) and am confused by one of the book's early examples. In section 1.5.2, the book first exhibits a character-counting program that looks like this:
#include <stdio.h>
/* count characters in input; 1st version */
main()
{
    long nc;

    nc = 0;
    while (getchar() != EOF)
        ++nc;
    printf("%ld\n", nc);
}
and then remarks:

It may be possible to cope with even bigger numbers by using a double

and exhibits this alternative version of the program:
#include <stdio.h>
/* count characters in input; 2nd version */
main()
{
    double nc;

    for (nc = 0; getchar() != EOF; ++nc)
        ;
    printf("%.0f\n", nc);
}
Does using a double here make any sense? It doesn't seem to; surely a long long would be superior, since it can store bigger integers than a double can (without loss of precision) in the same space, and it conveys at declaration time that the variable is an integer, which helps readability.

Is there some justification for using a double here that I'm missing, or is the K&R example just plain bad code that's been shoehorned in to demonstrate the double type?
Basically, you cannot lose precision when assigning an int to a double, because double has 53 bits of precision (52 explicit mantissa bits plus one implicit bit), which is enough to hold all int values exactly. But float has only 24 bits of precision, so it cannot exactly represent all int values larger than 2^24.

The integer types (char, short, int, long) can only store whole numbers, nothing after the decimal point, but they have the advantage of being stored exactly. You certainly can store an integer value in a double; nothing prohibits it. Where precision loss does appear is in fractional arithmetic: for example, a calculation whose output should have been 20.20 (20 dollars and 20 cents) can come out as 20.19999999999996, because 0.20 has no exact binary representation.
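A minimal sketch of both effects (assuming IEEE-754 float and double, which is near-universal; the numbers are chosen to sit just past float's exact range):

#include <stdio.h>

int main(void)
{
    /* double holds any 32-bit int exactly; float starts rounding
       above 2^24 */
    int big = 16777217;             /* 2^24 + 1 */
    double d = big;                 /* stored exactly */
    float f = (float) big;          /* rounds to 16777216 */
    printf("int: %d  double: %.0f  float: %.0f\n", big, d, f);

    /* accumulating 0.20 drifts, because 0.20 has no exact binary
       representation */
    double total = 0.0;
    int i;
    for (i = 0; i < 101; i++)
        total += 0.20;
    printf("101 * 0.20 = %.17g\n", total);  /* near, not exactly, 20.2 */
    return 0;
}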
double vs. long

Is there any rational reason to use a double to store an integer when precision loss isn't acceptable? [...] Does using a double here make any sense?

Even in C2011, type long may have as few as 31 value bits, so its range of representable values may be as small as -2^31 to 2^31 - 1 (supposing two's complement representation; slightly narrower with sign/magnitude representation).
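You can check what your own implementation provides; a minimal sketch using limits.h:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* On an ILP32 platform this prints 2147483647 (2^31 - 1);
       on an LP64 platform, 9223372036854775807 (2^63 - 1). */
    printf("LONG_MAX = %ld\n", LONG_MAX);
    printf("sizeof(long) = %zu bytes\n", sizeof(long));
    return 0;
}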
C does not specify the details of the representation of floating-point values, but IEEE-754 representation is near-universal nowadays. C doubles are almost always represented in IEEE-754 binary double-precision format, which provides 53 bits of mantissa. That format can exactly represent all integers from -(2^53 - 1) to 2^53 - 1, and arithmetic involving those numbers will be performed exactly if it is performed according to IEEE specifications and if the mathematical result and all intermediate values are exactly representable integers (and sometimes even when not).

So using double instead of long could indeed yield a much greater numeric range without sacrificing precision.
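A short demonstration of that boundary (assuming IEEE-754 doubles):

#include <stdio.h>

int main(void)
{
    double max_exact = 9007199254740992.0;   /* 2^53 */

    /* Every integer up to 2^53 is exactly representable... */
    printf("%.0f\n", max_exact - 1.0);   /* 9007199254740991 */

    /* ...but beyond 2^53 consecutive doubles are 2 apart, so
       adding 1 is lost to rounding: */
    printf("%.0f\n", max_exact + 1.0);   /* still 9007199254740992 */
    return 0;
}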
double vs. long long

surely a long long would be superior [...]

long long has a larger range of (exactly) representable integer values than double, so there is little reason to prefer double over long long for integers if the latter is available. However, as has been observed in comments, long long did not exist in 1978 when the first edition of K&R was published, and it was far from standard even in 1988 when the second edition was published. Therefore, long long was not among the alternatives Kernighan and Ritchie were considering. Indeed, although many C90 compilers eventually supported it as an extension, long long was not standardized until C99.
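With a C99 or later compiler, the counting program can simply use long long; a minimal sketch of that version:

#include <stdio.h>

/* count characters in input; long long version (requires C99) */
int main(void)
{
    long long nc = 0;

    while (getchar() != EOF)
        ++nc;
    printf("%lld\n", nc);
    return 0;
}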
In any case, I'm inclined to think that the remark that confused you was not so much an endorsement of using double for the purpose as a sidebar comment about the comparative range of double.
On an old 32-bit computer, using long long is more expensive than double, because each 64-bit integer addition must be computed with two CPU instructions: ADD followed by ADC (add with carry). With double, a single FPU addition is enough to increment the counter. And per the IEEE-754 standard, double has 53 bits of precision (1 sign bit + 11 exponent bits + 52 explicit mantissa bits plus 1 implicit bit), which is enough to represent any integer in the range [-2^53, 2^53], inclusive.

On a 64-bit computer, long long is usually better, but there can still be situations where double performs faster: for example, with hyper-threading enabled, the FPU and the integer unit can be kept busy by different threads at the same time.
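A rough C sketch of the two-step carry propagation a 32-bit CPU performs for one 64-bit addition (illustrative only; the function and names here are mine, not compiler output):

#include <stdint.h>
#include <stdio.h>

/* Add two 64-bit values given as 32-bit halves: add the low words
   (ADD), then add the high words plus the carry (ADC). */
static uint64_t add64(uint32_t alo, uint32_t ahi,
                      uint32_t blo, uint32_t bhi)
{
    uint32_t lo = alo + blo;              /* ADD: low halves */
    uint32_t carry = lo < alo;            /* did the low add wrap? */
    uint32_t hi = ahi + bhi + carry;      /* ADC: high halves + carry */
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    /* 0x00000000FFFFFFFF + 1 = 0x0000000100000000 */
    printf("%llu\n", (unsigned long long)add64(0xFFFFFFFFu, 0u, 1u, 0u));
    return 0;
}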