Years ago I learned the hard way about precision problems with floats, so I quit using them. However, I still run into code using floats, and it makes me cringe because I know some of the calculations will be inaccurate.
So, when is it appropriate to use a float?
EDIT: As info, I don't think that I've come across a program where the accuracy of a number isn't important. But I would be interested in hearing examples.
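For concreteness, here is a minimal C# sketch (the names and loop count are just illustrative) of the kind of creeping inaccuracy I mean:

```csharp
using System;

class FloatDrift
{
    static void Main()
    {
        float fSum = 0f;
        double dSum = 0d;

        // Add 0.1 ten thousand times; the exact answer is 1000.
        // Neither type stores 0.1 exactly, but float's rounding
        // error accumulates fast enough to be visible.
        for (int i = 0; i < 10_000; i++)
        {
            fSum += 0.1f;
            dSum += 0.1;
        }

        Console.WriteLine(fSum); // drifts noticeably from 1000
        Console.WriteLine(dSum); // off only in the far decimal places
    }
}
```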
double is more precise than float: it is stored in 64 bits, double the number of bits a float gets. That extra precision makes double the better choice for values with many significant digits. For example, to store the annual salary of a company's CEO down to the cent, double is the more accurate choice.
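To make that concrete, here is a small sketch (the salary figure is made up) showing float dropping a cent that double keeps:

```csharp
using System;

class SalaryPrecision
{
    static void Main()
    {
        // A large salary with cents needs ~9 significant decimal
        // digits; float only guarantees about 7, double about 15.
        float salaryF = 1_234_567.89f;
        double salaryD = 1_234_567.89;

        Console.WriteLine(salaryF.ToString("F2")); // typically 1234567.88 -- cents already wrong
        Console.WriteLine(salaryD.ToString("F2")); // 1234567.89
    }
}
```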
double has higher precision, whereas floats take up less memory and can be faster when memory bandwidth matters (large arrays, SIMD). For individual operations on typical modern computers, double is just as fast as float, so only reach for float when you have checked that its accuracy is good enough and the memory savings actually buy you something.
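As a sketch of the memory half of that trade-off (the array length is arbitrary):

```csharp
using System;

class SizeDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(float));  // 4 bytes
        Console.WriteLine(sizeof(double)); // 8 bytes

        // For large arrays that factor of two is real memory and
        // bandwidth: 10 million elements is ~38 MiB vs ~76 MiB.
        float[] a = new float[10_000_000];
        double[] b = new double[10_000_000];
        Console.WriteLine(a.Length * sizeof(float));  // 40000000 bytes
        Console.WriteLine(b.Length * sizeof(double)); // 80000000 bytes
    }
}
```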
Floats have been a core data type since the early days of computer programming, holding numbers that include fractional parts. Understanding how floats behave is vital when converting between types, when declaring variables, and when using those variables throughout a codebase.
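For instance, C#'s conversion rules treat float-to-double as safe but require an explicit cast the other way (a minimal sketch):

```csharp
class Conversions
{
    static void Main()
    {
        float f = 1.5f;      // float literals need the 'f' suffix
        double d = f;        // widening float -> double is implicit
        // float g = d;      // does not compile: narrowing must be explicit
        float g = (float)d;  // explicit cast, may lose precision
        System.Console.WriteLine(g);
    }
}
```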
If you are working with whole numbers, especially large ones that must stay exact, or if you purposefully want to force integer values with no fractional part, int is the way to go. If you don't care about some loss of precision and want a much wider range of values, you can use float instead.
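A quick sketch of both edges of that advice (2^24 is where float stops representing every whole number):

```csharp
using System;

class IntVsFloat
{
    static void Main()
    {
        // int is exact, but limited to roughly +/- 2.1 billion.
        Console.WriteLine(int.MaxValue);    // 2147483647

        // float covers a far wider range, but above 2^24 it can no
        // longer represent every whole number:
        float big = 16_777_216f;            // 2^24
        Console.WriteLine(big + 1f == big); // True: 16777217 is skipped
    }
}
```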
Short answer: you should only use a float when you know exactly what you're doing and why.
Long answer: floats (as opposed to doubles) aren't really used anymore outside 3D APIs, as far as I know. Floats and doubles have the same performance characteristics on modern CPUs; doubles are somewhat bigger, and that's all. If in doubt, just use double.
Oh yes, and use decimal for financial calculations, of course.
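The classic demonstration of why (the printed output assumes modern .NET's round-trip formatting):

```csharp
using System;

class MoneyMath
{
    static void Main()
    {
        // Binary floating point cannot represent 0.1 or 0.2 exactly:
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False
        Console.WriteLine(0.1 + 0.2);           // 0.30000000000000004

        // decimal stores base-10 digits, so money arithmetic behaves
        // the way accountants expect:
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
    }
}
```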