What difference does it make when I use the float and decimal data types in MySQL?
When should I use which?
A FLOAT is a 32-bit IEEE 754 single-precision floating-point number: 1 bit for the sign, 8 bits for the exponent, and 23 bits for the value, giving about 7 decimal digits of precision. A DOUBLE is a 64-bit IEEE 754 double-precision floating-point number: 1 bit for the sign, 11 bits for the exponent, and 52 bits for the value, giving about 15 decimal digits of precision.
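For a quick feel for the difference, here is a minimal sketch; the table name float_vs_double is made up for illustration, and the exact digits the client prints can vary with the server version:

mysql> create table float_vs_double (f float, d double);
mysql> insert into float_vs_double values (1.23456789012345, 1.23456789012345);
mysql> select f, d from float_vs_double;
-- f keeps only about 7 significant digits (something like 1.2345679),
-- while d preserves about 15 (1.23456789012345)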
The main difference is that FLOAT and DOUBLE are binary floating-point types, while DECIMAL stores the value as exact decimal digits (a fixed-point type). So DECIMAL represents decimal fractions exactly, which is why it is usually used in monetary (financial) applications that require a high degree of accuracy.
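To see why that exactness matters for money, here is a small sketch; the money_demo table and its columns are invented names:

mysql> create table money_demo (price_dec decimal(10,2), price_flt float);
mysql> insert into money_demo values (0.10, 0.10), (0.20, 0.20), (0.30, 0.30);
mysql> select sum(price_dec), sum(price_flt) from money_demo;
-- sum(price_dec) is exactly 0.60;
-- sum(price_flt) typically comes back slightly off (e.g. 0.6000000163912773),
-- because none of 0.1, 0.2, 0.3 has an exact binary representation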
Integers and floats are two different kinds of numeric data. An integer (more commonly called an int) is a number without a decimal point. A float is a floating-point number, which means it is a number that has a decimal place. Floats are used when fractional values are needed.
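A minimal illustration of that distinction, with a made-up int_vs_float table; note that under the default sql_mode MySQL rounds, rather than truncates, when a fractional value is stored into an INT column:

mysql> create table int_vs_float (i int, f float);
mysql> insert into int_vs_float values (7.5, 7.5);
mysql> select i, f from int_vs_float;
-- i = 8 (the fractional part cannot be stored, so the value is rounded),
-- f = 7.5 (the fraction is kept)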
This is what I found when I had the same question.
mysql> create table numbers (a decimal(10,2), b float);
mysql> insert into numbers values (100, 100);
mysql> select @a := (a/3), @b := (b/3), @a * 3, @b * 3 from numbers \G
*************************** 1. row ***************************
@a := (a/3): 33.333333333
@b := (b/3): 33.333333333333
     @a * 3: 99.999999999000000000000000000000
     @b * 3: 100
The DECIMAL did exactly what it is supposed to do in this case: it truncated the rest, thus losing the 1/3 part.
So for sums DECIMAL is better, but for divisions FLOAT is better, up to a point, of course. Using DECIMAL will not give you "fail-proof arithmetic" by any means.
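As a rough illustration of that limit, assuming MySQL's default div_precision_increment of 4:

mysql> select 100/3;                            -- 33.3333: the division already truncates
mysql> select cast(100/3 as decimal(10,2)) * 3; -- 99.99: the lost 1/3 never comes back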
Hope this helps.