I am looking for a division result that is extremely accurate.
This SQL returns the following results (ResultDecimal comes back as 0.600000, ResultFloat as 0.599999991926864):
SELECT (CAST(297282.26 AS DECIMAL(38, 30)) / CAST(495470.44 AS DECIMAL(38, 30))) AS ResultDecimal
SELECT (CAST(297282.26 AS FLOAT) / CAST(495470.44 AS FLOAT)) AS ResultFloat
Here is the accurate result from WolframAlpha: http://www.wolframalpha.com/input/?i=297282.26%2F495470.44
I was under the impression that DECIMAL would be more accurate than FLOAT:
"Because of the approximate nature of the float and real data types, do not use these data types when exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks. Instead, use the integer, decimal, money, or smallmoney data types."
https://technet.microsoft.com/en-us/library/ms187912(v=sql.105).aspx
Why does the FLOAT calculation give me a result more accurate than when using DECIMAL?
In terms of precision: float is less precise than double, and double is less precise than decimal.
Any data type has a limited number of bits, and those bits cannot exactly represent a value that needs more of them. A single-precision float has 24 bits of significand, which is equivalent to only about 7 decimal digits.
Floating-point decimal values generally do not have an exact binary representation; this is a side effect of how the CPU represents floating-point data. For this reason you may experience some loss of precision, and some floating-point operations may produce unexpected results.
Decimal is more precise than double because it carries more bits of precision.
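As a quick illustration of that inexact binary representation (my own sketch; the exact digits printed can vary), round-trip 0.1 through SQL Server's single-precision REAL type:
-- 0.1 has no finite binary expansion, so REAL stores the nearest representable
-- value instead: approximately 0.100000001490116, not exactly 0.1
SELECT CAST(CAST(0.1 AS REAL) AS DECIMAL(20, 18)) AS StoredValue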
I found the best precision is achieved when you use:
SELECT (CAST(297282.26 AS DECIMAL(15, 9)) / CAST(495470.44 AS DECIMAL(24, 2))) AS ResultDecimal
This gives a result of
0.599999991926864496699338915153
I think the actual value (to 100 digits) is:
0.5999999919268644966993389151530412187657451370862810705720405842980259326873264124495499670979362562...
Please bear in mind SQL Server defines the maximum precision and scale for division as:
max precision = (p1 - s1 + s2) + MAX(6, s1 + p2 + 1) -- up to 38
max scale = MAX(6, s1 + p2 + 1)
Where p1 & p2 are the precision of the two numbers and s1 & s2 are the scale of the numbers.
In this case the maximum precision is (15 - 9 + 2) + MAX(6, 9 + 24 + 1) = 8 + 34 = 42.
However SQL Server only allows a maximum precision of 38.
The maximum scale = MAX(6, 9 + 24 + 1) = 34.
Because the precision overflows the cap by 4 (42 - 38), SQL Server reduces the scale by the same 4 digits to preserve the integral part, leaving a scale of 34 - 4 = 30. That is exactly why the result above has 30 digits after the decimal point.
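If you want to check the type SQL Server actually infers for a division, one way (a sketch using sys.dm_exec_describe_first_result_set, available from SQL Server 2012 onward) is:
SELECT name, system_type_name
FROM sys.dm_exec_describe_first_result_set(
    N'SELECT (CAST(297282.26 AS DECIMAL(15, 9)) / CAST(495470.44 AS DECIMAL(24, 2))) AS ResultDecimal',
    NULL, 0);
-- system_type_name should come back as decimal(38,30): precision capped at 38,
-- scale reduced from 34 to 30, matching the 30 digits in the result above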
Hopefully you already understand that just because the FLOAT version presents more numbers after the decimal point, that doesn't necessarily mean those are the true numbers. This is about precision, not accuracy.
It is the CAST function itself that causes this loss of precision, not the difference between the FLOAT and DECIMAL data types.
To demonstrate this, compare your previous results to the result of this:
SELECT 297282.26 / 495470.44 AS ResultNoCast
In my version of the query, the presence of a decimal point in the literal numbers tells SQL Server to treat the values as the DECIMAL datatype, with the precision and scale determined by the server. The result is more precise than when you CAST explicitly to DECIMAL.
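Working that through with the division rules quoted above: each literal is DECIMAL(8, 2), so the result type has precision (8 - 2 + 2) + MAX(6, 2 + 8 + 1) = 19 and scale MAX(6, 2 + 8 + 1) = 11. No cap is hit, so the quotient comes back rounded to 11 decimal places: 0.59999999193.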
A clue to the reason for this can be found hidden in the official documentation of the CAST function, under Truncating and Rounding Results:
When you convert data types that differ in decimal places, sometimes the result value is truncated and at other times it is rounded. The following table shows the behavior.
From | To | Behavior
numeric | numeric | Round
So the fact that each separate literal value is treated as a NUMERIC (the same thing as DECIMAL) on the way in, and is then cast to NUMERIC, causes rounding.
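A minimal sketch of that rounding behavior (my own example, not from the documentation):
SELECT CAST(0.599999 AS DECIMAL(7, 2)) AS Rounded
-- a numeric-to-numeric conversion rounds rather than truncates,
-- so this returns 0.60, not 0.59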
Anticipating your next question a little: if you want a more precise result from the NUMERIC/DECIMAL datatype, you just need to tell SQL Server that each component of the calculation is more precise:
SELECT 297282.26000000 / 495470.44000000 AS ResultSuperPrecise
This appears (from experimentation) to be the most precise I can get: either adding or removing a 0 from either the numerator or denominator makes the result less precise. The division rules quoted earlier explain the 23 digits: each literal here is DECIMAL(14, 8), so the result has precision (14 - 8 + 8) + MAX(6, 8 + 14 + 1) = 37 and scale MAX(6, 8 + 14 + 1) = 23, just under the 38 precision cap. Remove a zero and the scale formula drops to MAX(6, 7 + 13 + 1) = 21; add one and the raw precision overflows 38, which forces the engine to cut the result back and costs accuracy in the last digits.
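To reproduce the overflow side of that experiment (a follow-up query of my own):
SELECT 297282.260000000 / 495470.440000000 AS ResultOneZeroMore
-- each literal is now DECIMAL(15, 9); the raw precision
-- (15 - 9 + 9) + MAX(6, 9 + 15 + 1) = 40 exceeds the 38 cap,
-- so the scale is trimmed and the final digits lose accuracy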
It doesn't give you a more accurate result. I say that because float is an approximate type, and not every value can be stored exactly in a float. Be careful comparing the two maximums, though: the maximum precision of decimal/numeric is 38, measured in decimal digits (https://msdn.microsoft.com/en-us/library/ms187746.aspx), while the maximum precision of float is 53, measured in bits of mantissa, which works out to only about 15 significant decimal digits (https://msdn.microsoft.com/en-us/library/ms173773.aspx).
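If you want to see every digit a float actually carries (a sketch; CONVERT style 3 for float needs SQL Server 2016 or later):
SELECT CONVERT(VARCHAR(64), CAST(297282.26 AS FLOAT) / CAST(495470.44 AS FLOAT), 3) AS FloatAllDigits
-- style 3 always prints 17 significant digits; anything past the
-- 15th or 16th digit is an artifact of the binary representation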