MSDN says this about the precision and scale of a decimal multiplication result:
- The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.
So when we execute this:
DECLARE @a DECIMAL(18,9)
DECLARE @b DECIMAL(19,9)
SET @a = 1.123456789
SET @b = 1
SELECT @a * @b
the result is 1.123456789000000000 (nine trailing zeros), and we see that nothing is truncated: the result precision is 18 + 19 + 1 = 38 (the upper limit) and the scale is 9 + 9 = 18.
When we raise the precision of @a to 27, we lose all the trailing zeros and the result is just 1.123456789. Going further, truncation turns into rounding: raising the precision of @a to 28 yields 1.12345679 (8 decimal digits).
The interesting thing is that at some point, with precision equal to 30, the result becomes 1.123457 (6 decimal digits) and stops changing: precisions 31, 32, and everything up to 38 give the same result. How can this be explained?
Decimal and numeric operation results have a minimum scale of 6. This is stated in the table in the MSDN documentation for division, but the same floor applies to multiplication whenever the scale has to be truncated, as in your example.
This behavior is described in more detail on the sqlprogrammability blog.
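The documented rule (result precision = p1 + p2 + 1, result scale = s1 + s2, and when the precision exceeds 38 the scale is reduced by the overflow but never below 6) can be reproduced outside SQL Server. Here is a minimal Python sketch of that rule; the function name `multiply_result_type` is my own, not anything from SQL Server.

```python
def multiply_result_type(p1, s1, p2, s2):
    """Result precision/scale of DECIMAL(p1,s1) * DECIMAL(p2,s2),
    following SQL Server's documented rules: precision and scale are
    capped at 38, and when truncating, the scale is not reduced below 6
    (or below the original scale, if that was already under 6)."""
    precision = p1 + p2 + 1
    scale = s1 + s2
    if precision > 38:
        # Shrink the scale by the overflow to preserve the integral part,
        # but keep at least 6 fractional digits.
        scale = max(scale - (precision - 38), min(scale, 6))
        precision = 38
    return precision, scale

# Reproduce the observations from the question (@b is DECIMAL(19,9)):
for p1 in (18, 27, 28, 30, 31, 38):
    print(p1, multiply_result_type(p1, 9, 19, 9))
```

With @a at precision 18 the scale stays 18 (the nine trailing zeros); at 27 it drops to 9; at 28 to 8; and from 30 onward the floor of 6 kicks in, which is why the result freezes at 1.123457.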