Why is SQL Server losing precision on decimal division?
Sorry about the format, I'm posting this from my cellphone...
declare @base decimal(38,10) = -3062.325,
        @div  decimal(38,10) = 25812561468.9017,
        @mult decimal(38,10) = 103177265901.524;
select @base / @div * @mult;
Result: 0
declare @base decimal(28,10) = -3062.325,
        @div  decimal(28,10) = 25812561468.9017,
        @mult decimal(28,10) = 103177265901.524;
select @base / @div * @mult;
Result: -12236.823736
declare @base float = -3062.325,
        @div  float = 25812561468.9017,
        @mult float = 103177265901.524;
select @base / @div * @mult;
Result: -12240.6418744047
The weirdest thing is that float returns the same result as Excel and Wolfram Alpha ...
Excel result: -12240.64187
In SQL Server, the precision and scale of a decimal result depend on the precision and scale of the operands:

"The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, it is reduced to 38, and the corresponding scale is reduced to try to prevent the integral part of a result from being truncated. In some cases such as multiplication or division, scale factor will not be reduced in order to keep decimal precision, although the overflow error can be raised."
Here both operands of the division are decimal(38,10), so those rules type the quotient as decimal(38,6). At scale 6, -3062.325 / 25812561468.9017 (roughly -1.19e-7) rounds to 0.000000, and multiplying zero by @mult leaves 0. With decimal(28,10) operands the quotient keeps scale 10, which still preserves only about four significant digits of the quotient (hence -12236.82 instead of -12240.64), while float carries roughly 15 significant digits through the whole expression, which is why it agrees with Excel and Wolfram Alpha.

See Precision, scale, and length (Transact-SQL) in the SQL Server documentation.
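A quick way to see this from inside SQL Server is to inspect the type of the intermediate quotient with SQL_VARIANT_PROPERTY, and to reorder the expression so the multiplication happens before the division. A sketch, reusing the variable names from the question:

```sql
declare @base decimal(38,10) = -3062.325,
        @div  decimal(38,10) = 25812561468.9017,
        @mult decimal(38,10) = 103177265901.524;

-- The quotient of two decimal(38,10) values is typed decimal(38,6)
select sql_variant_property(@base / @div, 'Precision') as p,  -- 38
       sql_variant_property(@base / @div, 'Scale')     as s;  -- 6

-- At scale 6, a quotient of about -1.19e-7 rounds to 0.000000
select @base / @div;

-- Workaround: multiply first so the intermediate value stays large
-- enough to survive the scale reduction (should give about -12240.64)
select @base * @mult / @div;
```

Reordering works because the product -3062.325 * 103177265901.524 is around -3.2e14, so truncating its scale to 6 loses almost nothing, whereas truncating the tiny quotient loses everything.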