I did some testing with floating point calculations to minimize precision loss and stumbled across a phenomenon I want to show here and hopefully get an explanation for.
When I write
print 1.0 / (1.0 / 60.0)
the result is
60.0024000960
When I write the same formula and do explicit casting to float
print cast(1.0 as float) / (cast(1.0 as float) / cast(60.0 as float))
the result is
60
Until now I thought that numeric literals with decimal places are automatically treated as float values with the appropriate precision. Casting to real shows the same result as casting to float. Why do the casts to float get better precision (which sounds like irony to me :)?

SQL Server uses the smallest possible datatype.
When you run this script
SELECT SQL_VARIANT_PROPERTY(1.0, 'BaseType')
SELECT SQL_VARIANT_PROPERTY(1.0, 'Precision')
SELECT SQL_VARIANT_PROPERTY(1.0, 'Scale')
SELECT SQL_VARIANT_PROPERTY(1.0, 'TotalBytes')
you'll see that SQL Server implicitly uses a NUMERIC(2, 1) datatype for the constant 1.0.
The division by 60.0 converts the result to NUMERIC(8, 6).
The final calculation converts the result to NUMERIC(17, 10).
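You can confirm those intermediate types the same way; here is a quick sketch using SQL_VARIANT_PROPERTY on the sub-expressions (the expected precision and scale are the values stated above, which follow SQL Server's rules for decimal division):
SELECT SQL_VARIANT_PROPERTY(1.0 / 60.0, 'BaseType')            -- numeric
SELECT SQL_VARIANT_PROPERTY(1.0 / 60.0, 'Precision')           -- 8
SELECT SQL_VARIANT_PROPERTY(1.0 / 60.0, 'Scale')               -- 6
SELECT SQL_VARIANT_PROPERTY(1.0 / (1.0 / 60.0), 'Precision')   -- 17
SELECT SQL_VARIANT_PROPERTY(1.0 / (1.0 / 60.0), 'Scale')       -- 10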
Edit
Taken from SQL Server Books Online, Data Type Conversion:
In Transact-SQL statements, a constant with a decimal point is automatically converted into a numeric data value, using the minimum precision and scale necessary. For example, the constant 12.345 is converted into a numeric value with a precision of 5 and a scale of 3.
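To see this for the example from Books Online, and to show an alternative to the explicit casts, here is a small sketch (it relies on the documented behaviour that constants written in scientific notation are treated as float):
SELECT SQL_VARIANT_PROPERTY(12.345, 'BaseType')    -- numeric
SELECT SQL_VARIANT_PROPERTY(12.345, 'Precision')   -- 5
SELECT SQL_VARIANT_PROPERTY(12.345, 'Scale')       -- 3

-- Constants with an exponent are float, so no casts are needed here:
print 1e0 / (1e0 / 60e0)   -- 60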