In SQL Server you can use FLOAT or REAL to store floating-point values, whose storage format is clearly defined by the IEEE 754 standard. For fixed-point values we can use the DECIMAL type (which has the synonym NUMERIC). However, I'm not sure how SQL Server stores DECIMAL values internally. For example, if I define a table and insert a row like this:
IF OBJECT_ID('dbo.test_number_types') IS NOT NULL DROP TABLE dbo.test_number_types;
CREATE TABLE dbo.test_number_types
(
id INT IDENTITY(1, 1),
c1 NUMERIC(5, 4)
)
GO
INSERT INTO dbo.test_number_types (c1) VALUES (5.7456);
When I used the DBCC PAGE command to check how SQL Server stores the number 5.7456, I got this:
01 70 E0 00 00
This hex string should be little-endian. I can't figure out how SQL Server turns 5.7456 into 01 70 E0 00 00, or how it decides how many bytes hold the integral part and how many hold the fractional part. Can anyone help?
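For anyone who wants to reproduce this, a dump like the one above can be obtained roughly as follows (the database name, file id, and page id here are placeholders, and DBCC IND / DBCC PAGE are undocumented commands, so details may vary by version):

DBCC TRACEON(3604);                                    -- route DBCC output to the client
DBCC IND('my_database', 'dbo.test_number_types', -1);  -- list the pages allocated to the table
DBCC PAGE('my_database', 1, 12345, 3);                 -- dump one data page: (db, file id, page id, print option)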
BTW, I've checked the book "SQL Server 2012 Internals". There is a chapter dedicated to data type storage, but it seems DECIMAL storage is not mentioned in the book.
Martin Smith's comment gives you the clue. SQL Server does not use BCD.
It stores the data as a whole number, without the decimal point (which it can do because the scale is stored in the metadata for the column). So 5.7456 is stored as 57456, or 0xE070. After SQL Server's infamous byte swapping this becomes 70 E0 00 00.
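You can check the arithmetic with a quick sketch (this is not SQL Server's own decoding routine, just the byte swap and the scale applied by hand):

-- Decode the value bytes 70 E0 00 00 by hand: reversing the byte order gives
-- 0x0000E070 = 57456, and NUMERIC(5, 4) has scale 4, so divide by 10^4.
DECLARE @unscaled INT = 0x0000E070;   -- 57456, the stored whole number
DECLARE @scale    INT = 4;            -- the scale declared in NUMERIC(5, 4)
SELECT @unscaled AS unscaled_value,
       CAST(@unscaled / POWER(10.0, @scale) AS NUMERIC(5, 4)) AS decoded_value;  -- 5.7456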
The leading 01 is the sign byte: 01 is used for positive numbers, 00 for negatives.
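If you'd rather not dig through page dumps, casting the value to VARBINARY is a convenient (if undocumented) way to peek at the internal representation; the exact layout of that cast isn't documented, but the sign byte and the little-endian unscaled value should be recognizable in it:

-- Undocumented but handy: compare the binary forms of a positive and a negative value.
-- Expect a short header plus the sign and the little-endian unscaled value; don't rely
-- on the exact layout, since it isn't documented.
SELECT CONVERT(VARBINARY(20), CAST( 5.7456 AS NUMERIC(5, 4))) AS positive_bytes,
       CONVERT(VARBINARY(20), CAST(-5.7456 AS NUMERIC(5, 4))) AS negative_bytes;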
(However, I must ask: why do you need this? In typical use, you should never need to bother with SQL Server's internals.)