What's the difference between the SQL datatypes NUMERIC and DECIMAL?
If databases treat these differently, I'd like to know how, at least for the major implementations.
Furthermore, are there any differences in how database drivers interpret these types?
From the SQL Server documentation: in Transact-SQL, numeric is functionally equivalent to the decimal data type. Use the decimal data type to store numbers with decimals when the data values must be stored exactly as specified. The behavior of float and real follows the IEEE 754 specification on approximate numeric data types.
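To see what "stored exactly as specified" means in practice, here is a quick T-SQL illustration of my own (not from the documentation): float arithmetic is approximate, while the same arithmetic on decimal values is exact.

-- float (IEEE 754) is approximate: 0.1 + 0.2 does not compare equal to 0.3
-- decimal is exact: the same comparison holds
SELECT
    CASE WHEN 0.1E0 + 0.2E0 = 0.3E0
         THEN 'equal' ELSE 'not equal' END AS float_result,    -- 'not equal'
    CASE WHEN CAST(0.1 AS DECIMAL(3,1)) + CAST(0.2 AS DECIMAL(3,1)) = CAST(0.3 AS DECIMAL(3,1))
         THEN 'equal' ELSE 'not equal' END AS decimal_result;  -- 'equal'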
They are the same for almost all purposes.
At one time different vendors used different names (NUMERIC/DECIMAL) for almost the same thing. SQL-92 made them the same, with one minor difference which can be vendor-specific:
NUMERIC must be exactly as precise as it is defined: if you define 4 digits to the left of the decimal point and 4 digits to the right of it, the DB must always store 4 + 4 digits, no more, no less.
DECIMAL is free to allow larger numbers if that's easier to implement, meaning the database may actually store more digits to the left of the decimal point than specified (because the behind-the-scenes storage happens to have room for them). In the 4 + 4 example above, the database might allow storing 12345.0000, but storing 1.00005 is still not allowed, because exceeding the declared scale could affect the exactness of future calculations.
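As a concrete sketch of that rule (the table and column names are my own; note that most real systems, including SQL Server and Postgres, do not exercise the DECIMAL latitude, so the over-precision insert fails for both columns there):

CREATE TABLE t (n NUMERIC(8,4), d DECIMAL(8,4)); -- 4 digits on each side of the point

INSERT INTO t (n) VALUES (1234.5678); -- fits the declaration exactly
INSERT INTO t (n) VALUES (12345.0);   -- 5 digits to the left: NUMERIC must reject this
INSERT INTO t (d) VALUES (12345.0);   -- SQL-92 permits DECIMAL to accept this, though
                                      -- SQL Server and Postgres reject it as well
INSERT INTO t (d) VALUES (1.00005);   -- exceeds the declared scale: never stored as-is
                                      -- (typically rounded, e.g. to 1.0001)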
Most current database systems treat DECIMAL and NUMERIC either as perfect synonyms, or as two distinct types with exactly the same behavior. If the types are considered distinct at all, you might not be able to define a foreign key constraint on a DECIMAL column referencing a NUMERIC column, or vice versa.
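If you want to know which camp your system falls into, a test along these lines (a sketch with hypothetical table names) will tell you:

-- If DECIMAL and NUMERIC are distinct types in your DBMS, the foreign key
-- below may be rejected; if they are treated as synonyms, it is accepted.
CREATE TABLE parent (id NUMERIC(10,0) PRIMARY KEY);
CREATE TABLE child  (parent_id DECIMAL(10,0) REFERENCES parent (id));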
They are synonyms, no difference at all, at least on SQL Server. Another answer here shows a minor difference in the ANSI standard, but I suspect implementations treat them the same.
Postgres: No difference. The description in Table 8.1 of the documentation looks the same for both, and it is not explained why they are listed separately. According to a Tom Lane mailing-list post:
There isn't any difference, in Postgres. There are two type names because the SQL standard requires us to accept both names. In a quick look in the standard it appears that the only difference is this:
17) NUMERIC specifies the data type exact numeric, with the decimal precision and scale specified by the <precision> and <scale>.
18) DECIMAL specifies the data type exact numeric, with the decimal scale specified by the <scale> and the implementation-defined decimal precision equal to or greater than the value of the specified <precision>.
ie, for DECIMAL the implementation is allowed to allow more digits than requested to the left of the decimal point. Postgres doesn't exercise that freedom so there's no difference between these types for us.
regards, tom lane
Also, a page lower, the docs state clearly that:
The types decimal and numeric are equivalent. Both types are part of the SQL standard.
And in the aliases table, decimal [ (p, s) ] is listed as an alias for numeric [ (p, s) ].
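You can see the alias at work directly; this small check (my own sketch) shows that a value cast to decimal comes back as the numeric type:

-- In Postgres, decimal is parsed as an alias for numeric,
-- so both casts report the same underlying type.
SELECT pg_typeof(1.23::decimal(5,2)) AS decimal_type, -- numeric
       pg_typeof(1.23::numeric(5,2)) AS numeric_type; -- numeric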
They are actually equivalent, but they are independent types, and not technically synonyms in the way ROWVERSION and TIMESTAMP are (though they may have been referred to as synonyms in the documentation at one time). That is a slightly different meaning of "synonym": they are indistinguishable except in name, rather than one being an alias for the other. Ironic, right?
What I interpret from the wording in MSDN is actually: These types are identical, they just have different names.
Other than the type_id values, everything here is identical:
SELECT * FROM sys.types WHERE name IN (N'numeric', N'decimal');
I have absolutely no knowledge of any behavioral differences between the two, and going back to SQL Server 6.5, have always treated them as 100% interchangeable.
From a comment: for DECIMAL(18,2) and NUMERIC(18,2), is assigning one to the other technically a "conversion"?
Only if you do so explicitly. You can prove this easily by creating a table and then inspecting the query plan for queries that perform explicit conversions, or that you might expect to perform implicit ones. Here's a simple table:
CREATE TABLE [dbo].[NumDec]
(
[num] [numeric](18, 0) NULL,
[dec] [decimal](18, 0) NULL
);
Now run these queries and capture the plan:
DECLARE @num NUMERIC(18,0);
DECLARE @dec DECIMAL(18,0);
SELECT
CONVERT(DECIMAL(18,0), [num]), -- conversion
CONVERT(NUMERIC(18,0), [dec]) -- conversion
FROM dbo.NumDec
UNION ALL SELECT [num],[dec]
FROM dbo.NumDec WHERE [num] = @dec -- no conversion
UNION ALL SELECT [num],[dec]
FROM dbo.NumDec WHERE [dec] = @num; -- no conversion
We have explicit conversions where we asked for them, but no implicit conversions where we might have expected them. It seems the optimizer treats them as interchangeable, too.
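For completeness, a direct assignment between the two types (a quick sketch; the variable names are my own) also runs cleanly, with no error and no conversion message:

DECLARE @d DECIMAL(18,2) = 123.45;
DECLARE @n NUMERIC(18,2) = @d; -- assigns cleanly in either direction
SELECT @n;                     -- 123.45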
Personally, I prefer to use the term DECIMAL just because it's much more accurate and descriptive. BIT is "numeric" too.