We are currently looking at setting our string columns to nvarchar(max)
rather than specifying a specific length, to avoid ever running out of room in the database to store a string. I'm just wondering whether this is a good idea or whether it could cause problems; if it were fine to do, why would anyone specify a length like nvarchar(10)
rather than nvarchar(max)?
We also use varbinary(max)
a lot, since we don't know how much binary data we will need, so I'm not sure how much of an effect that has either, given that our inserts are not as fast as I think they should be. This is an example table:
CREATE TABLE [dbo].[SAMPLETABLE] (
[ID] [uniqueidentifier] NOT NULL,
[FIELD1] [int] NOT NULL,
[FIELD2] [nvarchar] (2000) NULL,
[FIELD3] [nvarchar] (max) NULL,
[FIELD4] [uniqueidentifier] NULL,
[FIELD5] [int] NULL,
[FIELD6] [nvarchar] (2000) NULL,
[FIELD7] [varbinary] (max) NULL,
[FIELD8] [varbinary] (max) NULL,
[FIELD9] [varbinary] (max) NULL,
[FIELD10] [uniqueidentifier] NULL,
[FIELD11] [nvarchar] (2000) NULL,
[FIELD12] [varbinary] (max) NULL,
[FIELD13] [varbinary] (max) NULL,
[FIELD14] [bit] NULL,
[FIELD15] [uniqueidentifier] NULL,
[FIELD16] [varbinary] (max) NULL,
[FIELD17] [bit] NULL,
[FIELD18] [tinyint] NULL,
[FIELD19] [datetime] NULL,
[FIELD20] [nvarchar] (2000) NULL,
PRIMARY KEY CLUSTERED
(
[ID] ASC
)
) ON [PRIMARY]
GO
Given a table design like that, would changing the nvarchar(2000)
columns to nvarchar(max)
make things any worse (or better)? Does SQL Server frown upon designs like this?
From the documentation: nvarchar [ ( n | max ) ] — n defines the string size in byte-pairs, and can be a value from 1 through 4,000. max indicates that the maximum storage size is 2^30-1 characters (2 GB). The storage size is two times n bytes + 2 bytes.
So there is some additional processing needed when dealing with an NVARCHAR(MAX) value. You can create an index on a fixed-length column, but you cannot use an nvarchar(max) column as an index key. If you use a column in the WHERE clause of your queries, you should prefer a sized column over nvarchar(max).
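To make the indexing limitation concrete, here is a hypothetical sketch against the SAMPLETABLE above (index names are made up for illustration):

```sql
-- Indexing a sized column works, though SQL Server will warn that
-- nvarchar(2000) = 4,000 bytes exceeds the 900-byte key limit; the
-- index is created, but inserts of over-long values would then fail.
CREATE INDEX IX_Sample_Field2 ON dbo.SAMPLETABLE (FIELD2);

-- This, however, is rejected outright: a MAX column is not a valid
-- index key column.
-- CREATE INDEX IX_Sample_Field3 ON dbo.SAMPLETABLE (FIELD3);

-- A MAX column can still ride along as a non-key INCLUDEd column:
CREATE INDEX IX_Sample_Field2_Incl
    ON dbo.SAMPLETABLE (FIELD2) INCLUDE (FIELD3);
```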
The answer is: there is no difference between nvarchar(7) and nvarchar(4000) in terms of performance and storage size. Interestingly, though, if you change nvarchar(7) or nvarchar(4000) to nvarchar(max), there is a difference in both performance and storage size. Why does that happen?
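One reason for the difference: a MAX value is stored in-row only while it fits (roughly 8,000 bytes); larger values are moved to separate LOB pages, which costs extra I/O to follow. The storage behaviour can be seen (and forced) with a table option:

```sql
-- Force all MAX-typed values in this table off-row, so every read of
-- FIELD3/FIELD7/etc. follows a pointer to LOB pages regardless of size.
EXEC sp_tableoption 'dbo.SAMPLETABLE', 'large value types out of row', 1;
```

With the option set to 0 (the default), small MAX values stay in-row and behave much like a sized column; the performance gap appears once values spill off-row.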
If you're happy for J. Random Developer, 6 months down the line, to insert a work by Shakespeare into each column, then fine.
For me, a big part of data modelling is seriously thinking about what data I do want to allow in each column, and which data I wish to prohibit. I then apply appropriate CHECK
constraints to achieve those restrictions (as best SQL Server allows). Having a sensible length check available "for free" has always seemed like a bonus.
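As a sketch of that approach, here is a hypothetical CHECK constraint on one of the sample columns (the assumption that FIELD11 holds something like an email address, and the 320-character bound, are mine, purely for illustration):

```sql
-- Prohibit empty and implausibly long values; NULLs still pass,
-- because a CHECK only rejects rows where the predicate is FALSE.
ALTER TABLE dbo.SAMPLETABLE
    ADD CONSTRAINT CK_SampleTable_Field11_Len
    CHECK (LEN(FIELD11) BETWEEN 1 AND 320);
```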
You're also not doing much "future proofing": changing the length of an (n)varchar column to a larger value at a later date is, I believe, a pure metadata operation. So I'd say size the columns appropriately for the data you're expecting to deal with today (and, okay, for the next year or so). If you need to expand them later, it takes seconds to do.
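For example, widening one of the sample table's columns later is a one-line change (narrowing, by contrast, forces SQL Server to validate the existing data):

```sql
-- Widening nvarchar(2000) to nvarchar(4000): a quick metadata change,
-- no existing rows are rewritten.
ALTER TABLE dbo.SAMPLETABLE
    ALTER COLUMN FIELD6 nvarchar(4000) NULL;
```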
Let's hope you don't use the column for searching, or need unique values...
Index keys cannot be more than 900 bytes wide, so you can probably never create an index on an nvarchar(max) column. That is one real downside.
It can be worked around with a computed column, but then why not just store what you need?
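The computed-column workaround might look like this hypothetical sketch (the prefix length of 450 characters is chosen because 450 × 2 bytes equals the 900-byte key limit):

```sql
-- Persisted, deterministic prefix of the MAX column, narrow enough
-- to serve as an index key.
ALTER TABLE dbo.SAMPLETABLE
    ADD FIELD3_Prefix AS CAST(LEFT(FIELD3, 450) AS nvarchar(450)) PERSISTED;

CREATE INDEX IX_Sample_Field3_Prefix
    ON dbo.SAMPLETABLE (FIELD3_Prefix);
```

Searches can then seek on FIELD3_Prefix and re-check the full FIELD3 value, but as the answer says: if 450 characters is enough to search on, it was probably enough to store.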