At my company, we have a legacy database with various tables and therefore many, many fields.
A lot of the fields seem to have large limits (e.g., NVARCHAR(MAX)) that are never reached.
Does arbitrarily making the fields their maximum width or 2 to 3 times larger than what is normally inputted negatively affect performance?
How should one balance performance with field lengths? Is there a balance?
There are two parts to this question:
Does using NVARCHAR over VARCHAR hurt performance? Yes. Storing data in Unicode fields doubles the storage requirements: the data in those fields is twice the size it needs to be (at least until SQL Server 2008 R2, which introduced Unicode compression). Table scans take twice as long, and only half as much data fits in memory in the buffer cache.
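You can see the doubling directly by comparing `DATALENGTH` for the same string in each type (a quick sketch; the variable names are made up):

```sql
-- Same five-character string stored two ways
DECLARE @a VARCHAR(50)  = 'hello';
DECLARE @n NVARCHAR(50) = N'hello';

SELECT DATALENGTH(@a) AS varchar_bytes,   -- 5 bytes: one byte per character
       DATALENGTH(@n) AS nvarchar_bytes;  -- 10 bytes: two bytes per character (UCS-2)
```

The declared length (50 here, or MAX) doesn't change the bytes stored for this value; it's the N prefix on the type that does.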
Does using MAX hurt performance? Not directly, but if you use VARCHAR(MAX), NVARCHAR(MAX), or similar fields and you need to index the table, you won't be able to rebuild those indexes online in SQL Server 2005/2008/2008 R2. (Denali brings some improvements around tables with MAX fields, so some indexes can be rebuilt online.)
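A sketch of where this bites, assuming a hypothetical table (the names are made up): the clustered index includes every column at its leaf level, so once a MAX column exists, an online rebuild of that index is refused on 2005/2008/2008 R2 and you fall back to an offline rebuild, which blocks queries against the table while it runs.

```sql
-- Hypothetical table with a MAX column
CREATE TABLE dbo.Notes (
    NoteID INT IDENTITY PRIMARY KEY,  -- clustered index; its leaf level carries Body
    Body   NVARCHAR(MAX)
);

-- On SQL Server 2005/2008/2008 R2 this fails, because the index
-- contains a (MAX)-typed column:
-- ALTER INDEX ALL ON dbo.Notes REBUILD WITH (ONLINE = ON);

-- The offline rebuild works, but takes blocking locks for the duration:
ALTER INDEX ALL ON dbo.Notes REBUILD WITH (ONLINE = OFF);
```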