
Difference between varchar(5) and varchar(5000)? [duplicate]

Tags:

sql-server

Possible Duplicate:
is there an advantage to varchar(500) over varchar(8000)?

In SQL Server we can specify the field type varchar(n):

Storage size is the actual length in bytes of the data entered, not n bytes.

Often, when planning a database structure, I try to imagine the maximum possible length of the strings that will be stored in each varchar field, and then set the field's length to that estimate plus some headroom. Sometimes the initial allowance turns out to be too small and I need to increase the length.

But is it actually worth limiting the varchar field to some value at all, rather than just setting something like varchar(5000) or even varchar(max), which is guaranteed to fit? Is there any advantage, perhaps in page organization or index organization in SQL Server, to limiting that length n?
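For concreteness, here is the kind of sizing decision I mean, as a sketch (table and column names are my own illustration). Widening a varchar column later is, as far as I know, a quick metadata-only change in SQL Server:

```sql
-- Estimate + headroom approach: longest expected value plus some slack.
CREATE TABLE dbo.Customer
(
    City varchar(60) NOT NULL
);

-- If the initial allowance turns out to be too small, widen it later:
ALTER TABLE dbo.Customer ALTER COLUMN City varchar(100) NOT NULL;
```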

Update

Here is what I mean (the advantage): https://stackoverflow.com/a/5353196/355264

Yes, the query optimizer can estimate how many rows fit on a page; if you have a lot of varchar fields declared larger than necessary, SQL Server can internally estimate the wrong number of rows.

Is the above true? And are there any other advantages?

asked Jan 27 '12 by net_prog

People also ask

Is varchar Max the same as varchar 8000?

VARCHAR(MAX) is different from VARCHAR because it supports character strings up to 2 GB (2,147,483,647 bytes) in length. You should consider using VARCHAR(MAX) only when each string stored in this data type varies considerably in length, and a value might exceed 8000 bytes in size.

What is the maximum length of the CHAR and varchar data types: (A) 5000, (B) 5500, (C) 8000, (D) 10000?

In SQL Server, the maximum declared length for both char and varchar is 8,000 (answer C); varchar(max) extends this to roughly 2 GB. (The 65,535 figure sometimes quoted applies to MySQL, where a VARCHAR's effective maximum is limited by the 65,535-byte row size, shared among all columns, and by the character set used.)

What does varchar 512 mean?

This means that in a varchar column, SQL Server stores, for each row, two extra bytes recording how long the string in that row actually is, plus the data itself. So the difference between varchar(1024) and varchar(512) is where inserts start being rejected (or truncated, depending on settings): when you try to store more than 1024 or 512 bytes respectively. Note: bytes, not characters.

What does varchar 50 mean?

Varchar(50) stores a maximum of 50 characters; varchar(max) stores a maximum of 2,147,483,647 bytes. Neither reserves space: both store only the characters actually entered, plus a small length overhead. (It is char(50) that always occupies the full 50 bytes.)


2 Answers

For one thing - you CANNOT use any column that's over 900 bytes in size as an index key.

That alone (to me) is a killer criterion against using varchar(5000) (or varchar(max)) all the time, even when not needed....

Use the appropriate size.
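As a sketch of that 900-byte limit (table and index names are my own illustration; exact messages and limits vary by SQL Server version):

```sql
CREATE TABLE dbo.Demo
(
    Short varchar(500)  NULL,
    Wide  varchar(5000) NULL,
    Huge  varchar(max)  NULL
);

-- Fine: 500 bytes max fits comfortably under the 900-byte key limit.
CREATE INDEX IX_Demo_Short ON dbo.Demo (Short);

-- Succeeds, but with a warning that the key can exceed 900 bytes;
-- inserting a value longer than 900 bytes into Wide will then fail.
CREATE INDEX IX_Demo_Wide ON dbo.Demo (Wide);

-- varchar(max) cannot be an index key column at all:
-- CREATE INDEX IX_Demo_Huge ON dbo.Demo (Huge);  -- error: invalid type for key column
```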

Also: varchar adds at least 2 bytes of overhead, so for strings of 10 characters or less, I would always recommend using char(n) instead.

char(5) will use 5 bytes of storage - always. varchar(5) uses between 2 and 7 bytes of storage.
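A quick way to see this is DATALENGTH, which returns the data bytes actually stored (the 2-byte varchar length overhead is per row, on top of this); variable names here are illustrative:

```sql
DECLARE @c char(5)    = 'ab';
DECLARE @v varchar(5) = 'ab';

SELECT DATALENGTH(@c) AS char_bytes,    -- 5: char is space-padded to its full length
       DATALENGTH(@v) AS varchar_bytes; -- 2: only the characters actually stored
```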

answered Oct 19 '22 by marc_s


MS SQL Server stores varchar(8000) and shorter differently from varchar(MAX).

The size at which MAX types are moved out of a table's data pages and into its own data pages is now fixed at 8,000 bytes if [large value types out of row] is set to OFF.

  • http://www.fotia.co.uk/fotia/DY.13.VarCharMax.aspx
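The option named in that quote can be toggled per table with sp_tableoption; the table name below is my own placeholder:

```sql
-- 1 = always store varchar(max)/nvarchar(max)/varbinary(max) values out of row;
-- 0 (the default) = keep values up to 8,000 bytes in row, push larger ones off-row.
EXEC sp_tableoption 'dbo.Demo', 'large value types out of row', 1;
```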

There are several reasons why you would use varchar(n) rather than varchar(MAX):

Performance

varchar(n) is faster: http://rusanu.com/2010/03/22/performance-comparison-of-varcharmax-vs-varcharn/

Indexing

varchar(900) and smaller can be indexed, but not varchar(MAX) or varchar(901) or larger

Compatibility

MS SQL Server Compact Edition does not support varchar(MAX)

answered Oct 19 '22 by Jeow Li Huan