Possible Duplicate:
is there an advantage to varchar(500) over varchar(8000)?
I understand that a VARCHAR(200) column containing 10 characters takes the same amount of space as a VARCHAR(20) column containing the same data.
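(A quick way to verify this: DATALENGTH returns the number of bytes actually stored, independent of the declared maximum. A minimal sketch:)

declare @narrow varchar(20), @wide varchar(200)
set @narrow = '10 chars..'
set @wide = '10 chars..'
-- both return 10: varchar stores only the actual data
-- (plus a small length prefix on disk), not the declared maximum
select DATALENGTH(@narrow) as narrow_bytes, DATALENGTH(@wide) as wide_bytes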
I want to know if changing a dozen VARCHAR(200) columns of a specific table to VARCHAR(20) would make queries run faster, especially when those columns appear in:

ORDER BY clauses
WHERE clauses

PS: I am using SQL Server 2000, but will be upgrading to a later version of SQL Server soon.
Yes, the declared length of a varchar column affects query estimation: SQL Server derives its estimated row size from the declared maximum (it assumes variable-length columns are roughly half full), which in turn determines how much memory is allocated for internal operations such as sorting, and as a consequence how much CPU those operations consume. You can reproduce this with the following simple example.
1. Create two tables:
create table varLenTest1
(
a varchar(100)
)
create table varLenTest2
(
a varchar(8000)
)
2. Fill both of them with some data:
declare @i int
set @i = 20000
while (@i > 0)
begin
    -- fill both tables with 36-character GUID strings, so the only
    -- difference between them is the declared column width
    insert into varLenTest1 (a) values (cast(NEWID() as varchar(36)))
    insert into varLenTest2 (a) values (cast(NEWID() as varchar(36)))
    set @i = @i - 1
end
3. Execute the following queries with "Include Actual Execution Plan" turned on:
select a from varLenTest1 order by a OPTION (MAXDOP 1) ;
select a from varLenTest2 order by a OPTION (MAXDOP 1) ;
If you inspect the execution plans of these queries, you can see that the estimated I/O cost and the estimated CPU cost of the two sorts are very different, even though both tables contain the same kind of data.
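On SQL Server 2005 and later you can also watch the difference in the memory grant directly; a minimal sketch, assuming the two test tables above have been created and populated:

-- While one of the ORDER BY queries above is running, check its memory
-- grant from another session (this DMV exists since SQL Server 2005):
select session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb
from sys.dm_exec_query_memory_grants;
-- The varLenTest2 sort requests a far larger grant, because its row-size
-- estimate is based on the declared varchar(8000) width.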
Here's a blog post that explains, with tests and technical details, under what circumstances and why different column sizes lead to performance differences:
Advanced TSQL Tuning: Why Internals Knowledge Matters