I was learning about allocation units in SQL Server and came to know there are three types, one of which is row-overflow data.
IF EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE
TABLE_NAME='TEST_ALLOCATION_UNITS')
DROP TABLE TEST_ALLOCATION_UNITS
CREATE TABLE TEST_ALLOCATION_UNITS(
TEST_ID INT IDENTITY(1,1)
,NAME VARCHAR(4000)
,NAME1 VARCHAR(5000)
)
GO
INSERT INTO TEST_ALLOCATION_UNITS (NAME,NAME1) VALUES ('APPLE','BANANA')
GO 500
This inserted 500 records into the table.
Next I ran the following query and got the result shown below:
select s.type_desc before_alter from sys.allocation_units s
join sys.partitions p on p.partition_id=s.container_id
where p.object_id=object_id('test_allocation_units')
Then I altered the table, reducing both columns to 50 bytes each, and ran the above select statement again:
alter table test_allocation_units alter column name varchar(50)
alter table test_allocation_units alter column name1 varchar(50)
select s.type_desc from sys.allocation_units s
join sys.partitions p on p.partition_id=s.container_id
where p.object_id=object_id('test_allocation_units')
Question
As per my understanding, before altering the table the combined size of its columns was greater than 8 KB, hence the allocation units were both IN_ROW_DATA and ROW_OVERFLOW_DATA. But after the ALTER statements, i.e. after reducing each column to 50 bytes, the allocation units were still both IN_ROW_DATA and ROW_OVERFLOW_DATA.
Please explain.
Thanks in advance.
Scenario 2
I tried with an empty table and ran the following code:
IF EXISTS (
SELECT 1
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'TEST_ALLOCATION_UNITS'
)
DROP TABLE TEST_ALLOCATION_UNITS
CREATE TABLE TEST_ALLOCATION_UNITS (
TEST_ID INT IDENTITY(1, 1)
,NAME VARCHAR(4000)
,NAME1 VARCHAR(5000)
)
GO
SELECT s.type_desc before_alter,s.total_pages,s.used_pages,s.data_pages
FROM sys.allocation_units s
JOIN sys.partitions p ON p.partition_id = s.container_id
WHERE p.object_id = object_id('test_allocation_units')
ALTER TABLE test_allocation_units
ALTER COLUMN NAME VARCHAR(50)
ALTER TABLE test_allocation_units
ALTER COLUMN name1 VARCHAR(50)
SELECT s.type_desc after_alter,s.total_pages,s.used_pages,s.data_pages
FROM sys.allocation_units s
JOIN sys.partitions p ON p.partition_id = s.container_id
WHERE p.object_id = object_id('test_allocation_units')
I didn't insert any records into the table; the results of the above script are shown in the figure.
Please advise: why wasn't the allocation unit changed even after altering the column sizes?
SQL Server stores data in pages. A page is 8 KB in size, so a record (row) in SQL Server cannot be larger than 8060 bytes.
Row-overflow considerations: a row cannot reside on multiple pages, but it can overflow if the combined size of its variable-length fields exceeds the 8060-byte limit. To illustrate, a table may be created with two columns: one varchar(7000) and another varchar(2000).
A database page is an 8 KB chunk of data. When you insert data into a SQL Server database, it is saved to a series of 8 KB pages inside the data file. If multiple data files exist within a filegroup, SQL Server allocates pages across all of them using a round-robin mechanism.
To get the data "reallocated" you must run this command:
ALTER TABLE TEST_ALLOCATION_UNITS REBUILD
REBUILD tells SQL Server to reallocate the data pages of a heap table.
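For example, you can compare the page counts before and after the rebuild by reusing the catalog query from the question (a sketch; run it against the table from Scenario 1):

```sql
-- Check allocation units before the rebuild
SELECT s.type_desc, s.total_pages, s.used_pages, s.data_pages
FROM sys.allocation_units s
JOIN sys.partitions p ON p.partition_id = s.container_id
WHERE p.object_id = OBJECT_ID('TEST_ALLOCATION_UNITS');

-- Rebuild the heap so its pages are reallocated
-- to match the reduced column sizes
ALTER TABLE TEST_ALLOCATION_UNITS REBUILD;

-- Check again: the page counts now reflect the rebuilt heap
SELECT s.type_desc, s.total_pages, s.used_pages, s.data_pages
FROM sys.allocation_units s
JOIN sys.partitions p ON p.partition_id = s.container_id
WHERE p.object_id = OBJECT_ID('TEST_ALLOCATION_UNITS');
```

Without the REBUILD, ALTER COLUMN only changes the metadata; the existing pages are left as they are.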
To answer your question, "row_overflow data was not converting to in_row data after altering or updating the table": the table never had row-overflow data to start with, so there is nothing to convert when you alter the column type.
Add a few more columns to your query:
select
s.type_desc
,s.total_pages
,s.used_pages
,s.data_pages
from
sys.allocation_units s
join sys.partitions p on p.partition_id=s.container_id
where
p.object_id=object_id('test_allocation_units')
You'll see that only IN_ROW_DATA has a non-zero number of pages, while ROW_OVERFLOW_DATA has 0 pages reserved and used.
In any case, when you store the value 'banana' in a varchar(5000) column, you are storing only 6 bytes per row (plus some overhead). So changing the column type to varchar(50) doesn't change anything substantial in the data pages.
There would be a difference in the number of pages the table occupies on disk if the column had been char(5000) and was then changed to char(50). When you store 'banana' in a char(5000) column, the value is right-padded with 4994 spaces to the declared length of the column, so here the declared size has an immediate impact. Again, regardless, these spaces would be stored in-row. If you try to create a table with both a char(4000) and a char(5000) column, the creation fails, because the maximum row size in SQL Server is 8060 bytes.
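You can verify this with a throwaway table (the table name here is just an example; the exact error text varies by SQL Server version):

```sql
-- Fixed-length char columns count fully toward the 8060-byte row limit,
-- so 4 (int) + 4000 + 5000 bytes exceeds it and this CREATE TABLE fails
-- with an error about the minimum row size being above 8060 bytes.
CREATE TABLE TEST_CHAR_LIMIT (
    TEST_ID INT IDENTITY(1, 1)
    ,NAME CHAR(4000)
    ,NAME1 CHAR(5000)
);
```

With varchar columns of the same declared sizes, the CREATE TABLE succeeds with only a warning, because variable-length data can be pushed off-row when an actual row exceeds the limit.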
To see non-zero values for ROW_OVERFLOW_DATA, you need to store long strings in the varchar columns of your table. For example:
INSERT INTO TEST_ALLOCATION_UNITS (NAME,NAME1) VALUES
(REPLICATE('APPLE', 800)
,REPLICATE('BANANA', 833))
GO 500
In this case we are trying to store 4000 bytes in name and 4998 bytes in name1, which is 8998 bytes in total (plus some extra overhead). That is more than the 8060-byte maximum, so the value is pushed off-row.
I'd recommend adding and deleting a few rows with short and long strings and checking how the output of the view changes. This should help you understand the view's results.
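A minimal experiment along those lines might look like this (it assumes the original varchar(4000)/varchar(5000) table from Scenario 1, before the columns were shrunk; the row counts are arbitrary):

```sql
-- Short rows: every value fits in-row, so only IN_ROW_DATA pages grow
INSERT INTO TEST_ALLOCATION_UNITS (NAME, NAME1) VALUES ('APPLE', 'BANANA');
GO 100

-- Long rows: 4000 + 4998 bytes exceed the 8060-byte limit,
-- so data is pushed to ROW_OVERFLOW_DATA pages
INSERT INTO TEST_ALLOCATION_UNITS (NAME, NAME1)
VALUES (REPLICATE('APPLE', 800), REPLICATE('BANANA', 833));
GO 100

-- Inspect the page counts per allocation unit
SELECT s.type_desc, s.total_pages, s.used_pages, s.data_pages
FROM sys.allocation_units s
JOIN sys.partitions p ON p.partition_id = s.container_id
WHERE p.object_id = OBJECT_ID('TEST_ALLOCATION_UNITS');

-- Delete the long rows and rebuild the heap, then re-run the
-- query above: the overflow pages are released back
DELETE FROM TEST_ALLOCATION_UNITS WHERE LEN(NAME) > 50;
ALTER TABLE TEST_ALLOCATION_UNITS REBUILD;
```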