Our database is currently at 64 GB, and one of our apps started to fail with the following error:

System.Data.SqlClient.SqlException: Could not allocate space for object 'cnv.LoggedUnpreparedSpos'.'PK_LoggedUnpreparedSpos' in database 'travelgateway' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
I double-checked everything: all files in the single filegroup are allowed to autogrow in reasonable increments (100 MB for the data file, 10% for the log file), more than 100 GB of free space is available for the database, and tempdb is set to autogrow as well, with plenty of free disk space on its drive.
To resolve the problem, I added a second file to the filegroup and the error went away. But I feel uneasy about this whole situation. Where's the problem here, guys?
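For reference, adding a second data file to the PRIMARY filegroup (the fix applied above) looks roughly like this; the logical name, path, and sizes are hypothetical placeholders:

```sql
-- Sketch: add a second data file to the PRIMARY filegroup.
-- The database name comes from the error above; the logical name,
-- physical path, and sizes are assumed examples.
ALTER DATABASE travelgateway
ADD FILE
(
    NAME = travelgateway_data2,                 -- hypothetical logical name
    FILENAME = 'D:\Data\travelgateway_2.ndf',   -- hypothetical path
    SIZE = 1024MB,
    FILEGROWTH = 100MB                          -- matches the 100 MB increment mentioned above
)
TO FILEGROUP [PRIMARY];
```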
A similar error reads: ...'tblFilegroup' in database 'DemoDatabase' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
To resolve this error, your DBA should increase the size of your ICM database using SQL Server Management Studio, along with the space allotted for tempdb. As the error text already mentions, this can also be solved by adding additional files to the filegroup or setting autogrowth on for the existing files in the filegroup.
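Growing an existing file and enabling autogrowth can also be done in T-SQL. A minimal sketch, assuming 'DemoDatabase' from the error text and a hypothetical logical file name:

```sql
-- Sketch: grow a data file and turn on autogrowth.
-- 'DemoDatabase' is from the error text; the logical file name
-- and sizes are assumed placeholders.
ALTER DATABASE DemoDatabase
MODIFY FILE
(
    NAME = DemoDatabase_data,   -- hypothetical logical file name
    SIZE = 10240MB,             -- new total size; must exceed the current size
    FILEGROWTH = 512MB          -- autogrowth increment instead of a fixed cap
);
```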
There is a solution. You have to shift all the content of the file you want to change into another file of the same filegroup with the DBCC SHRINKFILE (example_second_dat, EMPTYFILE) command, then remove the file, back up both the database and the log (or, alternatively, set the database recovery model to SIMPLE), and re-create the file in the right filegroup.
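The sequence described above can be sketched as follows; the logical file name 'example_second_dat' comes from the answer, while the database name, path, and target filegroup are assumed:

```sql
-- 1. Move all data out of the file into the other files of its filegroup.
DBCC SHRINKFILE (example_second_dat, EMPTYFILE);

-- 2. Remove the now-empty file from the database.
ALTER DATABASE travelgateway REMOVE FILE example_second_dat;

-- 3. Re-create the file in the desired filegroup
--    (path and filegroup name are hypothetical).
ALTER DATABASE travelgateway
ADD FILE (NAME = example_second_dat,
          FILENAME = 'D:\Data\example_second.ndf')
TO FILEGROUP [SECONDARY];
```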
To remove defunct filegroups:
1. Select the Files page. In the Database files grid, select the files to delete, click Remove, and then click OK.
2. Select the Filegroups page. In the Rows grid, select the filegroup to delete, click Remove, and then click OK.
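The same cleanup can be done in T-SQL; a sketch with hypothetical file and filegroup names. Note that a file can only be removed once it is empty, and a filegroup only once all of its files are gone:

```sql
-- Remove an (empty) file, then the now-empty filegroup.
-- Both names are assumed examples.
ALTER DATABASE travelgateway REMOVE FILE old_data_file;
ALTER DATABASE travelgateway REMOVE FILEGROUP old_filegroup;
```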
OK, got it working. It turns out that the NTFS volume where the DB files were located had become heavily fragmented. I stopped SQL Server, defragmented the whole thing, and everything has been fine ever since.
Anton,
As a best practice, one shouldn't create user objects in the primary filegroup. When you have bandwidth, create a new filegroup, move the user objects there, and leave the system objects in PRIMARY.
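A sketch of that move, under the assumption that the table from the error is clustered on a single key column (here called Id, which is hypothetical); all filegroup, file, and path names are examples:

```sql
-- Create a new filegroup with one file for user data.
ALTER DATABASE travelgateway ADD FILEGROUP [UserData];
ALTER DATABASE travelgateway
ADD FILE (NAME = travelgateway_userdata1,
          FILENAME = 'D:\Data\travelgateway_userdata1.ndf',
          SIZE = 1024MB, FILEGROWTH = 100MB)
TO FILEGROUP [UserData];

-- Moving a table means rebuilding its clustered index on the new
-- filegroup; the key column (Id) is an assumed example.
CREATE UNIQUE CLUSTERED INDEX PK_LoggedUnpreparedSpos
ON cnv.LoggedUnpreparedSpos (Id)
WITH (DROP_EXISTING = ON)
ON [UserData];
```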
The following queries will help you identify the space used in each file and the top tables with the highest number of rows, as well as whether there are any heaps. It's a good starting point for investigating this issue.
-- Space used per file, by filegroup
SELECT ds.name AS filegroupname,
       df.name AS 'FileName',
       physical_name AS 'PhysicalName',
       size/128 AS 'TotalSizeinMB',
       size/128.0 - CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int)/128.0 AS 'AvailableSpaceInMB',
       CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int)/128.0 AS 'ActualSpaceUsedInMB',
       (CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int)/128.0)/(size/128)*100. AS '%SpaceUsed'
FROM sys.database_files df
LEFT OUTER JOIN sys.data_spaces ds ON df.data_space_id = ds.data_space_id;

-- Free space on each fixed drive
EXEC xp_fixeddrives;

-- Tables in the PRIMARY filegroup with the highest row counts
SELECT t.name AS TableName, i.name AS IndexName, p.rows AS Rows
FROM sys.filegroups fg (nolock)
JOIN sys.database_files df (nolock) ON fg.data_space_id = df.data_space_id
JOIN sys.indexes i (nolock) ON df.data_space_id = i.data_space_id
JOIN sys.tables t (nolock) ON i.object_id = t.object_id
JOIN sys.partitions p (nolock) ON t.object_id = p.object_id AND i.index_id = p.index_id
WHERE fg.name = 'PRIMARY' AND t.type = 'U'
ORDER BY p.rows DESC;

-- Same query restricted to heaps (index_id = 0)
SELECT t.name AS TableName, i.name AS IndexName, p.rows AS Rows
FROM sys.filegroups fg (nolock)
JOIN sys.database_files df (nolock) ON fg.data_space_id = df.data_space_id
JOIN sys.indexes i (nolock) ON df.data_space_id = i.data_space_id
JOIN sys.tables t (nolock) ON i.object_id = t.object_id
JOIN sys.partitions p (nolock) ON t.object_id = p.object_id AND i.index_id = p.index_id
WHERE fg.name = 'PRIMARY' AND t.type = 'U' AND i.index_id = 0
ORDER BY p.rows DESC;