 

SQL Server: the maximum number of rows in table [closed]

I develop software that stores a lot of data in one of its database tables (SQL Server version 8, 9 or 10). Let's say about 100,000 records are inserted into that table per day, which is about 36 million records per year. Fearing that performance would suffer, I decided to create a new table every day (a table with the current date in its name) to lower the number of records per table.

Could you please tell me whether this was a good idea? Is there a record limit for SQL Server tables? Or do you know how many records (more or less) can be stored in a table before performance degrades significantly?

asked Apr 17 '09 by Mariusz Schimke

People also ask

Is there a limit to number of rows in SQL table?

Row Size Limits. The maximum row size for a given table is determined by several factors: The internal representation of a MySQL table has a maximum row size limit of 65,535 bytes, even if the storage engine is capable of supporting larger rows.

What is the maximum number of rows in a table?

Because 1 byte is used to store the slot number, a single page can have at most 255 slots/rows. The maximum number of rows in a table or fragment is 4,278,189,825. These are all theoretical limits.

Can SQL handle 1 million records?

Millions of rows are not a problem; this is what SQL databases are designed to handle, provided you have a well-designed schema and good indexes.

What is the maximum size of table in SQL Server?

SQL Server does have a wide-table feature that allows a table to have up to 30,000 columns instead of 1024. SQL Server 2008 and up have a Sparse Columns feature that can optimize storage when you have many columns with NULL values in the rows of the table. You are still limited to 8060 bytes of data per row.
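For illustration, here is a minimal sketch of the Sparse Columns feature mentioned above (the table and column names are hypothetical). Columns marked SPARSE consume no storage for NULL values, at the cost of a little extra overhead when a value is present:

CREATE TABLE dbo.SensorReading
(
    ReadingId   INT IDENTITY(1,1) PRIMARY KEY,
    ReadingTime DATETIME2 NOT NULL,
    -- Optional measurements that are NULL in most rows benefit from SPARSE:
    Temperature DECIMAL(9,3) SPARSE NULL,
    Humidity    DECIMAL(9,3) SPARSE NULL,
    Pressure    DECIMAL(9,3) SPARSE NULL
);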


2 Answers

These are some of the Maximum Capacity Specifications for SQL Server 2008 R2 (a quick way to check how large a given table actually is follows the list):

  • Database size: 524,272 terabytes
  • Databases per instance of SQL Server: 32,767
  • Filegroups per database: 32,767
  • Files per database: 32,767
  • File size (data): 16 terabytes
  • File size (log): 2 terabytes
  • Rows per table: Limited by available storage
  • Tables per database: Limited by number of objects in a database
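Since rows per table are limited only by available storage, the practical question is usually how much space a table currently uses. A simple way to check row count and space usage is the built-in sp_spaceused procedure (dbo.bigtable is a placeholder for your own table name):

EXEC sp_spaceused N'dbo.bigtable';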
answered Sep 28 '22 by Malak Gerges

I have a three-column table with just over 6 billion rows in SQL Server 2008 R2.

We query it every day to create minute-by-minute system analysis charts for our customers. I have not noticed any database performance hits (though the fact that it grows ~1 GB every day does make managing backups a bit more involved than I would like).

Update July 2016

Row count

We made it to ~24.5 billion rows before backups became large enough for us to decide to truncate records older than two years (~700 GB stored in multiple backups, including on expensive tapes). It's worth noting that performance was not a significant motivator in this decision (i.e., it was still working great).

For anyone who finds themselves trying to delete 20 billion rows from SQL Server, I highly recommend this article. Relevant code in case the link dies (read the article for a full explanation):

ALTER DATABASE DeleteRecord SET RECOVERY SIMPLE;
GO

BEGIN TRY
    BEGIN TRANSACTION
        -- Bulk logged
        SELECT  *
        INTO    dbo.bigtable_intermediate
        FROM    dbo.bigtable
        WHERE   Id % 2 = 0;

        -- Minimally logged because it is a DDL operation
        TRUNCATE TABLE dbo.bigtable;

        -- Bulk logged because the target table is exclusively locked
        SET IDENTITY_INSERT dbo.bigtable ON;
        INSERT INTO dbo.bigtable WITH (TABLOCK) (Id, c1, c2, c3)
        SELECT Id, c1, c2, c3 FROM dbo.bigtable_intermediate ORDER BY Id;
        SET IDENTITY_INSERT dbo.bigtable OFF;
    COMMIT
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK
END CATCH

ALTER DATABASE DeleteRecord SET RECOVERY FULL;
GO
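As the inline comments note, the script leans on minimally logged operations (SELECT ... INTO, TRUNCATE TABLE, and INSERT ... WITH (TABLOCK)) while the database is temporarily in the SIMPLE recovery model, which keeps the transaction log from ballooning while billions of rows are copied. After switching back to FULL recovery, take a full (or differential) backup so that log backups can resume.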

Update November 2016

If you plan on storing this much data in a single table: don't. I highly recommend you consider table partitioning (either manually or with the built-in features if you're running Enterprise edition). This makes dropping old data as easy as truncating a table once a week/month/etc. If you don't have Enterprise (which we don't), you can simply write a script which runs once a month, drops tables older than two years, creates next month's table, and regenerates a dynamic view that unions all of the partition tables together for easy querying. Obviously "once a month" and "older than two years" should be defined by you based on what makes sense for your use case. Deleting directly from a table with tens of billions of rows of data will a) take a HUGE amount of time and b) fill up the transaction log hundreds or thousands of times over.
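Here is a rough sketch of what such a monthly maintenance script might produce; the table and view names are hypothetical, and a production version would generate these statements dynamically from the list of existing monthly tables:

-- 1. Drop the oldest monthly table (data that has aged out of retention):
IF OBJECT_ID('dbo.Readings_201407', 'U') IS NOT NULL
    DROP TABLE dbo.Readings_201407;
GO

-- 2. Create next month's table:
CREATE TABLE dbo.Readings_201608
(
    Id          BIGINT IDENTITY(1,1) PRIMARY KEY,
    ReadingTime DATETIME2 NOT NULL,
    Value       DECIMAL(18,4) NOT NULL
);
GO

-- 3. Regenerate the view that unions the monthly tables for easy querying:
IF OBJECT_ID('dbo.Readings', 'V') IS NOT NULL
    DROP VIEW dbo.Readings;
GO

CREATE VIEW dbo.Readings AS
    SELECT Id, ReadingTime, Value FROM dbo.Readings_201606
    UNION ALL
    SELECT Id, ReadingTime, Value FROM dbo.Readings_201607
    UNION ALL
    SELECT Id, ReadingTime, Value FROM dbo.Readings_201608;
GO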

answered Sep 28 '22 by Dan Bechard