 

How many records can I store in a SQL Server table before it gets ugly?

I've been asked to do some performance tests for a new system. It is only just running with a few clients, but since they expect to grow, these are the numbers I'm working with for my test:

200 clients, 4 years of data, and the data changes every 5 minutes. So for every 5 minutes, for every client, there is 1 record. That means 365 * 24 * 12 ≈ 105,000 records per client per year, which works out to roughly 80 million records for my test. The table has one FK to another table, one PK (uniqueidentifier) and one index on the clientID.
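
To make the layout concrete, here is a minimal T-SQL sketch of such a table; all table and column names are hypothetical, not from the actual system:

    -- Hypothetical sketch of the layout described above; names are made up.
    CREATE TABLE dbo.Client
    (
        ClientId int NOT NULL CONSTRAINT PK_Client PRIMARY KEY
    );

    CREATE TABLE dbo.Reading
    (
        ReadingId uniqueidentifier NOT NULL
            CONSTRAINT PK_Reading PRIMARY KEY,                              -- the uniqueidentifier PK
        ClientId  int NOT NULL
            CONSTRAINT FK_Reading_Client REFERENCES dbo.Client (ClientId),  -- the FK to another table
        ReadingAt datetime2(0)   NOT NULL,                                  -- one row per client per 5 minutes
        Value     decimal(18, 4) NOT NULL
    );

    -- The extra index on the client id mentioned in the question.
    CREATE INDEX IX_Reading_ClientId ON dbo.Reading (ClientId);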

Is this something SQL Server laughs at because it doesn't scare it at all, is this getting to be too much for one quad-core, 8 GB machine, is this on the edge, or.....?

Has anybody had any experience with these kinds of numbers?

Michel asked May 07 '10

People also ask

How many records can be stored in SQL Server table?

The sum of the number of all objects in a database can't exceed 2,147,483,647.

How many columns in SQL table is too many?

MySQL has a hard limit of 4096 columns per table, but the effective maximum may be less for a given table. The exact column limit depends on several factors: the maximum row size for a table constrains the number (and possibly size) of columns, because the total length of all columns cannot exceed this size.

Does SQL Server have a limit?

SQL Server Standard Edition has an upper limit of 524 Petabytes, but it is not free. If your database reaches the limit of your SQL Server Express version, you will begin to experience errors due to the inability of the database tables to accept new data.

How many tables is too many in a SQL database?

The number of tables is limited only by the number of database objects, currently 2,147,483,647. A couple of hundred tables isn't going to make a difference to anything except the clarity of your data model.


1 Answer

The PK field should be as small as possible and not random - GUID sucks here (see the sketch after this list). The main problems are:

  • The PK is used in all foreign keys to reference the row, so a large PK means more space used = more IO.
  • A random PK means inserts happen all over the place = many page splits = inefficient index usage.
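
One way to avoid both issues - a sketch under assumed names, not necessarily what you must copy verbatim - is to cluster on a small, ever-increasing key and, if you still need a GUID for external callers, generate it with NEWSEQUENTIALID() so it is at least monotonic:

    -- Reworked sketch of the table above: narrow, ever-increasing clustered PK, so
    -- new rows are appended at the end of the clustered index instead of causing page splits.
    CREATE TABLE dbo.Reading
    (
        ReadingId bigint IDENTITY(1, 1) NOT NULL
            CONSTRAINT PK_Reading PRIMARY KEY CLUSTERED,
        PublicId  uniqueidentifier NOT NULL
            CONSTRAINT DF_Reading_PublicId DEFAULT NEWSEQUENTIALID(),  -- sequential, not random
        ClientId  int            NOT NULL,
        ReadingAt datetime2(0)   NOT NULL,
        Value     decimal(18, 4) NOT NULL
    );

    -- Nonclustered index for the typical "one client over a time range" query pattern.
    CREATE INDEX IX_Reading_Client_Time ON dbo.Reading (ClientId, ReadingAt);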

How bad is that? I've seen scenarios where you lose 80% of your speed to it.

Otherwise - no problem. I have a table in excess of 800 million rows, and things are super fast there ;) Naturally you need decent queries and decent indices, and obviously it won't be efficient running on a single 5400 RPM "green" hard disc - but given proper IO, non-stupid queries and some decent indices, SQL Server does not balk at a couple of billion rows.

So, while "it depends", the generic answer is that large tables are not a problem... ...unless you do MASS deletes. Deleting half the table will be a HUGE transaction, which is why partitioning is nice for stuff like accounting - one partition per year means I can get rid of a year's data without a DELETE statement ;)
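
A rough sketch of that partition-per-year idea (hypothetical names; SQL Server table partitioning, which historically required Enterprise Edition): removing the oldest data becomes a metadata-only SWITCH plus TRUNCATE instead of a huge DELETE.

    -- Partition the big table by year so an old year can be removed without a DELETE.
    CREATE PARTITION FUNCTION pfByYear (datetime2(0))
        AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01', '2010-01-01');

    CREATE PARTITION SCHEME psByYear
        AS PARTITION pfByYear ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Reading
    (
        ClientId  int            NOT NULL,
        ReadingAt datetime2(0)   NOT NULL,
        Value     decimal(18, 4) NOT NULL,
        CONSTRAINT PK_Reading PRIMARY KEY CLUSTERED (ClientId, ReadingAt)
    ) ON psByYear (ReadingAt);

    -- Empty staging table with an identical structure, on the same filegroup.
    CREATE TABLE dbo.Reading_Old
    (
        ClientId  int            NOT NULL,
        ReadingAt datetime2(0)   NOT NULL,
        Value     decimal(18, 4) NOT NULL,
        CONSTRAINT PK_Reading_Old PRIMARY KEY CLUSTERED (ClientId, ReadingAt)
    );

    -- Switching out partition 1 (everything before the first boundary) is a metadata
    -- operation, not a row-by-row DELETE.
    ALTER TABLE dbo.Reading SWITCH PARTITION 1 TO dbo.Reading_Old;
    TRUNCATE TABLE dbo.Reading_Old;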

TomTom answered Sep 22 '22