I have a table with 10 million records in it. Is that considered a lot of records? Should I be worried about search times? If not, it will keep growing, so what is considered a large table? How much does table size factor into search time, and what can I do to ameliorate those issues, preferably before they become issues?
"Large" is like "smart" - it's relative. 10 million rows is a good size, but whether the table is large depends on a number of factors:

- How many columns are there, and what are their data types?
- What is the actual size of the table on disk (e.g. as reported by sys.dm_db_partition_stats)?

For perspective on the hard limits: the number of tables in a database is bounded only by the number of objects allowed in a database (2,147,483,647), a standard user-defined table can have up to 1,024 columns, and the number of rows in a table is limited only by the storage capacity of the server. A very large database (originally written "very large data base"), or VLDB, is a database that contains so much data that it can require specialized architectural, management, processing, and maintenance methodologies.
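As a rough sketch of how you might check that, the following query sums row counts and used pages from sys.dm_db_partition_stats (the table name dbo.Products is a placeholder for your own table):

```sql
-- Approximate row count and on-disk size for a hypothetical table dbo.Products.
-- Restricting to index_id 0 (heap) or 1 (clustered index) avoids double-counting
-- rows that are also stored in nonclustered indexes.
SELECT
    SUM(ps.row_count)                    AS approx_rows,
    SUM(ps.used_page_count) * 8 / 1024.0 AS used_space_mb  -- pages are 8 KB each
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID(N'dbo.Products')
  AND ps.index_id IN (0, 1);
```

Note that this DMV reports counts maintained by the engine, so the figures are approximate under concurrent modifications.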
Search times are not necessarily driven by size per se, but rather by the effectiveness of your indexing strategy and the types of queries you're running. If you have things like:
WHERE description LIKE '%foo%'
Then a normal index is not going to help you whatsoever, and you should start to get worried. You might consider Full-Text Search for cases like this.
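To make the contrast concrete, here is a hedged sketch (table and column names are hypothetical, and the second query assumes a full-text index has already been created on the description column):

```sql
-- The leading wildcard prevents an index seek, so every row is scanned:
SELECT ProductID
FROM dbo.Products
WHERE description LIKE '%foo%';

-- A full-text predicate can use the full-text index instead:
SELECT ProductID
FROM dbo.Products
WHERE CONTAINS(description, N'foo');
```

The difference matters more as the table grows: the LIKE version degrades roughly linearly with row count, while the full-text query is driven by the size of the result.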
10 million rows in a table with a single INT column (e.g. a Numbers table) is nothing. 10 million rows of Products with long descriptions, XML, Geography data, images etc. is quite another.
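Two illustrative table definitions (hypothetical names and columns, chosen only to show the contrast in row width):

```sql
-- A narrow table: 10 million 4-byte INTs occupy well under 200 MB of data pages.
CREATE TABLE dbo.Numbers
(
    n INT NOT NULL PRIMARY KEY
);

-- A wide table: the same 10 million rows can occupy many gigabytes.
CREATE TABLE dbo.Products
(
    ProductID   INT IDENTITY PRIMARY KEY,
    Name        NVARCHAR(200)  NOT NULL,
    Description NVARCHAR(MAX)  NULL,
    Spec        XML            NULL,
    Location    GEOGRAPHY      NULL,
    Photo       VARBINARY(MAX) NULL
);
```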
There is a reason that the maximum capacity specifications for SQL Server do not document an upper bound for the number of rows in a table.