 

Tips for improving performance of a database over 40 GB (SQL Server 2005) that grows by around 3 GB per month

The database of our project has crossed 40 GB this month, and on average it is growing by around 3 GB per month. All the tables are properly normalized and appropriate indexing is in place. Still, as the size grows, even basic queries like 'select count(1) from table' are taking longer to run. Can you share some more points that will help on this front? The database is SQL Server 2005. Also, if we implement partitioning, wouldn't it create overhead?

Thanks in advance.

asked Jan 29 '10 by HotTester

4 Answers

  1. make sure you have suitable/appropriate indexes
  2. make sure you have a good index maintenance strategy (e.g. rebuild/defragment indexes and keep statistics up to date to ensure they keep performing well) — see the first sketch after this list
  3. identify poorly performing queries and optimise them (they may have been written/tested against small data volumes where performance issues would not have shown up)
  4. consider partitioning your data (SQL 2005 onwards has built-in support for partitioning if you have Enterprise Edition); a partitioning sketch also follows this list. Edit: to elaborate on SQL Server partitioning, I fully recommend reading through this MSDN article on the whys and the hows. On a general note, there was also a good talk at QCon 2008 by Randy Shoup (eBay architect) on scalability, and one of its key points on scaling a system in general is to partition. It's summarised here.
  5. is your db server hardware sufficient? could it benefit from more memory? Edit: looking at your comment with your hardware info, I think you could do with (at least) throwing more RAM in it
  6. you may benefit from some denormalisation. It's difficult to be specific without knowing the exact db structure, but denormalising may improve certain queries at the expense of data duplication/disk space
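For item 2, here is a minimal sketch of what an index maintenance pass might look like on SQL Server 2005. The table dbo.Orders and the index IX_Orders_CustomerId are placeholder names, not taken from the question:

    -- Check fragmentation, then reorganize or rebuild; a common rule of thumb is
    -- to reorganize between roughly 5% and 30% fragmentation and rebuild above that.
    SELECT i.name AS index_name,
           s.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id
     AND i.index_id  = s.index_id;

    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;   -- moderate fragmentation
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;      -- heavy fragmentation

    -- Keep statistics current so the optimiser keeps choosing good plans.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;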
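For item 4, a hypothetical sketch of partitioning a table by date on SQL Server 2005 Enterprise Edition. The boundary dates, the dbo.OrdersPartitioned table and its columns are illustrative assumptions only:

    -- Partition function: monthly boundaries (illustrative dates).
    CREATE PARTITION FUNCTION pf_OrdersByMonth (datetime)
    AS RANGE RIGHT FOR VALUES ('20100101', '20100201', '20100301');

    -- Partition scheme: map every partition to PRIMARY for simplicity;
    -- in practice you would usually spread partitions over several filegroups.
    CREATE PARTITION SCHEME ps_OrdersByMonth
    AS PARTITION pf_OrdersByMonth ALL TO ([PRIMARY]);

    -- Table created on the partition scheme, partitioned by OrderDate.
    CREATE TABLE dbo.OrdersPartitioned
    (
        OrderId   int           NOT NULL,
        OrderDate datetime      NOT NULL,
        Amount    decimal(10,2) NOT NULL
    ) ON ps_OrdersByMonth (OrderDate);

Queries only benefit from partition elimination when they filter on the partitioning column, so the overhead question largely comes down to whether your common predicates include that column.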
answered by AdaTheDev


A 40 GB database is by no means considered big these days, and 3 GB of growth per month is also nothing unusual.

However, at this size you really have to be careful about small things that you might get away with in smaller databases. Since you write about issuing a "SELECT COUNT(1) ..." query, you might want to think about the need for such queries. It sounds like this is a "display the number of rows in the table" type of feature. Do you really need this kind of what you call "basic queries", or can you do without? Considering this query in particular: does the result need to be accurate, or would a good estimate do? If so, you might want to throw in a WITH (NOLOCK) hint here and there, where accuracy is not mandatory. However, use NOLOCK wisely, as it will return wrong data at an incredible speed. :-)
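As a minimal sketch of both options on SQL Server 2005 (dbo.Orders is a placeholder table name): an approximate row count read from the catalog instead of scanning the table, and the NOLOCK variant where a dirty read is acceptable:

    -- Fast, approximate row count straight from the catalog.
    SELECT SUM(p.row_count) AS approx_rows
    FROM sys.dm_db_partition_stats AS p
    WHERE p.object_id = OBJECT_ID('dbo.Orders')
      AND p.index_id IN (0, 1);              -- heap (0) or clustered index (1) only

    -- Dirty read: no blocking, but the count can be wrong while data is changing.
    SELECT COUNT(1) FROM dbo.Orders WITH (NOLOCK);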

Plenty of good suggestions have been mentioned by AdaTheDev; just let me add one point:

Nothing gives you better performance than a sound and solid schema. And, who knows, what may have been considered appropriate when you designed the schema may need to be revised now that it has been in production for some time. This is especially true for indices.
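One way to revisit the indexing after some production time is the SQL Server 2005 usage DMVs, which show which indexes are actually being touched. A sketch against the current database, with no specific table assumed (the counters reset at every service restart):

    -- Which indexes in the current database are actually used? Unused ones sort first.
    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name                   AS index_name,
           u.user_seeks, u.user_scans, u.user_lookups, u.user_updates
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS u
           ON u.database_id = DB_ID()
          AND u.object_id   = i.object_id
          AND u.index_id    = i.index_id
    WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    ORDER BY ISNULL(u.user_seeks + u.user_scans + u.user_lookups, 0);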

answered by Frank Kalis


Your machine is quite low spec; however, as you haven't even mentioned what disks you're using, that is most likely the problem. You will need very fast disks to support a 40 GB database with 4 GB of RAM; multiple striped drives would be a bare minimum.
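One way to check whether the disks really are the bottleneck is to look at per-file I/O stall times. A sketch using SQL Server 2005 DMVs (figures are cumulative since the instance started):

    -- Average read/write latency per database file.
    SELECT DB_NAME(fs.database_id) AS database_name,
           mf.physical_name,
           fs.io_stall_read_ms  / NULLIF(fs.num_of_reads,  0) AS avg_read_ms,
           fs.io_stall_write_ms / NULLIF(fs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
    JOIN sys.master_files AS mf
      ON mf.database_id = fs.database_id
     AND mf.file_id     = fs.file_id;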

answered by cjk


I would start by using Performance Monitor and SQL Server Profiler to find out what the most critical performance limits on your system are. After that, you will probably have a good idea of where to start.

Here is one place to start: Troubleshooting Performance Problems in SQL Server 2005
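Complementary to Profiler, the SQL Server 2005 DMVs can point at the most expensive cached statements. A sketch ordered by cumulative logical reads (it only covers plans still in the cache):

    -- Top 10 cached statements by cumulative logical reads.
    SELECT TOP 10
           qs.execution_count,
           qs.total_logical_reads,
           SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                     (CASE WHEN qs.statement_end_offset = -1
                           THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                      END - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;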

answered by Joakim Backman