I have designed databases several times for my company. To increase database performance, I have only ever looked at normalisation and indexing.
If you were asked to improve the performance of a database with approximately 250 tables, some holding millions of records, what different things would you look for?
Thanks in advance.
My role at MySpace was "Performance Enhancement DBA/Developer". I would say that normalization and indexes are a requirement in high-performance databases, but you must really analyze your table structures and indexes to truly unlock the power of database design.
Here are a few suggestions I would have for you:
Get to know the DB engine. A thorough knowledge of the underlying I/O structure goes a very long way in designing a proper index or table. Using PerfMon and Profiler, alongside your knowledge of what read/write I/Os are, you can put some very specific numbers behind your theory of what makes a well-formed table / index solution.
Understand the difference between Clustered and Non-Clustered indexes and when to use which.
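For example, here's a minimal sketch (the dbo.Orders table and its columns are hypothetical): the clustered index defines the physical order of the rows, so there can be only one per table, while non-clustered indexes are separate structures that point back at the clustered key.

-- Clustered index: rows are physically ordered by this key,
-- so put it on the key you range-scan the most.
CREATE CLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate);

-- Non-clustered index: a separate structure pointing back at the
-- clustered key; good for selective lookups on other columns.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderTotal); -- covering the query avoids a key lookup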
Use the sys.dm_os_waiting_tasks and sys.dm_os_wait_stats DMVs. They will tell you where to focus your effort in reducing wait time.
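A couple of starter queries against those DMVs (these run as-is on SQL Server 2005 and later):

-- Cumulative waits since the stats were last cleared: what the server
-- has been waiting on the most (I/O, locks, CPU scheduling, ...).
SELECT TOP (10)
    wait_type,
    wait_time_ms,
    waiting_tasks_count,
    wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- What is waiting right now, and on which resource.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE session_id > 50; -- filters out most system sessions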
Use SET STATISTICS IO ON and SET STATISTICS TIME ON, and evaluate your execution plans to see whether a change to a query reduces or increases the number of page reads or its duration.
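A typical test harness looks like this; the query in the middle is just a hypothetical stand-in for whatever you're tuning:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- The query under test goes here; compare logical reads and
-- CPU / elapsed time in the Messages tab before and after a change.
SELECT CustomerId, OrderTotal
FROM dbo.Orders
WHERE OrderDate >= '20240101';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;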
DBCC SHOWCONTIG will tell you if your tables are heavily fragmented. This is often neglected by developers and Jr. DBAs, but from a performance point of view it can have a very BIG effect on the number of page reads you do. If a table has 20% extent page density, you're reading about five times the data you otherwise would be if the table and its indexes were defragmented.
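You can check it like this; note that DBCC SHOWCONTIG is deprecated on SQL Server 2005+, where sys.dm_db_index_physical_stats gives the same picture (dbo.Orders is again a hypothetical table):

-- The classic check:
DBCC SHOWCONTIG ('dbo.Orders') WITH ALL_INDEXES;

-- The SQL Server 2005+ replacement; avg_page_space_used_in_percent
-- is the page density discussed above.
SELECT
    OBJECT_NAME(ips.object_id) AS table_name,
    i.name AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id;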
Evaluate dirty reads (NOLOCK, READ UNCOMMITTED). If you can live without to-the-millisecond accuracy on reads, save the locks!
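Both forms look like this; the trade-off is that you can read uncommitted rows that later get rolled back, so keep it away from anything that must be exact (table and columns are hypothetical):

-- Per-query hint:
SELECT OrderId, OrderTotal
FROM dbo.Orders WITH (NOLOCK)
WHERE OrderDate >= '20240101';

-- Or for everything on the connection:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;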
Consider taking out unnecessary Foreign Keys. They're useful in Dev environments, not on high-performance transactional systems.
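If you go that route, here's a sketch of finding and dropping one (the constraint and table names are made up):

-- List the foreign keys on a table, then drop the one you don't need.
SELECT name
FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID('dbo.Orders');

ALTER TABLE dbo.Orders DROP CONSTRAINT FK_Orders_Customers;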
Partitioning large tables makes a big difference - but only if properly designed.
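A minimal sketch of date-based partitioning, assuming a dbo.Orders table with an OrderDate column (all names and boundary values are illustrative):

-- Monthly ranges; RANGE RIGHT means each boundary value starts a new
-- partition.
CREATE PARTITION FUNCTION pf_OrdersByMonth (datetime)
AS RANGE RIGHT FOR VALUES ('20240101', '20240201', '20240301');

-- Map every partition to a filegroup (all to PRIMARY here for brevity;
-- spreading them across filegroups is where the I/O win comes from).
CREATE PARTITION SCHEME ps_OrdersByMonth
AS PARTITION pf_OrdersByMonth ALL TO ([PRIMARY]);

-- Rebuild the clustered index on the scheme so rows land in the right
-- partition; queries filtering on OrderDate can then skip whole
-- partitions.
CREATE CLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH (DROP_EXISTING = ON)
ON ps_OrdersByMonth (OrderDate);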
Application changes - if you can schedule batch updates for asynchronous transactions, put them into an index-free heap and process them on a schedule, so that you don't constantly update the tables you query heavily.
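One way that pattern can look (a hypothetical queue heap feeding a hot dbo.Orders table; a scheduled job drains it in one set-based pass):

-- The index-free heap: inserts are cheap appends with no index
-- maintenance and no contention on the hot table.
CREATE TABLE dbo.OrderUpdateQueue
(
    OrderId   int      NOT NULL,
    NewStatus tinyint  NOT NULL,
    QueuedAt  datetime NOT NULL DEFAULT (GETDATE())
); -- deliberately no PK and no indexes

-- The scheduled job applies everything in one set-based pass:
BEGIN TRANSACTION;

UPDATE o
SET o.OrderStatus = q.NewStatus
FROM dbo.Orders AS o
JOIN dbo.OrderUpdateQueue AS q
    ON q.OrderId = o.OrderId;

DELETE FROM dbo.OrderUpdateQueue;

COMMIT TRANSACTION;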
Always, always, always use variables of the same data type as the target columns! For example, the following statement uses a bigint variable against a smallint column:
declare @i bigint
set @i = 0
select * from MyTable where Col01SmallInt >= @i
In the process of evaluating index / table pages, the query engine may opt to convert your smallint column data to the bigint data type. Consider changing your variable's type instead, or at least converting it to smallint in your search condition.
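Both fixes, using the same MyTable example from above:

-- Match the variable to the column's type so the comparison stays
-- in the column's native type:
declare @i smallint
set @i = 0
select * from MyTable where Col01SmallInt >= @i

-- Or, if the variable must stay bigint, convert it in the predicate
-- (safe only if its value fits in a smallint):
declare @j bigint
set @j = 0
select * from MyTable where Col01SmallInt >= convert(smallint, @j)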
That's all I can think of off the top of my head. If you run into a more specific problem, I would have a more specific answer for you.