
Choosing optimal indexes for a SQL Server table

I have a SQL Server table with the following structure:

CREATE TABLE [dbo].[Log](
 [LogID] [bigint] IDENTITY(1,1) NOT NULL,
 [A] [int] NOT NULL,
 [B] [int] NOT NULL,
 [C] [int] NOT NULL,
 [D] [int] NOT NULL,
 [E] [int] NOT NULL,
 [Flag1] [bit] NOT NULL,
 [Flag2] [bit] NOT NULL,
 [Flag3] [bit] NOT NULL,
 [Counter] [int] NOT NULL,
 [Start] [datetime] NOT NULL,
 [End] [datetime] NOT NULL)

The table is used to log activities. Columns A-E represent foreign keys, Flag1-Flag3 indicate certain log states, and columns Start and End mark beginning and end of an activity.

On average this table is written to every ~30 seconds, and each write batch performs ~50 inserts/updates.

Users can run queries from the UI and filter the data on any single column and on any combination of columns.

What would be the best way to optimize data retrieval for this table:

  1. Create one "master" index that would hold all these columns
  2. Identify some of the most used filter combinations e.g. [A,D,E], [A, Start, End] etc. and create indexes for them
  3. Something else...
Toni Frankola, asked Oct 31 '10


3 Answers

I doubt anyone here can make anything but a guess - you need to record the usage of the table and see from that usage what combinations of columns are being queried for.
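One way to record that usage after the fact is SQL Server's index-usage DMV, which shows how often each existing index is actually sought or scanned. A minimal sketch (the table and index names come from the question; everything else is standard DMV columns):

```sql
-- How often has each index on dbo.Log actually been used?
SELECT i.name       AS index_name,
       s.user_seeks,
       s.user_scans,
       s.user_lookups
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.object_id = OBJECT_ID('dbo.Log');
```

Counters reset on instance restart, so let the workload run for a representative period before drawing conclusions.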

  1. Create one "master" index that would hold all these columns

That's definitely not a good idea: if you have an index on (A, B, C, D, E) and you restrict your query by values of B and D, that index is useless. It's only useful:

  • if you query by all five columns frequently
  • if you query by leftmost-prefix combinations like (A), (A,B), (A,B,C), (A,B,C,D) frequently

In any other case, it's a waste - don't use this.
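The leftmost-prefix rule can be illustrated concretely (the index name is hypothetical):

```sql
-- One "master" composite index over all five key columns
CREATE NONCLUSTERED INDEX IX_Log_ABCDE
    ON dbo.[Log] (A, B, C, D, E);

-- CAN seek on the index: the filter covers a leftmost prefix (A, B)
SELECT * FROM dbo.[Log] WHERE A = 1 AND B = 2;

-- CANNOT seek on the index: no filter on the leading column A,
-- so at best the engine scans the whole index
SELECT * FROM dbo.[Log] WHERE B = 2 AND D = 4;
```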

  2. Identify some of the most used filter combinations e.g. [A,D,E], [A, Start, End] etc. and create indexes for them

Yes, that's really the only way that promises any success. You need to see what kind of queries actually happen, and then tweak for those.
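For the two combinations named in the question, the targeted indexes would look like this (index names and the INCLUDE column are illustrative assumptions, not part of the original answer):

```sql
-- Supports filters on A, on (A, D), and on (A, D, E)
CREATE NONCLUSTERED INDEX IX_Log_A_D_E
    ON dbo.[Log] (A, D, E);

-- Supports filters on A combined with a Start/End date range;
-- [End] is bracketed because END is a T-SQL keyword
CREATE NONCLUSTERED INDEX IX_Log_A_Start_End
    ON dbo.[Log] (A, Start, [End])
    INCLUDE (Counter);  -- hypothetical covering column to avoid key lookups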

marc_s, answered Oct 28 '22


Log tables are rarely indexed, because indexing slows down INSERT, UPDATE, and DELETE statements.

I would recommend either:

  • loading the records into a table (temporary or actual, indexed) before filtering
  • using an indexed view

Basically - if speed/performance is a big concern, index the records in another form of table so the logging isn't impacted.
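An indexed view can serve that role when the reporting queries aggregate the log. A minimal sketch under the assumption that users often count rows per (A, D); the view name is hypothetical, and SCHEMABINDING plus COUNT_BIG(*) are required by SQL Server for any indexed view with GROUP BY:

```sql
CREATE VIEW dbo.vLogSummary
WITH SCHEMABINDING
AS
SELECT A, D, COUNT_BIG(*) AS LogCount
FROM dbo.[Log]
GROUP BY A, D;
GO

-- Materializes the view; SQL Server maintains it on every insert/update
CREATE UNIQUE CLUSTERED INDEX IX_vLogSummary
    ON dbo.vLogSummary (A, D);
```

Note the trade-off: the view is maintained synchronously, so it shifts cost back onto the logging writes rather than eliminating it.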

OMG Ponies, answered Oct 28 '22


One approach is to let SQL Server tell you the optimal usage. Run a trace for a few minutes while the table is under "typical" usage, and then run the Database Engine Tuning Advisor against it.
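A lighter-weight alternative to a full trace is the missing-index DMVs, which the optimizer populates as queries run. A sketch scoped to the question's table:

```sql
-- Index suggestions the optimizer has recorded for dbo.Log
SELECT d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       gs.user_seeks,
       gs.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS gs
  ON gs.group_handle = g.index_group_handle
WHERE d.object_id = OBJECT_ID('dbo.Log');
```

These are raw suggestions, not a design: they don't consolidate overlapping indexes the way the Tuning Advisor does, so treat them as input rather than output.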

Scott Weinstein, answered Oct 28 '22