
Removing a Primary Key (Clustered Index) to increase Insert performance

We've been experiencing SQL timeouts and have identified the bottleneck as an audit table: every table in our system has insert, update and delete triggers which create a new audit record.

This means that the audit table is the largest and busiest table in the system. Yet data only goes in, and never comes out (under this system) so no select performance is required.
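For context, the triggers in question look something like the following minimal sketch (the table and column names are illustrative assumptions, not from the actual system):

```sql
-- Hypothetical audit trigger; dbo.Customer, dbo.Audit and their columns are assumed names.
CREATE TRIGGER trg_Customer_Audit
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows arriving via INSERT, or the "after" image of an UPDATE
    INSERT INTO dbo.Audit (TableName, Action, KeyValue, ChangedAt)
    SELECT 'Customer', 'I/U', i.CustomerId, SYSDATETIME()
    FROM inserted i;

    -- Rows removed via DELETE, or the "before" image of an UPDATE
    INSERT INTO dbo.Audit (TableName, Action, KeyValue, ChangedAt)
    SELECT 'Customer', 'D', d.CustomerId, SYSDATETIME()
    FROM deleted d;
END;
```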

Running a select top 10 returns recently inserted records rather than the 'first' records. order by works, of course, but I would expect a select top to return rows based on their order on disc, which I'd expect to be the lowest PK values.
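Worth noting: without an ORDER BY, SQL Server makes no guarantee about result order at all; depending on the plan it may use an allocation-order scan rather than an index-order scan, so TOP can return rows from either "end" of the table. A sketch (assuming the audit table's identity column is called AuditId):

```sql
-- Order is undefined without ORDER BY; any ten rows may come back.
SELECT TOP 10 * FROM dbo.Audit;

-- To reliably get the earliest rows, order explicitly.
SELECT TOP 10 * FROM dbo.Audit ORDER BY AuditId;
```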

It's been suggested that we drop the clustered index, and in fact the primary key (unique constraint) as well. As I mentioned earlier there's no need to select from this table within this system.
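If we went that route, the change itself is a one-liner (the constraint name here is hypothetical); dropping the primary key constraint also drops its clustered index, leaving the table as a heap:

```sql
-- Drops the PK constraint and its clustered index; the table becomes a heap.
ALTER TABLE dbo.Audit DROP CONSTRAINT PK_Audit;
```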

What sort of performance hit does a clustered index create on a table? What are the (non-select) ramifications of having an unindexed, unclustered, key-less table? Any other suggestions?

edit

Our auditing involves CLR functions, and I am now benchmarking with and without the PK, indexes, FKs etc. to determine the relative cost of the CLR functions and the constraints.

After investigation, the poor performance was not related to the insert statements but instead the CLR function which orchestrated the auditing. After removing the CLR and instead using a straight TSQL proc, performance improved 20-fold.
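The benchmark shape behind the numbers below is roughly the following (table and column names are assumptions; the point is to time a 10k-row update against a table carrying the audit trigger):

```sql
-- Hypothetical timing harness: update 10k rows in a table that has the audit
-- trigger attached and measure elapsed milliseconds.
DECLARE @t0 datetime2 = SYSDATETIME();

UPDATE dbo.Customer              -- illustrative audited table
SET    Name = Name               -- no-op change still fires the trigger
WHERE  CustomerId <= 10000;

SELECT DATEDIFF(MILLISECOND, @t0, SYSDATETIME()) AS elapsed_ms;
```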

During the testing I've also determined that the clustered index and identity columns make little or no difference to the insert time, at least relative to any other processing that takes place.

// updating 10k rows in a table with trigger

// using CLR function
PK (identity, clustered) - ~78000ms
No PK, no index          - ~81000ms

// using straight TSQL
PK (identity, clustered) -  2174ms
No PK, no index          -  2102ms
asked Aug 31 '11 by Kirk Broadhurst

People also ask

How can you improve the performance of an insert query?

To optimize insert speed, combine many small operations into a single large operation. Ideally, you make a single connection, send the data for many new rows at once, and delay all index updates and consistency checking until the very end.
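In T-SQL terms, "combine many small operations" can mean using a single INSERT with a table value constructor instead of many single-row statements (table and column names here are illustrative):

```sql
-- One round trip and one statement for several rows, instead of three
-- separate single-row INSERTs.
INSERT INTO dbo.Audit (TableName, Action, ChangedAt)
VALUES ('Customer', 'I', SYSDATETIME()),
       ('Customer', 'U', SYSDATETIME()),
       ('Customer', 'D', SYSDATETIME());
```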

Does primary key affect insert performance?

In MS SQL Server, it is well documented that the type of the primary key can have a dramatic effect on performance. Using narrower types and sequential values significantly improves insert performance when using SQL Server, which is why using integers instead of GUIDs for surrogate keys tends to make sense.

Will index affect insert performance?

INSERTs tend to perform fastest on a table without any indexes, because neither re-ordering nor index updating is required. On the same table, UPDATEs and DELETEs are the most expensive, because the database needs the most time to find the specific records within the table.

Can we remove clustered index from primary key?

You can delete (drop) a primary key in SQL Server by using SQL Server Management Studio or Transact-SQL. When the primary key is deleted, the corresponding index is deleted. This may be the clustered index of the table, causing the table to become a heap.


2 Answers

According to Kimberly Tripp - the Queen of Indexing - having a clustered index on a table actually helps INSERT performance:

The Clustered Index Debate Continued

  • Inserts are faster in a clustered table (but only in the "right" clustered table) than compared to a heap. The primary problem here is that lookups in the IAM/PFS to determine the insert location in a heap are slower than in a clustered table (where insert location is known, defined by the clustered key). Inserts are faster when inserted into a table where order is defined (CL) and where that order is ever-increasing.

Source: blog post called The Clustered Index Debate Continues....

answered Nov 18 '22 by marc_s


A great test script and description of this scenario is available on Tibor Karaszi's blog at SQLblog.com.

My numbers don't entirely match his - I see more difference on a batch statement than I do with per-row statements.

With the row count around one million, I fairly consistently find a single-row insert loop slightly faster against a clustered index than against a heap (the clustered insert taking approximately 97% as long as the non-indexed one).

Conversely, the batch insert (10,000 rows) is faster into a heap than into a clustered table (taking anywhere from 75%-85% of the clustered insert time).

clustered - loop          - 1689
heap      - loop          - 1713
clustered - one statement -   85
heap      - one statement -   62
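The two insert shapes compared above can be sketched as follows (table and column names are assumptions, not Tibor Karaszi's actual script):

```sql
-- Loop shape: one INSERT statement per row.
DECLARE @i int = 0;
WHILE @i < 10000
BEGIN
    INSERT INTO dbo.TestTable (Val) VALUES (@i);
    SET @i += 1;
END;

-- One-statement shape: 10000 rows in a single set-based INSERT.
INSERT INTO dbo.TestTable (Val)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_objects a CROSS JOIN sys.all_objects b;
```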

He describes what's happening on each insert:

Heap: SQL Server needs to find where the row should go. For this it uses one or more IAM pages for the heap, and it cross-references these to one or more PFS pages for the database file(s). IMO, there should be potential for a noticeable overhead here. And even more, with many users hammering the same table I can imagine blocking (waits) against the PFS and possibly also IAM pages.

Clustered table: Now, this is dead simple. SQL Server navigates the clustered index tree and finds where the row should go. Since this is an ever-increasing index key, each row will go to the end of the table (linked list).

answered Nov 18 '22 by Kirk Broadhurst