
SQL Server 2012 CPU usage spikes to 100%

I am using SQL Server 2014 on AWS; the instance is an m3.xlarge with 4 cores and 16 GB RAM. I am running the following simple query:

SELECT * FROM user_table WHERE user_id = '10'

user_table contains 1,000,000 records, and user_id is the primary key. When the above simple query is executed from my application through EJB/Hibernate, the CPU spikes to 10% for a moment and then returns to normal.

My use case is that 100 users will hit the application concurrently, so within a fraction of a second the above query executes about 100 times, and the CPU usage spikes to 100%. Once all the query executions are complete, CPU usage returns to normal at about 1%.

  • Why is this so? Do I need to increase my AWS instance type?
  • What should I do to make SQL Server handle 100 or more concurrent hits without such high CPU usage? If my query were complex, a spike would be understandable, but my query is simple and straightforward.
  • Are there any benchmark metrics available for SQL Server 2014?
  • Is there any way to support concurrent hits in SQL Server with low CPU consumption?
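To see whether the query is doing an index seek or a full scan, and how much CPU and IO it actually consumes, it can be run with statistics enabled (a quick diagnostic sketch; the table and column names are taken from the question):

```sql
-- Report logical reads and CPU time for the query in the Messages output
SET STATISTICS IO, TIME ON;

SELECT * FROM user_table WHERE user_id = '10';

SET STATISTICS IO, TIME OFF;
```

A high logical-read count for a single-row lookup suggests a table scan rather than an index seek.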

Edit 1:

  • One more piece of information: my data file size is around 32.2 GB and my log file size is around 894 MB.

  • My database has the READ_COMMITTED_SNAPSHOT option set to ON. When I tried setting READ_COMMITTED_SNAPSHOT to OFF, I saw about a 20% performance improvement, but nothing dramatic.

Jaya Ananthram asked May 09 '15


2 Answers

I would create a clustered index on the key: until you define one, everything is stored in a heap, which can cause high CPU usage during searches (even when the data is in memory).

Link to MSDN Article

Specifically

If a table is a heap and does not have any nonclustered indexes, then the entire table must be examined (a table scan) to find any row. This can be acceptable when the table is tiny, such as a list of the 12 regional offices of a company.

Caveat :

Warning: Creating or dropping clustered indexes requires rewriting the entire table. If the table has nonclustered indexes, all the nonclustered indexes must be recreated whenever the clustered index is changed. Therefore, changing from a heap to a clustered index structure or back can take a lot of time and require disk space for reordering data in tempdb.
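Before creating the index, it may be worth confirming that the table really is a heap. A minimal check, assuming the table name from the question:

```sql
-- type_desc = 'HEAP' means the table has no clustered index
SELECT i.type_desc, i.name
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.user_table');
```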

This SQL should do the trick (once you've got a backup somewhere):

CREATE CLUSTERED INDEX IDX_UserID on user_table(User_ID)

A normal (nonclustered) index should work fine as well, but you should always have a clustered index to keep the data sorted sanely, and then add indexes for any other high-usage columns.
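Note that the question says user_id is already the primary key, and in SQL Server a primary key is clustered by default unless it was explicitly declared NONCLUSTERED. If it turns out to be nonclustered here, an alternative sketch (the constraint name is hypothetical) is to recreate it as a clustered primary key rather than adding a separate index:

```sql
-- Hypothetical constraint name; the existing nonclustered PK must be dropped first
ALTER TABLE user_table DROP CONSTRAINT PK_user_table;
ALTER TABLE user_table
    ADD CONSTRAINT PK_user_table PRIMARY KEY CLUSTERED (user_id);
```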

TonyM answered Sep 24 '22


It is hard (read: impossible) to tell for sure with so little data, but to me this sounds perfect: 100% CPU means SQL Server is not limited by IO at all and uses only CPU to perform the query, so it probably finds everything it needs in memory, and it is also able to utilize all CPUs, so there is no bottleneck there either.

So as long as performance is sufficient, there is no need to worry. Of course, things might get more interesting once more queries hit the system. One thing I would expect is that data starts falling out of the database cache, so CPU load drops while IO increases and performance degrades.
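One way to watch for that is the Page Life Expectancy counter, which roughly indicates how long pages stay in the buffer cache. A sketch of the query (healthy thresholds depend heavily on the workload and memory size):

```sql
-- Low values mean pages are being evicted from the buffer pool quickly,
-- i.e. the working set no longer fits in memory and IO will rise
SELECT object_name, cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```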

Jens Schauder answered Sep 22 '22