Simple select count(id) uses 100% of Azure SQL DTUs

This started off as this question, but it now seems more appropriate to ask it separately, since I realised it is a DTU-related question.

Basically, running:

select count(id) from mytable

is taking between 8 and 30 minutes to run (whereas the same query on a local copy of SQL Server takes about 4 seconds).

EDIT: Adding a WHERE clause does not seem to help.

Below is a screenshot of the MONITOR tab in the Azure portal when I run this query. Note that I did this after not touching the database for about a week, with Azure reporting that I had only used 1% of my DTUs.

[Screenshot: the MONITOR chart in the Azure portal for the database during the query]

A couple of extra things:

  • In this particular test, the query took 08:27 (mm:ss) to run.
  • While it was running, the above chart actually showed the DTU line at 100% for a period.
  • The database is configured with the Standard service tier at the S1 performance level.
  • The database is about 3.3 GB, and this is the largest table (the count returns approximately 2,000,000 rows).

I appreciate that it might just be my limited understanding, but could somebody clarify whether this is really the expected behaviour (i.e. a simple count taking so long to run and maxing out my DTUs)? It would be much appreciated.

asked Sep 27 '14 by chrisb

People also ask

What is DTU percentage in Azure?

The DTU% is the percentage of units your particular database is using, and it seems that this number can go over 100% of the DTU rating of the service tier (I assume up to the limit of the server). This percentage is designed to help you choose the appropriate service tier.

How many DTUs are equal to a vCore?

To choose the service objective, or compute size, for the migrated database in the vCore model, you can use a simple but approximate rule of thumb: every 100 DTUs in the Basic or Standard tiers require at least 1 vCore, and every 125 DTUs in the Premium tier require at least 1 vCore.
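By that rule of thumb, for example, a Standard S3 database (100 DTUs) would map to roughly 1 vCore, while a Premium P2 database (250 DTUs) would need about 2 vCores.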

How do I check CPU usage on an Azure SQL Database?

You can measure and analyze CPU utilization using the Azure portal, Query Store interactive tools in SSMS, and Transact-SQL queries in SSMS and Azure Data Studio. The Azure portal and Query Store show execution statistics, such as CPU metrics, for completed queries.
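As an illustration, a minimal Query Store query (a sketch; it assumes Query Store is enabled on the database) that surfaces the heaviest CPU consumers might look like this:

    -- Top 10 queries by average CPU time recorded in Query Store
    SELECT TOP 10
        qt.query_sql_text,
        rs.count_executions,
        rs.avg_cpu_time          -- microseconds
    FROM sys.query_store_query_text AS qt
    JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
    JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
    JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
    ORDER BY rs.avg_cpu_time DESC;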


1 Answer

From the query stats in your previous question we can see:

  • 300 ms CPU time
  • 8,000 physical reads

8:30 is about 500 seconds, so you are certainly not CPU bound: 300 ms of CPU over 500 seconds is almost no utilization. That works out to about 16 physical reads per second, which is far below what any physical disk can deliver. Also, the table is not fully cached, as evidenced by the presence of physical IO.
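If you want to gather these numbers yourself, one way (a sketch; run it from SSMS against the Azure database, substituting your own table) is to enable the statistics output around the query:

    -- Report CPU time, elapsed time, and logical/physical reads for the query
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(id) FROM mytable;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;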

I'd say you are throttled. S1 corresponds to

934 transactions per minute

for some definition of transaction. That's about 15 transactions per second. Maybe you are hitting a limit of one physical IO per transaction?! 15 and 16 are suspiciously similar numbers.
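You can also check this directly while the count is running. Azure SQL Database exposes resource usage through the sys.dm_db_resource_stats DMV, sampled roughly every 15 seconds (a sketch; if avg_data_io_percent is pinned at 100% while avg_cpu_percent stays low, the query is being IO-throttled by the service tier rather than being CPU bound):

    -- Recent resource usage for this database, newest samples first
    SELECT TOP 20
        end_time,
        avg_cpu_percent,
        avg_data_io_percent,
        avg_log_write_percent
    FROM sys.dm_db_resource_stats
    ORDER BY end_time DESC;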

Test this theory by scaling the database up to a higher performance level. You might find that Azure SQL Database cannot deliver the performance you want at an acceptable price.
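For instance, moving from S1 to S3 within the Standard tier can be done in the portal or with T-SQL along these lines (a sketch; run it against the master database, and note that 'mydb' is a placeholder for your database name):

    -- Scale the database from S1 up to S3 within the Standard tier
    ALTER DATABASE mydb
    MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');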

You should also find that repeatedly scanning half of the table results in a fast query, because the allotted buffer pool seems to fit most of the table (just not all of it).
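To see how much of the database is actually cached, a rough check (a sketch; it counts the 8 KB pages the buffer pool currently holds for this database) is:

    -- Approximate amount of this database currently in the buffer pool, in MB
    SELECT COUNT(*) * 8 / 1024 AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    WHERE database_id = DB_ID();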

answered Sep 24 '22 by usr