I created a trial account on Azure and deployed my database from SmarterAsp.

When I run a pivot query on SmarterAsp\MyDatabase, the results appear in 2 seconds. However, running the same query on Azure\MyDatabase takes 94 seconds.
I use SQL Server 2014 Management Studio (trial) to connect to the servers and run the query.

Is this speed difference because my account is a trial account?
Some info related to my question:

The query is:
ALTER procedure [dbo].[Pivot_Per_Day]
    @iyear int,
    @imonth int,
    @iddepartment int
as

declare @columnName Nvarchar(max) = ''
declare @sql Nvarchar(max) = ''

select @columnName += quotename(iDay) + ','
from (
    Select day(idate) as iDay
    from kpivalues
    where year(idate) = @iyear and month(idate) = @imonth
    group by idate
) x

set @columnName = left(@columnName, len(@columnName) - 1)

set @sql = '
Select * from (
    select kpiname, target, ivalues, convert(decimal(18,2), day(idate)) as iDay
    from kpi
    inner join kpivalues on kpivalues.idkpi = kpi.idkpi
    inner join kpitarget on kpitarget.idkpi = kpi.idkpi
    inner join departmentbscs on departmentbscs.idkpi = kpi.idkpi
    where iddepartment = ' + convert(nvarchar(max), @iddepartment) + '
    group by kpiname, target, ivalues, idate
) x
pivot
(
    avg(ivalues) for iDay in (' + @columnName + ')
) p'

execute sp_executesql @sql
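For reference, a call along these lines is what produced the timings below (the parameter values here are just placeholders for illustration):

    -- placeholder values; use a year/month/department that exist in the data
    EXEC dbo.Pivot_Per_Day @iyear = 2015, @imonth = 6, @iddepartment = 1;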
Running this query on 3 different servers gave me different results for the elapsed time until the pivot table appears on the screen:
Azure - Elapsed time = 100.165 sec
Smarterasp.net - Elapsed time = 2.449 sec
LocalServer - Elapsed time = 1.716 sec
Regarding my trial account on Azure: I made it with the main goal of checking whether I would get better speed than SmarterAsp when running stored procedures like the one above. For my database I chose Service Tier - Basic, Performance Level - Basic (5 DTUs), and Max Size 2 GB.
My database has 16 tables; one table has 145,284 rows, and the database size is 11 MB. It's a test database for my app.
My questions are:
Conclusions based on your inputs:
I tested my query again on P1 and the elapsed time was 0.5 seconds :)

The same updated query on SmarterASP had an elapsed time of 0.8 seconds.

Now it's clear to me what the tiers in Azure are and how important it is to have a well-written query (I even understood what an index is and its advantages/disadvantages).
Thank you all, Lucian
This is first and foremost a question of performance. You are dealing with poorly performing code on your part, and you must identify the bottleneck and address it. I'm talking about the bad 2-second performance now. Follow the guidelines at How to analyse SQL Server performance. Once you get this query to execute locally in a time acceptable for a web app (less than 5 ms), then you can ask the question of porting it to Azure SQL DB. Right now your trial account is only highlighting the existing inefficiencies.
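As a concrete first step in that analysis, you can have SQL Server report CPU time and I/O per statement. A minimal example, using the procedure from the question with placeholder parameter values:

    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    -- placeholder values; use a year/month/department that exist in your data
    EXEC dbo.Pivot_Per_Day @iyear = 2015, @imonth = 6, @iddepartment = 1;

The Messages tab will then show logical reads per table plus CPU and elapsed time, which tells you where the work is actually going.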
... @iddepartment int ... iddepartment='+convert(nvarchar(max),@iddepartment)+' ...
So what is it? Is the iddepartment column an int or an nvarchar? And why use (max)?
Here is what you should do:

- parameterize @iddepartment in the inner dynamic SQL
- get rid of the nvarchar(max) conversion; make the iddepartment and @iddepartment types match
- add covering indexes for iddepartment and all the idkpi columns

Here is how to parameterize the inner SQL:
set @sql = N'
Select * from (
    select kpiname, target, ivalues, convert(decimal(18,2), day(idate)) as iDay
    from kpi
    inner join kpivalues on kpivalues.idkpi = kpi.idkpi
    inner join kpitarget on kpitarget.idkpi = kpi.idkpi
    inner join departmentbscs on departmentbscs.idkpi = kpi.idkpi
    where iddepartment = @iddepartment
    group by kpiname, target, ivalues, idate
) x
pivot
(
    avg(ivalues) for iDay in (' + @columnName + N')
) p'

execute sp_executesql @sql, N'@iddepartment INT', @iddepartment;
The covering indexes are, by far, the most important fix. That obviously requires more info than is present here. Read Designing Indexes, including all sub-chapters.
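As a rough sketch only (the right key and included columns depend on the actual schema and data distribution, which are not shown in the question), covering indexes for this query could look something like:

    -- hypothetical covering indexes; column choices are assumptions
    -- based on the joins and the iddepartment filter in the query above
    CREATE INDEX IX_departmentbscs_iddepartment
        ON departmentbscs (iddepartment, idkpi);

    CREATE INDEX IX_kpivalues_idkpi
        ON kpivalues (idkpi, idate)
        INCLUDE (ivalues);

The goal is for every join and the iddepartment filter to be answered by an index seek instead of a full scan.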
As a more general comment: this sort of query befits columnstores more than rowstores, although I reckon the data size is, basically, tiny. Azure SQL DB supports updateable clustered columnstore indexes, so you can experiment with them in anticipation of serious data size. They do require Enterprise/Developer edition on the local box, true.
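For example, something like the following would convert the fact table to a columnstore (assuming any existing clustered index on kpivalues is dropped first; the table name is taken from the question):

    -- replaces the table's rowstore storage with an updateable
    -- clustered columnstore (SQL Server 2014+ / Azure SQL DB)
    CREATE CLUSTERED COLUMNSTORE INDEX CCI_kpivalues ON dbo.kpivalues;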
(Update: the original question has been changed to also ask how to optimise the query, which is a good question as well. The original question was why there is such a difference, and that is what this answer is about.)
The performance of individual queries is heavily affected by the performance tiers. I know the documentation implies the tiers are about load, but that is not strictly true.
I would re-run your test with an S2 database as a starting point and go from there.
Being on a trial subscription does not in itself affect performance, but with the free account you are probably using a B level, which isn't really usable for anything real, certainly not for a query that takes 2 seconds to run locally.
Even moving between, say, S1 and S2 will show a noticeable difference in the performance of an individual query. If you want to experiment, remember that you are charged a day for "any part of a day", which is probably okay at the S level, but be careful when testing at the P level.
For background: when Azure introduced the new tiers last year, they changed the hosting model for SQL. It used to be that many databases would run on a shared sqlserver.exe. In the new model, each database effectively gets its own sqlserver.exe running in a resource-constrained sandbox. That is how they control "DTU usage", but it also affects general performance.
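If you want to see that resource cap in action while the query runs, Azure SQL Database exposes per-database consumption through the sys.dm_db_resource_stats view (sampled roughly every 15 seconds over the last hour):

    -- recent resource usage relative to the tier's limits (100 = capped)
    SELECT TOP (10)
        end_time,
        avg_cpu_percent,
        avg_data_io_percent,
        avg_log_write_percent
    FROM sys.dm_db_resource_stats
    ORDER BY end_time DESC;

Values pinned near 100 while the pivot runs mean the tier, not the query, is the immediate ceiling.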