I am running a load test against my Azure Web App on the P3 pricing tier. We have the following autoscale strategy:
Minimum instances 5, maximum instances 20. Increase by 1 instance if CPU Percentage (Max) goes above 85%; decrease by 1 instance if CPU Percentage (Average) falls below 50%.
Right now, it is running with 5 instances. If I go to the Application Insights 'Live Metrics Stream' pane for all available instances, it shows CPU usage around 75% (average) on all 5 instances. In fact, some of the instances are nearing 85%.
However, if I turn to the CPU usage chart at the App Service Plan level (I have only one app running under the plan), it shows only 20%.
How do we reconcile these two conflicting stats?
What is shown in 'Live Metrics Stream' is the CPU usage of the w3wp process (your app's worker process), whereas what is shown at the App Service Plan level is the total machine CPU usage. The former is not normalized to take into account the number of logical processors, so you need to divide it by the number of cores to get the normalized percentage. Even after this adjustment, the 'Live Metrics Stream' metric can be lower than the App Service Plan metric, because the former reflects only w3wp usage while the latter includes every process running on the machine.
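For illustration, here is that arithmetic applied to the numbers in the question. This is a minimal sketch, assuming a classic Premium P3 instance has 4 cores (verify the core count for your actual SKU):

```python
def normalize_process_cpu(process_cpu_percent: float, core_count: int) -> float:
    """Convert a per-process CPU reading (not normalized across cores,
    as Live Metrics Stream reports it) to a machine-level percentage."""
    return process_cpu_percent / core_count

live_metrics_cpu = 75.0  # w3wp CPU as shown in Live Metrics Stream
p3_cores = 4             # assumed core count for a classic P3 instance

normalized = normalize_process_cpu(live_metrics_cpu, p3_cores)
print(f"Normalized w3wp CPU: {normalized:.2f}%")
# ~18.75%, which lines up with the ~20% App Service Plan chart
```

Under that assumption, the two readings are not actually in conflict: 75% / 4 cores is roughly 19%, which matches the ~20% the plan-level chart reports.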