 

Looking to quantify the performance overhead of NewRelic monitoring in a Python Django app

I am working on a large Django (v1.5.1) application that includes multiple application servers, MySQL servers, etc. Before rolling out NewRelic onto all of the servers, I want to have an idea of what kind of overhead I will incur per transaction.

If possible, I'd also like to distinguish between the overhead of the application monitoring and that of the server monitoring.

Does anyone know of generally accepted numbers for this? Perhaps a site that has done this sort of investigation, or the steps we could follow to investigate it on our own.

asked Oct 01 '13 by redfive


1 Answer

For the Python agent monitoring a Django web application, the per-request overhead is driven by how many instrumented functions are executed within that request. This is because the agent does not do full profiling; instead, only specific functions of interest are instrumented. The overhead is therefore that of a wrapper executing around each such function call, not its nested calls, unless those nested functions are themselves instrumented.
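
To make that concrete, here is a minimal sketch of the wrapper pattern such an agent relies on. This is illustrative only, not the actual New Relic agent code, and `record_metric` is a hypothetical stand-in for the agent's metric aggregation:

```python
import functools
import time

def record_metric(name, elapsed):
    # Hypothetical stand-in for the agent's metric aggregation.
    pass

def instrument(func):
    """Illustrative timing wrapper: the cost is paid only when the
    wrapped function itself is called."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            record_metric(func.__qualname__, time.perf_counter() - start)
    return wrapper

def helper():
    return 42

@instrument
def view_handler():
    helper()          # not instrumented, so no wrapper overhead here
    return "OK"

print(view_handler())  # -> OK
```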

The specific functions instrumented in Django are the middleware and the view handler, plus template rendering and the function within the template renderer that deals with each template block. Beyond Django itself, there is instrumentation on the low-level database client module functions for executing a query, plus memcache, external web calls, and so on.
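
If you want to trace additional functions beyond that default set, the Python agent provides a decorator for it. A short sketch, assuming the `newrelic` package is installed and the agent is configured for your app; the function name here is a placeholder:

```python
import newrelic.agent

@newrelic.agent.function_trace()
def expensive_helper(rows):
    # Shows up as its own segment in transaction traces; the wrapper
    # adds overhead only when this particular function is called.
    return sorted(rows)
```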

What this means is that if execution of a specific web request passes through only 100 instrumented functions, then only those executions incur extra overhead. If instead your view handler performs a large number of distinct database queries, or renders a very complicated template, the number of instrumented functions, and so the overhead for that request, will be greater. That said, a view handler doing more work would generally have a longer response time than a less complex one anyway.

In other words, the per-request overhead is not fixed: it depends on how much work is being done, or more specifically, on how many instrumented functions are invoked. It is therefore not possible to quantify things and give you a fixed per-request figure for the overhead.
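
You can, however, put a rough number on the per-wrapper cost for your own environment by timing a wrapped call against an unwrapped one. A sketch using the same illustrative wrapper shape as above (figures will vary by machine and Python version):

```python
import functools
import time
import timeit

def instrument(func):
    # Same illustrative wrapper shape as above, with the timing discarded.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            _ = time.perf_counter() - start
    return wrapper

def plain():
    return sum(range(100))

wrapped = instrument(plain)

n = 100000
base = timeit.timeit(plain, number=n) / n
with_wrapper = timeit.timeit(wrapped, number=n) / n
print("approx. wrapper cost per call: %.2f microseconds"
      % ((with_wrapper - base) * 1e6))
# Multiply by the number of instrumented calls in a typical request
# for a rough per-request estimate, e.g. 100 calls x 2 us = 0.2 ms.
```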

That all said, there will be some overhead, and the general target range being aimed at is around 5%.
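
If you want to verify that figure for your own stack before a full rollout, the simplest approach is an A/B test: run one app server with the agent enabled and one without, send the same traffic to both, and compare response times. A rough sketch; the hostnames and paths are placeholders for your own deployment:

```python
import statistics
import time

import requests  # third-party: pip install requests

def bench(url, n=500):
    """Median response time in milliseconds over n sequential requests."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        requests.get(url, timeout=10)
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# Placeholder hosts: identical app servers, one with the agent enabled.
with_agent = bench("http://app-with-agent.example.com/some/view")
without_agent = bench("http://app-without-agent.example.com/some/view")
print("median with agent: %.1f ms, without: %.1f ms, overhead: %.1f%%"
      % (with_agent, without_agent, (with_agent / without_agent - 1) * 100))
```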

What generally happens, though, is that the insight gained from having the performance metrics means most customers find some quite easy improvements almost immediately. Having made such changes, response times can quickly be brought below what they were before you started monitoring, so you end up ahead of where you were with no monitoring at all. With further digging and tuning, the improvements can be much more dramatic. Pay attention to certain aspects of the performance metrics and you can also better tune your WSGI server, perhaps utilise it more effectively, reduce the number of hosts required, and so reduce your hosting costs.

answered Sep 21 '22 by Graham Dumpleton