There is a "best practice" that says you have to run
DBCC FREESESSIONCACHE
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
before doing performance analysis on a SQL query.
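For context, as far as I understand the documentation, each of these clears a different cache:

DBCC FREESESSIONCACHE;  -- flushes the distributed-query connection (session) cache
DBCC FREEPROCCACHE;     -- clears the plan cache, so every plan must be recompiled
DBCC DROPCLEANBUFFERS;  -- removes clean data pages from the buffer pool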
Yet take, for example, the last one, DBCC DROPCLEANBUFFERS. The documentation says:
Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server.
To drop clean buffers from the buffer pool, first use CHECKPOINT to produce a cold buffer cache. This forces all dirty pages for the current database to be written to disk and cleans the buffers. After you do this, you can issue DBCC DROPCLEANBUFFERS command to remove all buffers from the buffer pool.
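In script form, the documented sequence would be roughly this (on a test server only):

CHECKPOINT;             -- write all dirty pages for the current database to disk
DBCC DROPCLEANBUFFERS;  -- then drop the now-clean buffers, leaving a cold buffer cache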
I guess this means that you test your query as if it were the very first query run on the server, so the measured cost will be higher than the actual "real-life" impact of the query.
Is it really advisable to run the three commands before measuring query cost, or does it produce results that bear little relation to actual query times in a live environment?
The drop clean buffers operation removes from the cache all buffers containing data that has already been written to disk. In other words, it flushes out of memory all clean pages (including pages that were dirty before CHECKPOINT executed).
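As a rough way to observe this, here is a sketch using the sys.dm_os_buffer_descriptors DMV (requires the VIEW SERVER STATE permission) to count clean versus dirty pages for the current database before and after the two commands:

SELECT is_modified,        -- 0 = clean page, 1 = dirty page
       COUNT(*) AS page_count
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID()
GROUP BY is_modified;
-- CHECKPOINT turns dirty pages into clean ones;
-- DBCC DROPCLEANBUFFERS then evicts the clean pages from memory.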
I disagree that it is best practice, and I very rarely use it.
A query that I tune should be a popular, often-run one. This gives me the most bang for my buck. It should rarely run "cold" for either plan or data.
I'm testing the query's execution, not the disk subsystem or the Query Optimiser's compilation step.
This was asked on DBA.SE a while ago; see those questions as well.
Is it really advisable to run the three commands before measuring query cost, or does it produce results that bear little relation to actual query times in a live environment?
It depends.
If you don't run DBCC DROPCLEANBUFFERS, there is a chance you will end up with odd results unless you are very careful about how you do your performance analysis. For example, generally speaking, the second time you run a query it will be quicker, because the required pages are probably already cached in memory. Running DBCC DROPCLEANBUFFERS helps here: it gives you a consistent starting point for your testing and ensures that your query is not artificially fast just because it skips the expensive disk-access portion of the work.
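For example, a minimal cold-cache test harness might look like this (test server only; dbo.YourTable is a placeholder for the real query under test):

CHECKPOINT;             -- make all buffers clean first
DBCC DROPCLEANBUFFERS;  -- cold buffer cache: data pages must come from disk
DBCC FREEPROCCACHE;     -- cold plan cache: the plan must be recompiled
SET STATISTICS IO ON;   -- report logical and physical reads
SET STATISTICS TIME ON; -- report parse/compile and execution times
SELECT COUNT(*) FROM dbo.YourTable;  -- hypothetical query under test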
As you say, however, in a live environment this data may always be cached, so your test would not be representative of production conditions. It depends on whether you are analysing performance on the assumption that the data is frequently accessed (and so will generally be cached) or infrequently accessed (and so disk access is likely to be involved).
The short answer is that running those three statements can help you get consistent results while performance testing, but you shouldn't necessarily always run them before testing; instead, try to understand what each one does and what impact it has on your query compared with the production environment.
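Conversely, to see the warm-cache effect described above, just run the same query twice without dropping buffers (again using the hypothetical dbo.YourTable); with SET STATISTICS IO ON, the second run will typically show physical reads falling to zero while logical reads stay the same:

SET STATISTICS IO ON;
SELECT COUNT(*) FROM dbo.YourTable;  -- first run: pages read physically from disk
SELECT COUNT(*) FROM dbo.YourTable;  -- second run: pages served from the buffer cache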
As an aside, never run any of those three statements on a production server unless you know exactly what you are doing!