I'm trying to optimize some code using hotshot, which I expected to be great since it's the high-performance profiler and all. But for some reason, I'm getting wildly inaccurate results from it. On my most recent profiling run, hotshot reported the cumulative time of the top-level function I called as 7.946 seconds. But without even timing it I can tell it's taking much longer than this. Timing the run time myself simply using time.time() gives me a runtime of 42.465 seconds. I'm guessing this is because hotshot doesn't count system IO time or something and my program is batch-processing a lot of binary files?
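For reference, my setup is roughly like this (simplified; process_files stands in for my actual top-level function and the stats file name is made up):

import hotshot, hotshot.stats, time

prof = hotshot.Profile("run.prof")
start = time.time()
prof.runcall(process_files)      # the batch-processing entry point
prof.close()
print("wall-clock time: %.3f s" % (time.time() - start))   # ~42.5 s

stats = hotshot.stats.load("run.prof")
stats.sort_stats("cumulative")
stats.print_stats(20)            # reports ~7.9 s cumulative for the top-level call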
However, using some more time.time() blocks I narrowed most of the extra time (which hotshot fails to notice) down to a lower-level processing function that doesn't do any IO. hotshot reported this function's total and cumulative times as 4.414 and 6.185 seconds, respectively, but again using time.time() statements I found that its cumulative time was over 30 seconds. The cumulative time spent in one relatively simple block of code was 7.32 seconds, longer than hotshot said was spent in the entire function. The block looks like this:
format = element[1]
if isinstance(format, tuple):
    format, operator, operand = format
else:
    operator, operand = (None, None)
That's it. This block was run over 9 million times, so maybe I shouldn't be surprised that this much time was spent in it, but there is plainly no IO being done here. Why is hotshot underestimating the time this function (and the whole program) takes so badly?
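In case it helps, here is a self-contained timeit snippet that measures an equivalent block outside any profiler (the element values are invented to mimic the tuple and non-tuple cases; the real data differs):

import timeit

setup = 'elements = [("field", ("I", "+", 4)), ("field", "I")] * 500000'
stmt = """\
for element in elements:
    format = element[1]
    if isinstance(format, tuple):
        format, operator, operand = format
    else:
        operator, operand = (None, None)
"""
best = min(timeit.repeat(stmt, setup, repeat=3, number=1))
print("%.3f s for one pass over 1,000,000 elements" % best)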
And before you ask, no, I'm not using multiple threads or anything like that.
If you are using Python 2.5+, try switching to cProfile. It appears that hotshot is no longer supported.
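A minimal sketch of the switch (process_files stands in for your top-level function; the stats file name is arbitrary):

import cProfile
import pstats

# Profile the same entry point and dump the stats to a file.
cProfile.run("process_files()", "run.prof")

# Load the stats and print the 20 most expensive functions by cumulative time.
stats = pstats.Stats("run.prof")
stats.sort_stats("cumulative")
stats.print_stats(20)

cProfile uses the same stats format as the older profile module, so pstats works on its output directly, and its per-call overhead is much lower than the pure-Python profiler's.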