I have profiled my Python code using Python's cProfile
module and got the following results:
ncalls tottime percall cumtime percall filename:lineno(function)
13937860 96351.331 0.007 96351.331 0.007 {method 'poll' of 'select.poll' objects}
13930480 201.012 0.000 201.012 0.000 {built-in method posix.read}
13937860 180.207 0.000 97129.916 0.007 connection.py:897(wait)
13937860 118.066 0.000 96493.283 0.007 selectors.py:356(select)
6968925 86.243 0.000 97360.129 0.014 queues.py:91(get)
13937860 76.067 0.000 194.402 0.000 selectors.py:224(register)
13937860 64.667 0.000 97194.582 0.007 connection.py:413(_poll)
13930480 64.365 0.000 279.040 0.000 connection.py:374(_recv)
31163538/17167548 64.083 0.000 106.596 0.000 records.py:230(__getattribute__)
13937860 57.454 0.000 264.845 0.000 selectors.py:341(register)
...
Obviously, my program spends most of its running time in the 'poll' method of select.poll objects. However, I have no clue when and why this method is called, or what I would have to change in my program to reduce these calls.
So, what could I look for to avoid this bottleneck in my code?
I am using 64-bit Python 3.5 with numpy and sharedmem on a Linux server.
Code that executes inside a different process (for example, via a ProcessPoolExecutor) is not captured by cProfile, which only profiles the process it runs in. So the time spent in select.poll is just your main process blocking while it waits for results from the worker processes; it is idle time, not computation.
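One way around this is to run a profiler inside each worker and ship the profile text back to the parent along with the result. Here is a minimal sketch of that idea; the names `worker` and `heavy_task` are illustrative stand-ins for your actual task function, not anything from your code:

```python
import cProfile
import io
import pstats
from concurrent.futures import ProcessPoolExecutor


def heavy_task(n):
    # Stand-in for the real computation done in the child process.
    return sum(i * i for i in range(n))


def worker(n):
    # Profile only this worker's own work, inside the child process.
    profiler = cProfile.Profile()
    profiler.enable()
    result = heavy_task(n)
    profiler.disable()

    # Render the stats to a string so they can be pickled back to the parent.
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()


if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        for result, profile_text in pool.map(worker, [100_000, 200_000]):
            print(result)
            print(profile_text)
```

With this in place, the parent's profile will still be dominated by poll (waiting is all it does), but the per-worker reports show where the actual CPU time goes.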