When I run this little main script based on the rfxcom Python library:
from asyncio import get_event_loop
from rfxcom.transport import AsyncioTransport

dev_name = '/dev/serial/by-id/usb-RFXCOM_RFXtrx433_A1XZI13O-if00-port0'
loop = get_event_loop()

def handler(packet):
    print(packet.data)

try:
    rfxcom = AsyncioTransport(dev_name, loop, callback=handler)
    loop.run_forever()
finally:
    loop.close()
I see that CPU usage gets very high (around 100%). I can't understand why: very few messages (about one every 5 seconds) are received by the module, and I thought that while epoll_wait was blocking the CPU would stay idle, waiting for the next event.
I launched the main script under Python's cProfile, and it shows this:
In [4]: s.sort_stats('time', 'module').print_stats(50)
Mon Jul 20 22:20:55 2015 rfxcom_profile.log
263629453 function calls (263628703 primitive calls) in 145.437 seconds
Ordered by: internal time, file name
List reduced from 857 to 50 due to restriction <50>
ncalls tottime percall cumtime percall filename:lineno(function)
13178675 37.280 0.000 141.337 0.000 /usr/local/lib/python3.4/asyncio/base_events.py:1076(_run_once)
13178675 31.114 0.000 53.230 0.000 /usr/local/lib/python3.4/selectors.py:415(select)
13178674 15.115 0.000 32.253 0.000 /usr/local/lib/python3.4/asyncio/selector_events.py:479(_process_events)
13178675 12.582 0.000 12.582 0.000 {method 'poll' of 'select.epoll' objects}
13178699 11.462 0.000 17.138 0.000 /usr/local/lib/python3.4/asyncio/base_events.py:1058(_add_callback)
13178732 6.397 0.000 11.397 0.000 /usr/local/lib/python3.4/asyncio/events.py:118(_run)
26359349 4.872 0.000 4.872 0.000 {built-in method isinstance}
1 4.029 4.029 145.365 145.365 /usr/local/lib/python3.4/asyncio/base_events.py:267(run_forever)
13178669 4.010 0.000 4.913 0.000 /home/bruno/src/DomoPyc/venv/lib/python3.4/site-packages/rfxcom-0.3.0-py3.4.egg/rfxcom/transport/asyncio.py:85(_writer)
So the first three function calls in terms of elapsed time are in python3.4/asyncio/base_events.py, python3.4/selectors.py and python3.4/asyncio/selector_events.py.
EDIT: the time command on a similar run gives:
time python -m cProfile -o rfxcom_profile.log rfxcom_profile.py
real 2m24.548s
user 2m19.892s
sys 0m4.113s
Can someone explain why?
EDIT2: As the number of function calls is very high, I ran strace on the process and found a loop on epoll_wait with a timeout value of 2 ms:
// many lines like this :
epoll_wait(4, {{EPOLLOUT, {u32=7, u64=537553536922157063}}}, 2, -1) = 1
I saw that this timeout is computed in base_events._run_once, but I can't figure out the logic, and I don't know how to raise the timeout to lower the CPU usage.
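For reference, here is a simplified paraphrase of how _run_once chooses that timeout (my own sketch based on reading Python 3.4's base_events.py, with simplified names; this is not the actual source):

def compute_poll_timeout(ready, scheduled, now):
    # Rough paraphrase of BaseEventLoop._run_once, not the real code.
    if ready:                  # callbacks already queued (call_soon, finished I/O)
        return 0               # poll must return immediately
    if scheduled:              # timer handles, sorted by deadline
        return max(0.0, scheduled[0].when - now)
    return None                # nothing pending: block until an fd event arrives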
If someone has a clue...
Thank you for your answers.
I'm answering my own question, as it may be useful to others.
After setting the environment variable PYTHONASYNCIODEBUG to 1, there were lines and lines like:
DEBUG:asyncio:poll took 0.006 ms: 1 events
In the rfxcom library there is a writer mechanism with a queue to push data to the serial device. The intention was, I think, "asyncio, tell me when I can write, and then I'll flush the write queue". So there was a line like this:
self.loop.call_soon(self.loop.add_writer, self.dev.fd, self._writer)
self.dev is the Serial class instance and self.dev.fd is the serial file descriptor.
As the doc says: "add_writer(fd, callback, *args): Start watching the file descriptor for write availability".
I thought that a serial device was always available for writing, so I made a little script:
import logging
from asyncio import get_event_loop
from serial import Serial

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

loop = get_event_loop()

def writer_cb():
    logger.info("writer cb called")

s = Serial('/dev/serial/by-id/usb-RFXCOM_RFXtrx433_A1XZI13O-if00-port0', 38400, timeout=1)
loop.add_writer(s.fd, writer_cb)
loop.run_forever()
And I saw these lines looping endlessly, bringing the CPU to 100%:
DEBUG:asyncio:poll took 0.006 ms: 1 events
INFO:root:writer cb called
So I think there is no need to register a writer callback; simply call write on the serial device instead.
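If you want to keep the queued, loop-driven design instead, a minimal sketch of an alternative (hypothetical names, not the actual rfxcom code, assuming a pyserial Serial instance) is to register the writer callback only while there is data to flush, and to unregister it with remove_writer as soon as the queue is empty, so epoll stops reporting the always-writable serial fd:

from collections import deque

class QueuedSerialWriter:
    # Hypothetical helper, not the actual rfxcom code: buffer outgoing
    # frames and watch the serial fd only while there is data to flush.
    def __init__(self, loop, dev):
        self.loop = loop          # the asyncio event loop
        self.dev = dev            # a pyserial Serial instance
        self.queue = deque()
        self.watching = False

    def write(self, data):
        self.queue.append(data)
        if not self.watching:
            # Register the callback only now that there is data to send.
            self.loop.add_writer(self.dev.fd, self._flush)
            self.watching = True

    def _flush(self):
        while self.queue:
            self.dev.write(self.queue.popleft())
        # The important part: unregister once the queue is drained.
        # Leaving the writer registered makes epoll report the
        # always-writable fd on every iteration, spinning the CPU.
        self.loop.remove_writer(self.dev.fd)
        self.watching = False

With the callback unregistered between writes, the loop blocks in epoll_wait until a real event arrives instead of waking up on every iteration.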