I'm running an asyncio application which needs more than one event loop to service a large number of I/O operations (upwards of a thousand simultaneously). Each event loop runs in a separate thread and loops forever, executing coroutines as they are submitted.
I'm looking for a way to identify when an existing event loop is near full capacity so I can fire up a new event loop thread on demand, rather than pre-specifying how many event loops I want to run.
Near capacity would mean the event loop is busy, say, 80%+ of the time. If an event loop is spending less than 20% of its time in a wait state, it's time to add another event loop thread.
It doesn't seem like this is easy to do per thread: Profile Python CPU Usage By Thread
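For illustration, here is roughly the kind of probe I have in mind (a rough sketch; the function name, interval, and threshold are placeholders, and scheduling lag is only a proxy for the 80% busy figure):

```python
import asyncio

async def monitor_loop_lag(interval: float = 0.25, lag_threshold: float = 0.05):
    """Sleep for `interval` and measure how late we wake up; sustained lag
    suggests the loop is saturated with ready callbacks."""
    loop = asyncio.get_running_loop()
    while True:
        start = loop.time()
        await asyncio.sleep(interval)
        lag = loop.time() - start - interval
        if lag > lag_threshold:
            # Hypothetical hook: tell whatever manages loop threads to add one
            print(f"loop lag {lag * 1000:.1f} ms -- near capacity?")
```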
"I'm looking for a way to identify when an existing event loop is near full capacity so I can fire up a new event loop thread"
I don't think this approach can work because of the GIL. The use case you seem to be describing is that of event loops stalling due to CPU overload. If that is the case, adding more threads won't help, simply because CPU-bound work, with rare exceptions, does not run in parallel across threads in Python.
If your event loops are doing too much CPU-bound work (e.g. calculations), you should move those individual units of work to separate threads using run_in_executor. If that is not enough, you can try switching to uvloop, a high-performance drop-in replacement for CPython's default asyncio event loop. You can also try running asyncio on PyPy.
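For example, a minimal sketch (the function names are made up) of offloading a calculation to the default thread pool so the loop stays responsive; because of the GIL this helps responsiveness rather than raw CPU throughput:

```python
import asyncio

def crunch(data):
    # CPU-heavy work that would otherwise block the event loop
    return sum(x * x for x in data)

async def handle_request(data):
    loop = asyncio.get_running_loop()
    # None selects the loop's default ThreadPoolExecutor
    return await loop.run_in_executor(None, crunch, data)
```

uvloop itself is a drop-in change: install it and call `asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())` before creating the loops.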
If none of those options work, the next thing to try is some variant of multiprocessing. (Or a more low-level/performance-oriented language.)
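A sketch of the multiprocessing variant, again via run_in_executor but with a process pool, which does run Python-level CPU work in parallel (the worker function is just a placeholder):

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(data):
    # Runs in a separate process, so the GIL is not a bottleneck
    return sum(x * x for x in data)

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, crunch, range(n)) for n in (10**6, 2 * 10**6))
        )
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
```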
If you want to utilize more of the available machine resources, it's easier to delegate this job to an outer supervisor that manages multiple Python processes.
And spawning more processes when a capacity limit is reached sounds like something a load balancer should do.
Delegating these jobs to time-proven, de-facto standard solutions seems like a better choice than writing your own in Python. I'm also skeptical of the idea of mixing application business logic with deployment-related details that may change depending on the concrete server infrastructure.
aiohttp has a nice manual covering the basic deployment process.
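For example (a minimal sketch in the spirit of that manual; the module name, route, and worker count are made up), each worker stays a plain single-loop asyncio app, and the process manager decides how many copies to run:

```python
# my_app.py -- run e.g. with:
#   gunicorn my_app:app --bind localhost:8080 --workers 4 --worker-class aiohttp.GunicornWebWorker
from aiohttp import web

async def handle(request):
    return web.Response(text="ok")

app = web.Application()
app.add_routes([web.get("/", handle)])

if __name__ == "__main__":
    web.run_app(app)  # single-process run for local development
```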