Can a slow network cause a Python app to use *more* CPU?

Let's say we have a system like this:

                                                                     ______
                              { application instances ---network--- (______)
                             {  application instances ---network--- |      |
requests ---> load balancer {   application instances ---network--- | data |
                             {  application instances ---network--- | base |
                              { application instances ---network--- \______/

A request comes in, a load balancer sends it to an application server instance, and the app server instances talk to a database (elsewhere on the LAN). The application instances can either be separate processes or separate threads. Just to cover all the bases, let's say there are several identical processes, each with a pool of identical application service threads.
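For concreteness, here's a minimal sketch of that layout in Python (the sizes, queue, and handler are made up for illustration): several identical worker processes, each running a pool of identical service threads, with each request costing a blocking database round trip plus some CPU work.

    import multiprocessing
    import queue
    import threading
    import time

    THREADS_PER_PROCESS = 4
    NUM_PROCESSES = 2

    def handle_request(request):
        time.sleep(0.05)                          # stand-in: blocking DB round trip
        return sum(i * i for i in range(10_000))  # stand-in: CPU work on the reply

    def service_thread(requests):
        while True:
            try:
                request = requests.get(timeout=1)
            except queue.Empty:
                return                            # queue drained; thread exits
            handle_request(request)

    def worker_process(requests):
        threads = [threading.Thread(target=service_thread, args=(requests,))
                   for _ in range(THREADS_PER_PROCESS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    if __name__ == "__main__":
        requests = multiprocessing.Queue()
        for i in range(40):                       # stand-in for the load balancer
            requests.put(i)
        procs = [multiprocessing.Process(target=worker_process, args=(requests,))
                 for _ in range(NUM_PROCESSES)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()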

If the database is performing slowly, or the network gets bogged down, clearly the throughput of request servicing is going to get worse.

Now, in all my pre-Python experience, this would be accompanied by a corresponding drop in CPU usage by the application instances -- they'd be spending more time blocking on I/O and less time doing CPU-intensive things.

However, I'm being told that with Python, this is not the case -- under certain Python circumstances, this situation can cause Python's CPU usage to go up, perhaps all the way to 100%. Something about the Global Interpreter Lock and the multiple threads supposedly causes Python to spend all its time switching between threads, checking to see if any of them have an answer yet from the database. "Hence the rise in single-process event-driven libraries of late."

Is that correct? Do Python application service threads actually use more CPU when their I/O latency increases?

Asked by mike

2 Answers

In theory, no; in practice, it's possible. It depends on what you're doing.

There's a full hour-long video and PDF about it, but essentially it boils down to some unforeseen consequences of the GIL with CPU-bound vs. IO-bound threads on multicore machines. Basically, a thread waiting on IO needs to wake up, so Python begins "pre-empting" other threads every Python "tick" (instead of every 100 ticks). The IO thread then has trouble taking the GIL back from the CPU-bound thread, causing the cycle to repeat.

That's grossly oversimplified, but that's the gist of it. The video and slides have more information. It manifests as a larger problem on multi-core machines. It can also occur if the process receives signals from the OS (since that triggers the thread-switching code, too).
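A rough way to observe the effect yourself (a sketch in the same spirit, not the benchmark from the talk): pit a CPU-bound thread against a thread that sleeps briefly, standing in for network waits, and measure how late the sleeping thread wakes up. Note that CPython 3.2 replaced the tick-based check interval with a time-based switch interval (sys.setswitchinterval), which mitigates but does not eliminate this kind of contention, so results vary a lot by version and core count.

    import threading
    import time

    def cpu_bound(stop):
        # Pure-Python busy loop: holds the GIL as much as the interpreter allows.
        n = 0
        while not stop.is_set():
            n += 1

    def io_bound(samples):
        # Each short sleep releases the GIL (like waiting on a socket); the
        # extra time beyond the requested 1 ms includes the wait to get the
        # GIL back from the CPU-bound thread.
        for _ in range(200):
            t0 = time.perf_counter()
            time.sleep(0.001)
            samples.append(time.perf_counter() - t0 - 0.001)

    stop = threading.Event()
    samples = []
    cpu = threading.Thread(target=cpu_bound, args=(stop,))
    io = threading.Thread(target=io_bound, args=(samples,))
    cpu.start()
    io.start()
    io.join()
    stop.set()
    cpu.join()
    print(f"worst extra wake-up delay: {max(samples) * 1000:.2f} ms")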

Of course, as other posters have said, this goes away if each application instance has its own process.

Coincidentally, the slides and video also explain why Ctrl+C sometimes doesn't work in Python.

Answered by Richard Levasseur

The key is to launch the application instances in separate processes. Otherwise the multi-threading issues described above are likely to follow.
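One common way to get there (a sketch, not a prescription; handle_request is a hypothetical placeholder for the real handler): use a process pool so each worker has its own interpreter and its own GIL.

    from concurrent.futures import ProcessPoolExecutor

    def handle_request(request):
        # Placeholder: talk to the database, build a response.
        return request

    if __name__ == "__main__":
        requests = range(100)  # stand-in for incoming requests
        # Each worker is a separate process with its own interpreter and
        # GIL, so a CPU-bound worker can't starve the others.
        with ProcessPoolExecutor(max_workers=4) as pool:
            for result in pool.map(handle_request, requests):
                pass  # send the result back to the client here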

Answered by Tom Leys