I recently started learning about Node.js, a JavaScript runtime built on top of V8 that is known for its non-blocking IO and impressive speed.
To my understanding, Node does not wait for IO to respond; instead it runs an event loop (similar to a game loop) that keeps track of unfinished operations and continues/completes them as soon as the IO responds. I have seen Node's performance compared to Apache HTTPD, with Node being significantly faster while using less memory.
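For example, the typical Node pattern I keep seeing looks something like this (a minimal sketch; the file name and port are just for illustration):

    const http = require('http');
    const fs = require('fs');

    const server = http.createServer((req, res) => {
      // The read is handed off to the OS; the callback runs later, on the
      // event loop, once the data is ready. Meanwhile the single thread keeps
      // accepting other connections instead of sitting blocked in a read call.
      fs.readFile('./index.html', (err, data) => {
        if (err) {
          res.writeHead(500);
          res.end('error');
          return;
        }
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(data);
      });
    });

    server.listen(8000);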
Now if you read about Apache, you learn that it uses one thread per connection, which supposedly slows it down significantly, and this is where my question comes in:
If you compare threads to what Node does internally in its event loop, you start to see the similarities: both are abstractions over an unfinished operation that is waiting for a resource to respond, both regularly check whether the operation has made progress, and both give up the CPU in between checks (at least I think a good blocking API sleeps for a few milliseconds before rechecking).
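For instance, the two styles of reading a file in Node look like this to me (a rough sketch; the file name is only illustrative):

    const fs = require('fs');

    // Thread/blocking style: the calling thread is parked until the OS has
    // the data, so another thread has to be scheduled to keep the CPU busy.
    const config = fs.readFileSync('config.json', 'utf8');
    console.log(config.length);

    // Event-loop style: the call returns immediately; the callback is queued
    // and invoked later, once the OS reports that the read has completed.
    fs.readFile('config.json', 'utf8', (err, data) => {
      if (err) throw err;
      console.log(data.length);
    });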
Now where is that striking, critical difference that makes threads so much worse?
The difference here is context switching. Swapping threads by the OS requires, at the very least:

- a switch into kernel mode,
- saving the current thread's registers and call stack,
- reading the scheduler's thread queue to pick the next runnable thread,
- restoring that thread's registers and switching to its call stack.
In the case of an event queue, moving on to the next piece of work requires only:

- returning from the current handler,
- reading the next event from the queue,
- calling that event's handler.
If the server is highly loaded, the only per-event overhead of the event queue is a handler return, a read of the queue, and a handler call. In the threaded approach, there is the additional overhead of swapping threads.
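To make that concrete, a dispatcher stripped down to its essentials could look like this (a hypothetical sketch, not Node's actual implementation, which lives in libuv):

    const queue = [];

    function dispatch() {
      while (queue.length > 0) {
        const event = queue.shift();    // read the queue
        event.handler(event.payload);   // handler call ... and handler return
      }
      // Nothing else happens per event: no registers saved, no stacks swapped,
      // no trip into the kernel's scheduler.
    }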
Also, as mentioned by PST, the threaded approach introduces the need for locking. Locking itself is cheap, but waiting for a resource to be released by some other thread requires additional context switches, because the waiting thread cannot continue. It is even possible for a thread to be swapped in to acquire a lock, only to be swapped out a few clock cycles later because it needs to lock another resource as well. Compare how much the OS does (reading the thread queue and swapping call stacks, at the very least) with how much the event dispatcher does (returning from a call and making another call).
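To illustrate why locking disappears on the event-loop side: because every JavaScript callback in Node runs to completion on a single thread, shared state such as a request counter can be touched without any mutex (a hypothetical sketch):

    const http = require('http');

    let activeRequests = 0;  // shared mutable state, touched by every request

    const server = http.createServer((req, res) => {
      activeRequests++;      // no race: only one JS callback ever runs at a time
      res.on('finish', () => { activeRequests--; });
      res.end('in flight: ' + activeRequests + '\n');
    });

    server.listen(8000);

    // A thread-per-connection server would need a lock around this counter,
    // and a thread waiting for that lock is exactly the extra context switch
    // described above.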