First, I have to ask: which approach is best in which situations? For example, for a real-time MMORPG server, what if I create a thread per client instead of using non-blocking sockets? Or what if I use one thread that manages all the non-blocking sockets? Can you explain the advantages of each?
In blocking mode, the recv, send, connect (TCP only), and accept (TCP only) socket API calls block indefinitely until the requested action has completed. In non-blocking mode, these functions return immediately; select can then be used to block until a socket is ready.
With a blocking socket, a system call halts execution until an appropriate reply has been received. With a non-blocking socket, execution continues after the system call is made, and the reply is handled appropriately later.
A socket can be in "blocking mode" or "non-blocking mode." Functions called on a socket in blocking (or synchronous) mode do not return until they can complete their action. This is called blocking because the caller cannot do anything else (it is blocked) until the call returns.
Your question deserves a much longer discussion but here's a short stab at an answer:
With non-blocking sockets (on Windows) you have several options. Overlapped I/O will give you the best performance (thousands of sockets per process) at the expense of being the most complicated model to understand and implement correctly.
Basically it comes down to performance vs. programming complexity.
NOTE
Here's a better explanation of why using a thread-per-socket model is a bad idea:
In Windows, creating a large number of threads is highly inefficient because the scheduler cannot properly determine which threads should receive processor time and which shouldn't. That, coupled with the memory overhead of each thread, means that you will run out of memory (because of stack space) and processor cycles (because of the overhead of managing threads) at the OS level long before you run out of capacity to handle socket connections.
I will go on record as saying that for almost anything except toy programs, you should use non-blocking sockets as a matter of course.
Blocking sockets cause a serious problem: if the machine on the other end (or any part of your connection to it) fails during a blocking call, your code will end up blocked until the IP stack's timeout. In a typical case, that's around 2 minutes, which is completely unacceptable for most purposes. The only way [1] to abort that blocking call is to terminate the thread that made it -- but terminating a thread is itself almost always unacceptable, as it's essentially impossible to clean up after it and reclaim whatever resources it had allocated. Non-blocking sockets make it trivial to abort a call when/if needed, without doing anything to the thread that made the call.
It is possible to make blocking sockets work sort of well if you use a multi-process model instead. Here, you simply spawn an entirely new process for each connection. That process uses a blocking socket, and when/if something goes wrong, you just kill the entire process. The OS knows how to clean up the resources from a process, so cleanup isn't a problem. It still has other potential problems though: 1) you pretty much need a process monitor to kill processes when needed, and 2) spawning a process is usually quite a bit more expensive than just creating a socket. Nonetheless, this can be a viable option, especially if:
[1] Well, technically not the only possible way, but most of the alternatives are relatively ugly. To be more specific, I think by the time you add code to figure out that there's a problem, and then fix the problem, you've probably done more extra work than if you had just used a non-blocking socket.