 

Why should I use non-blocking or blocking sockets? [closed]

First of all, which approach is best in which situations? Take a real-time MMORPG server as an example. What if I create a thread per client instead of using non-blocking sockets? Or what if I use one thread that handles all the non-blocking sockets? Can you explain the advantages of each?

asked May 18 '12 by deniz

People also ask

What is the difference between blocking and non-blocking sockets?

In blocking mode, the recv, send, connect (TCP only) and accept (TCP only) socket API calls will block indefinitely until the requested action has been performed. In non-blocking mode, these functions return immediately. select will block until the socket is ready.
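To make the distinction concrete, here is a minimal Python sketch (a connected socketpair stands in for a real network connection; the variable names are illustrative): a non-blocking recv fails immediately when no data is queued, while select blocks until the socket is readable.

```python
import select
import socket

# A connected pair of sockets lets us demonstrate both modes locally.
a, b = socket.socketpair()

# In non-blocking mode, recv raises immediately when no data is queued.
b.setblocking(False)
try:
    b.recv(1024)
    got_data_immediately = True
except BlockingIOError:
    got_data_immediately = False

# select blocks until the socket is ready (here: until 'a' sends something).
a.sendall(b"ping")
readable, _, _ = select.select([b], [], [], 1.0)
data = b.recv(1024) if readable else b""

a.close()
b.close()
print(got_data_immediately, data)  # False b'ping'
```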

What is blocking and non-blocking in socket programming?

In blocking mode, a socket call halts execution until the requested operation completes or a reply arrives. In non-blocking mode, the program continues executing after making the call and handles the result later, when it becomes available.

What are the blocking socket and blocking functions?

A socket can be in "blocking mode" or "nonblocking mode." The functions of sockets in blocking (or synchronous) mode do not return until they can complete their action. This is called blocking because the socket whose function was called cannot do anything — is blocked — until the call returns.


2 Answers

Your question deserves a much longer discussion but here's a short stab at an answer:

  • using blocking sockets means that only one socket can be serviced at any time by any one thread (because the call blocks while waiting for activity)
  • using blocking sockets is generally easier than using non-blocking sockets (asynchronous programming tends to be more complicated)
  • you could create one thread per socket, as you suggested, but threads carry overhead and are far less efficient than the non-blocking alternatives
  • with non-blocking sockets you can handle a much larger volume of clients: a single process can scale to hundreds of thousands of connections, though the code becomes somewhat more complicated
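As a rough illustration of the last point, here is a minimal single-threaded Python sketch (the two in-process clients and all names are illustrative, not from the answer) in which one selector services a listening socket and every accepted client, so no call ever blocks:

```python
import selectors
import socket

# One selector multiplexes the listening socket plus every accepted client,
# so a single thread can service all of them without blocking on any one.
sel = selectors.DefaultSelector()

server = socket.socket()
server.bind(("127.0.0.1", 0))        # ephemeral port for the demo
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

# Two in-process clients stand in for "many" connections.
clients = []
for i in range(2):
    c = socket.create_connection(server.getsockname())
    c.sendall(b"hello %d" % i)
    clients.append(c)

served = 0
while served < 2:
    for key, _ in sel.select(timeout=1.0):
        sock = key.fileobj
        if sock is server:
            conn, _ = server.accept()   # readiness reported => cannot block
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = sock.recv(1024)      # readiness reported => cannot block
            sock.sendall(data.upper())  # echo back, uppercased
            served += 1

echoes = [c.recv(1024) for c in clients]
for c in clients:
    c.close()
sel.close()
server.close()
print(echoes)  # [b'HELLO 0', b'HELLO 1']
```

The same loop shape scales to thousands of registered sockets, which is the efficiency argument made above.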

With non-blocking sockets (on Windows) you have a couple of options:

  • polling
  • event-based notification
  • overlapped I/O

Overlapped I/O will give you the best performance (thousands of sockets / process) at the expense of being the most complicated model to understand and implement correctly.
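Of those options, polling is the simplest to sketch. The Python fragment below (not Windows-specific; the names are illustrative) repeatedly attempts a non-blocking recv and backs off when no data is ready, which captures the polling trade-off: simplicity in exchange for wasted wake-ups.

```python
import socket
import time

# Polling sketch: repeatedly attempt a non-blocking recv and do other work
# between attempts. Simple, but burns CPU proportional to the poll rate.
a, b = socket.socketpair()
b.setblocking(False)

a.sendall(b"data arrives eventually")

attempts = 0
data = None
while data is None and attempts < 100:
    attempts += 1
    try:
        data = b.recv(1024)
    except BlockingIOError:
        time.sleep(0.01)   # back off; real code would do useful work here

a.close()
b.close()
print(data)  # b'data arrives eventually'
```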

Basically it comes down to performance vs. programming complexity.

NOTE

Here's a better explanation of why the thread-per-socket model is a bad idea:

On Windows, creating a large number of threads is highly inefficient because the scheduler cannot properly determine which threads should receive processor time and which shouldn't. Coupled with the memory overhead of each thread (mainly stack space), this means you will run out of memory and processor cycles at the OS level long before you run out of capacity to handle socket connections.
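For contrast, here is what the thread-per-client model looks like as a minimal Python sketch (server and handler names are illustrative). It works, and each handler may use plain blocking calls, but every connection pays for a full thread stack and a slot in the scheduler:

```python
import socket
import threading

# Thread-per-client sketch: each connection gets its own thread and can use
# plain blocking calls, but every thread costs a stack (often megabytes of
# address space) plus scheduling overhead.
def handle(conn):
    with conn:
        data = conn.recv(1024)   # blocking is fine: only this thread waits
        conn.sendall(data.upper())

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()

def serve(n):
    for _ in range(n):
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,)).start()

threading.Thread(target=serve, args=(2,), daemon=True).start()

replies = []
for i in range(2):
    with socket.create_connection(server.getsockname()) as c:
        c.sendall(b"msg %d" % i)
        replies.append(c.recv(1024))

server.close()
print(replies)  # [b'MSG 0', b'MSG 1']
```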

answered Oct 03 '22 by Mike Dinescu


I will go on record as saying that for almost anything except toy programs, you should use non-blocking sockets as a matter of course.

Blocking sockets cause a serious problem: if the machine on the other end (or any part of your connection to it) fails during a blocking call, your code will stay blocked until the IP stack's timeout. In a typical case, that's around two minutes, which is completely unacceptable for most purposes. The only way[1] to abort that blocking call is to terminate the thread that made it -- but terminating a thread is itself almost always unacceptable, as it's essentially impossible to clean up after it and reclaim whatever resources it had allocated. Non-blocking sockets make it trivial to abort a call when/if needed, without doing anything to the thread that made the call.
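One common middle ground, sketched below in Python under the assumption that a bounded wait is acceptable, is a per-socket timeout: the call still blocks, but only for a duration you choose, and failure surfaces as an exception the caller can handle instead of a two-minute hang or a killed thread.

```python
import socket

# A per-socket timeout turns an indefinite block into a bounded wait that
# raises socket.timeout, which the caller handles without killing any thread.
a, b = socket.socketpair()
b.settimeout(0.1)   # recv now gives up after 100 ms instead of minutes

try:
    b.recv(1024)    # nothing was sent, so this times out
    timed_out = False
except socket.timeout:
    timed_out = True

a.close()
b.close()
print(timed_out)  # True
```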

It is possible to make blocking sockets work sort of well if you use a multi-process model instead. Here, you simply spawn an entirely new process for each connection. That process uses a blocking socket, and when/if something goes wrong, you just kill the entire process. The OS knows how to clean up the resources from a process, so cleanup isn't a problem. It still has other potential problems though: 1) you pretty much need a process monitor to kill processes when needed, and 2) spawning a process is usually quite a bit more expensive than just creating a socket. Nonetheless, this can be a viable option, especially if:

  1. You're dealing with a small number of connections at a time
  2. You generally do extensive processing for each connection
  3. You're dealing only with local hosts so your connection to them is fast and dependable
  4. You're more concerned with optimizing development than execution
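A process-per-connection handler along these lines might look like the following Python sketch (POSIX-only, since it uses os.fork; the details are illustrative). The child uses blocking I/O, and because it is a separate process, killing it or letting it exit cleans up everything it owned:

```python
import os
import socket

# Process-per-connection sketch (POSIX only): fork a child for each accepted
# socket; the child uses blocking I/O and the OS reclaims all its resources
# when it exits or is killed.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()

client = socket.create_connection(server.getsockname())
conn, _ = server.accept()

pid = os.fork()
if pid == 0:
    # Child: blocking calls are safe here; a stuck child can simply be killed.
    data = conn.recv(1024)
    conn.sendall(data[::-1])
    os._exit(0)

# Parent: the child owns the connection now, so close our copy.
conn.close()
client.sendall(b"ping")
reply = client.recv(1024)
os.waitpid(pid, 0)          # reap the child

client.close()
server.close()
print(reply)  # b'gnip'
```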

[1] Well, technically not the only possible way, but most of the alternatives are relatively ugly. To be more specific, by the time you add code to detect that there's a problem and then deal with it, you've probably done more extra work than if you had just used a non-blocking socket.

answered Oct 03 '22 by Jerry Coffin