 

Multithreading with websockets

This is more a design question. I have the following implementation

Multiple Client connections -----> Server ------> Corresponding DB conns

The client/server communication is done over websockets. The application is currently single-threaded. Evidently, this design does not scale: the load on the server is too high and responses back to the clients take too long. Back-end operations involve handling large amounts of data.

My question: is it a good idea to create a new thread for every websocket connection? This would imply 500 threads for 500 clients (the number of websockets is the same whether the server is multi-threaded or single-threaded). This would ease the load on the server and hence make life a lot easier.

or

Is there a better design for attaining scalability? One option could be to create threads based on the nature of the job and have the rest processed by the main thread. That, however, seems likely to lead back to the same problem in the future.

Any help here would be greatly appreciated.

asked Jul 08 '15 by user3616977

2 Answers

There are two common approaches to this kind of problem:

  • one thread per request
  • a fixed number of threads to manage all requests

Currently you are effectively using the second approach, but with only one thread.

You can improve it by using a pool of threads to handle your requests instead of a single one.

The right number of threads for the second approach depends on your application. If you make heavy use of the CPU but also perform a number of long I/O operations (reads or writes to disk or the network), you can increase this number.

If you have no I/O operations, the number of threads should stay close to the number of CPU cores.
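As a minimal sketch of this pooled approach in Python (the handler, pool-size multiplier, and message handling are illustrative assumptions, not part of the question's actual server):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Pool-size heuristic: CPU-bound work -> roughly the core count;
# I/O-heavy work -> a larger multiple of the core count.
CPU_COUNT = os.cpu_count() or 1
pool = ThreadPoolExecutor(max_workers=CPU_COUNT * 4)  # assumes an I/O-heavy workload

def handle_request(message):
    # Placeholder for the real work (DB query, heavy computation, ...)
    return message.upper()

def on_message(message):
    # Called from the single websocket event loop; the work is handed
    # off to the pool so the loop itself stays responsive.
    future = pool.submit(handle_request, message)
    future.add_done_callback(lambda f: print(f.result()))
```

The event loop thread only submits jobs and returns immediately; the pool bounds concurrency instead of spawning one thread per connection.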

Note: existing web servers use these two approaches for HTTP requests. As an example, Apache uses the first (one thread per request) and Node.js uses the second (it is event-driven).

In any case, use a timeout mechanism to abort very long requests before the server becomes unresponsive.
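A simple way to apply such a timeout with a thread pool is to bound how long you wait for a job's result (the task and the timeout value below are hypothetical placeholders):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

pool = ThreadPoolExecutor(max_workers=4)
REQUEST_TIMEOUT = 0.5  # seconds; tune for your workload

def slow_task():
    time.sleep(2)  # stands in for a runaway DB query or computation
    return "done"

def run_with_timeout(fn):
    future = pool.submit(fn)
    try:
        return future.result(timeout=REQUEST_TIMEOUT)
    except TimeoutError:
        future.cancel()  # only prevents the task from starting if it hasn't yet
        return None      # report the failure to the client instead of blocking
```

Note that a Python thread cannot be forcibly killed, so the timed-out task keeps running in the background; the point is that the client-facing path is unblocked.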

answered Oct 23 '22 by Davide Lorenzo MARINO


You can have a look at two very good scalable web servers, Apache and Node.js.

Apache, when operating in multi-threaded (worker) mode, will create new threads for new connections (note that requests from the same browser are served by the same thread, via keep-alive).

Node.js is vastly different: it uses an asynchronous workflow, delegating tasks through an event loop.

Consequently, Apache scales well for computationally intensive tasks, while Node.js scales well for large numbers of small, event-based requests.

You mention that you do some heavy tasks on the backend. This means you should create multiple threads. How? Create a thread queue with a MAX_THREADS limit and a MAX_THREADS_PER_CLIENT limit, serving repeated requests from a client with the same thread. Your main thread should do nothing but dispatch work to those threads.
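The queueing scheme above could be sketched like this in Python. A per-client semaphore approximates the MAX_THREADS_PER_CLIENT cap (the limits and the submitted function are illustrative assumptions; this limits concurrent jobs per client rather than pinning a client to one fixed thread):

```python
import threading
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

MAX_THREADS = 8              # global cap on worker threads
MAX_THREADS_PER_CLIENT = 2   # cap on in-flight jobs per connected client

pool = ThreadPoolExecutor(max_workers=MAX_THREADS)
# One bounded semaphore per client limits how many of its jobs run at once.
client_slots = defaultdict(lambda: threading.BoundedSemaphore(MAX_THREADS_PER_CLIENT))

def submit_for_client(client_id, fn, *args):
    sem = client_slots[client_id]
    sem.acquire()  # blocks if this client already has too many jobs in flight
    def run():
        try:
            return fn(*args)
        finally:
            sem.release()  # free the client's slot whether the job succeeds or fails
    return pool.submit(run)
```

The main thread only calls `submit_for_client`; one greedy client can never occupy more than its share of the pool.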

If you can, incorporate some good Node.js ideas as well. If a task on a thread is taking too long, kill that thread and register a callback for the task to create a new one when the job is done. You could even benchmark this and train a neural network to decide when to do it!

Have a blast!

answered Oct 23 '22 by xyz