
Thread per connection vs Reactor pattern (with a thread pool)?

I want to write a simple multiplayer game as part of my C++ learning project.

So I thought, since I am at it, I would like to do it properly, as opposed to just getting-it-done.

If I understood correctly: Apache uses a thread-per-connection architecture, while nginx uses an event loop and then dispatches a worker to handle the incoming connection. I guess nginx is wiser, since it supports a higher concurrency level. Right?

I have also come across this clever analogy, but I am not sure if it could be applied to my situation. The analogy also seems to be very idealistic. I have rarely seen my computer run at 100% CPU (even with an umptillion Chrome tabs open, plus Photoshop and what-not running simultaneously).

Also, I have come across an SO post (somehow it vanished from my history) where a user asked how many threads they should use, and one of the answers was that it's perfectly acceptable to have around 700, even up to 10,000 threads. That question was about the JVM, though.

So, let's estimate a fictional user base of around 5,000 users. Which approach would be the "most concurrent" one?

  1. A reactor pattern running everything in a single thread.
  2. A reactor pattern with a thread pool (approximately how big do you suggest the thread pool should be?)
  3. Creating a thread per connection and then destroying the thread when the connection closes.

I admit option 2 sounds like the best solution to me, but I am very green in all of this, so I might be a bit naive and missing some obvious flaw. Also, it sounds like it could be fairly difficult to implement.

PS: I am considering using POCO C++ Libraries. Suggesting any alternative libraries (like boost) is fine with me. However, many say POCO's library is very clean and easy to understand. So, I would preferably use that one, so I can learn about the hows of what I'm using.

omninonsense asked Jan 14 '13 11:01

4 Answers

Reactive applications certainly scale better when they are written correctly. This means:

  • Never block in a reactive thread:
    • Any blocking will seriously degrade the performance of your server. You typically use a small number of reactive threads, so blocking can also quickly cause deadlock.
    • No mutexes, since these can block, so no shared mutable state. If you require shared state you will have to wrap it in an actor or similar, so that only one thread has access to it.
  • All work in the reactive threads should be CPU-bound:
    • All I/O has to be asynchronous, or be performed in a different thread pool with the results fed back into the reactor.
    • This means using either futures or callbacks to process replies; this style of code can quickly become unmaintainable if you are not used to it and disciplined.
  • All work in the reactive threads should be small:
    • To maintain the responsiveness of the server, all tasks in the reactor must be small (bounded in time).
    • On an 8-core machine you cannot allow 8 long tasks to arrive at the same time, because no other work will start until they are complete.
    • If a task could take a long time it must be broken up (cooperative multitasking).

Tasks in reactive applications are scheduled by the application, not the operating system; that is why they can be faster and use less memory. When you write a reactive application you are saying that you know the problem domain so well that you can organise and schedule this type of work better than the operating system can schedule threads doing the same work in a blocking fashion.

I am a big fan of reactive architectures, but they come with costs. I am not sure I would write my first C++ application as reactive; I normally try to learn one thing at a time.

If you decide to use a reactive architecture, use a good framework that will help you design and structure your code, or you will end up with spaghetti. Things to look for are:

  • What is the unit of work?
  • How easy is it to add new work? Can it only come in from an external event (e.g. a network request)?
  • How easy is it to break work up into smaller chunks?
  • How easy is it to process the results of this work?
  • How easy is it to move blocking code to another thread pool and still process the results?

I cannot recommend a C++ library for this; I now do my server development in Scala and Akka, which provide all of this with an excellent composable futures library to keep the code clean.

Best of luck learning C++, and with whichever choice you make.

iain answered Oct 30 '22 20:10


Option 2 will most efficiently occupy your hardware. Here is the classic article, ten years old but still good.

http://www.kegel.com/c10k.html

The best library combination these days for structuring an application with concurrency and asynchronous waiting is Boost Thread plus Boost ASIO. You could also try the C++11 std::thread library and std::mutex (but Boost ASIO is better than mutexes in a lot of cases: just always call back on the same thread and you don't need protected regions). Stay away from std::future, because it's broken:

http://bartoszmilewski.com/2009/03/03/broken-promises-c0x-futures/

The optimal number of threads in the thread pool is one thread per CPU core. 8 cores -> 8 threads. Plus maybe a few extra, if you think it's possible that your threadpool threads might call blocking operations sometimes.

James Brock answered Oct 30 '22 22:10


FWIW, POCO has supported option 2 (ParallelReactor) since version 1.5.1.

Alex answered Oct 30 '22 22:10


I think that option 2 is the best one. As for tuning the pool size, I think the pool should be adaptive: it should be able to spawn more threads (up to some high hard limit) and remove excess threads in times of low activity.

wilx answered Oct 30 '22 22:10