
Java Massive Multiplayer Game Server Scalability

I've created a massively multiplayer online game for Android called The Infinite Black: https://market.android.com/details?id=theinfiniteblack.client

In my naivety, I was expecting a moderate growth rate of about 1,000 players a month and needing to manage ~20 live TCP/IP clients at most.

The game has seen unexpectedly explosive growth, with over 40,000 new users in a week; it now averages ~300 simultaneous live connections and is growing exponentially.

The server architecture consists of two threads per connection (blocking read/write), one ServerSocket thread to spawn off new clients, and one controller thread that polls each client for new actions, applies them to the game world, and flushes data back out when done.
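Roughly, that layout looks something like this sketch (class and variable names are made up to illustrate the structure, not taken from the real server):

    // Sketch of a thread-per-connection server: one accept thread, plus a
    // blocking reader thread and a blocking writer thread for every client.
    import java.io.BufferedOutputStream;
    import java.io.DataInputStream;
    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class AcceptLoop implements Runnable {
        private final ServerSocket serverSocket;

        public AcceptLoop(int port) throws IOException {
            serverSocket = new ServerSocket(port);
        }

        public void run() {
            while (!serverSocket.isClosed()) {
                try {
                    final Socket socket = serverSocket.accept();
                    final BlockingQueue<byte[]> outbound = new LinkedBlockingQueue<byte[]>();

                    // Reader thread: blocks on the socket and queues actions for the controller.
                    new Thread(new Runnable() {
                        public void run() {
                            try {
                                DataInputStream in = new DataInputStream(socket.getInputStream());
                                byte[] buffer = new byte[512];
                                while (in.read(buffer) != -1) {
                                    // parse the packet and hand it to the controller thread
                                }
                            } catch (IOException ignored) { }
                        }
                    }, "reader-" + socket.getPort()).start();

                    // Writer thread: blocks on the outbound queue and flushes to the client.
                    new Thread(new Runnable() {
                        public void run() {
                            try {
                                BufferedOutputStream out = new BufferedOutputStream(socket.getOutputStream());
                                while (true) {
                                    out.write(outbound.take());
                                    out.flush();
                                }
                            } catch (Exception ignored) { }
                        }
                    }, "writer-" + socket.getPort()).start();
                } catch (IOException e) {
                    // log and keep accepting
                }
            }
        }
    }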

The server is built in Java, which I'm not very well versed in, particularly for a high-stress situation such as this. C# has really spoiled me when it comes to memory and thread management.

To get to the point: I've just ordered two very powerful systems to run as dedicated game servers and want to maximize resource use. A lot of the information on Java resource configuration has proven to be misleading, incorrect, or out of date.

I currently use -Xss512k as a launch argument, and understand that this dictates the stack size allotted to each thread, but I don't fully comprehend all that it entails. What tools or methods are available to tell me whether I'm overshooting the mark and can scale it down? What other command line arguments should I consider?
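For context on what -Xss governs: it sets the maximum stack reserved per thread, and actual use depends on call depth. Standard tools such as jconsole, VisualVM and jstack expose live thread counts and thread dumps from outside the process; one minimal way to watch thread count and heap from inside it, using the standard java.lang.management API, is a sketch like this (the class name is just illustrative):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    // Tiny health snapshot: live/peak thread count and heap use, suitable for
    // logging once a minute while load grows.
    public class ServerStats {
        public static String snapshot() {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            Runtime rt = Runtime.getRuntime();
            long usedHeap = rt.totalMemory() - rt.freeMemory();
            return "threads=" + threads.getThreadCount()
                    + " peakThreads=" + threads.getPeakThreadCount()
                    + " usedHeapMB=" + usedHeap / (1024 * 1024)
                    + " maxHeapMB=" + rt.maxMemory() / (1024 * 1024);
        }
    }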

The new servers have 16 GB of RAM and i7-2600K Sandy Bridge 3.4 GHz processors: what configuration options are available to take as much advantage of this as possible? My goal is 1,200 online clients at once per server (2,400 threads).

What kind of unexpected pitfalls and problems should I be concerned with?

I've read wildly conflicting stories on maximum thread counts: Will things fall apart if I'm trying to push 2,400 active threads?

Java doesn't seem like it was designed for this type of task. Should I consider migrating the server to another language?

I currently run the server in debug mode out of Eclipse while it's in development (ugh..)

This is my Eclipse .ini configuration:

    --launcher.XXMaxPermSize 256M
    -Xms256m
    -Xmx1024m

asked Sep 11 '11 by Whalesong Games


2 Answers

You haven't made it clear where your doubt comes from.

Plurk Comet: Handling 100,000+ Concurrent Connections with Netty (2009)

In 1999 I deployed a Java web server which handled 40,000 yellow pages search queries per hour (the servers had 400 MHz CPUs), and in 2004 I developed a Java application which handled 8,000 concurrent connections per server (on dual 1.2 GHz Sparc servers). There were six gateway servers and one main server to control them and centralise events.

Your profile is likely to be different, but I can say that Java was supporting high volume web servers before C# was released.

Personally I wouldn't have more than 10,000 concurrent connections per server, but this is just a rule of thumb which may no longer hold. You can have 32,000 threads in a single JVM; on Linux it doesn't go much beyond this. However, I would have multiple gateway JVMs on a single server to minimise your full GC times. (The best way to minimise full GC times is to discard less garbage, but this could require more effort.)

The new servers have 16 GB of RAM and i7-2600K Sandy Bridge 3.4 GHz processors: what configuration options are available to take as much advantage of this as possible? My goal is 1,200 online clients at once per server (2,400 threads).

I can't imagine why this would be a problem.
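As a back-of-the-envelope check: 2,400 threads at 512 KB of reserved stack each come to roughly 1.2 GB, and thread stacks live outside the Java heap, so a 16 GB machine still has the large majority of its memory left for the heap and the OS.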

What kind of unexpected pitfalls and problems should I be concerned with?

Thinking you need to tune every possible command line parameter, when it is likely you can take all of them away. If you have 4 gateway JVMs with 300 connections each, they can use all the memory and you won't even need to specify the -Xmx setting.

Java doesn't seem like it was designed for this type of task. Should I consider migrating the server to another language?

You are better off asking yourself why you believe this. You have a problem which should be simple to solve or a doubt which may or may not be unfounded.

This is my Eclipse .ini configuration:

How you configure Eclipse has no bearing on how any program run from Eclipse is set.
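For what it's worth, eclipse.ini only configures the JVM that runs the IDE itself; a program launched from Eclipse gets its own JVM, whose options come from the launch configuration's "VM arguments" box (Run Configurations → Arguments tab), for example something like:

    -Xss256k -Xms256m -Xmx4g

(The values above are only illustrative, not a recommendation.)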

BufferedOutputStream is fine for most applications and is likely to be fine for up to 1,000 connections in a JVM. However, Java 1.4 (2002) added NIO, which is more lightweight for scaling your system to 10,000 connections and beyond.

BTW: the server I developed in 2003 was based on an NIO dispatcher, but it's pretty complicated unless you use a standard library like Netty.

Since then I have used a single-thread-per-connection model with blocking NIO successfully. I believe this is simpler to manage than using a dispatcher and can have lower latency characteristics. I have a monitor thread which periodically checks that connections are not blocked on writes and closes them if they are. I don't believe you need two threads per connection, but I don't believe it will make a difference in your situation because you won't have enough connections per server for it to matter.
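A minimal sketch of that kind of watchdog, assuming blocking SocketChannels (class and method names are illustrative, not from the actual server):

    import java.io.IOException;
    import java.nio.channels.SocketChannel;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical write watchdog: connections register when a blocking write
    // starts and deregister when it completes; anything stuck too long is closed.
    public class WriteWatchdog implements Runnable {
        private final Map<SocketChannel, Long> writeStart = new ConcurrentHashMap<SocketChannel, Long>();
        private final long timeoutMillis;

        public WriteWatchdog(long timeoutMillis) {
            this.timeoutMillis = timeoutMillis;
        }

        public void beginWrite(SocketChannel channel) {
            writeStart.put(channel, System.currentTimeMillis());
        }

        public void endWrite(SocketChannel channel) {
            writeStart.remove(channel);
        }

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                long now = System.currentTimeMillis();
                for (Map.Entry<SocketChannel, Long> entry : writeStart.entrySet()) {
                    if (now - entry.getValue() > timeoutMillis) {
                        try {
                            entry.getKey().close();   // unblocks the thread stuck in write()
                        } catch (IOException ignored) { }
                        writeStart.remove(entry.getKey());
                    }
                }
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }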

As glowcoder suggests have you considered using UDP for less critical broadcast information?

answered by Peter Lawrey


In Java, each thread takes the same amount of memory for its stack as any other thread. This means your main thread, let's say it has a reserved size of 32k (which I think is the default), will have the same reserved size as your communication threads (which probably only need 1k, if you think about it!). This is why Java came up with NIO: so you don't need a thread per connection.

Let's take the 1 GB of RAM example. With 32k per thread, assuming we give half our memory to stacks and half to the heap, we end up with 512 MB available for stacks. This gives us room for 16,384 threads. It also means our thread scheduler has to juggle 16,384 threads, which greatly increases the chance that one of them will get starved. Now if one player's thread gets starved, well, sucks to be him; if main gets starved, sucks to be... everyone!

With NIO, you have... two threads: main and communication. (You could even do it without the communication thread, actually...) In practice you probably have a few more than that, since you have a game loop and such. But 10 threads are still much easier to schedule properly than 16k threads!

NIO isn't necessarily intuitive to learn, but it's well worth it.
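For flavor, the core of a selector-based server is a single loop along these lines (a bare sketch: the port is arbitrary, and a real server adds buffering, partial-read handling and write interest management):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // Bare-bones NIO selector loop: one thread accepts and reads every connection.
    public class SelectorLoop {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(4096);
            while (true) {
                selector.select();
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int read = client.read(buffer);
                        if (read == -1) {
                            key.cancel();
                            client.close();
                        } else {
                            buffer.flip();
                            // hand the bytes to the game loop here
                        }
                    }
                }
            }
        }
    }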

One thing I would consider if you're not going to use NIO is to have only one thread per connection instead of two. You don't need a second one for writing: you can have a single thread with a queue do all the writing for all clients. That will at least double your throughput for the time being.
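A rough sketch of that single-writer idea (the class and queue-entry names are made up for illustration):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical single writer: every connection's outgoing packets go onto one
    // queue, and one thread drains it instead of one writer thread per client.
    public class OutboundWriter implements Runnable {

        // A queued packet: which client's stream to write to, and the bytes to send.
        public static class Outgoing {
            final OutputStream out;
            final byte[] payload;
            public Outgoing(OutputStream out, byte[] payload) {
                this.out = out;
                this.payload = payload;
            }
        }

        private final BlockingQueue<Outgoing> queue = new LinkedBlockingQueue<Outgoing>();

        public void send(OutputStream out, byte[] payload) {
            queue.offer(new Outgoing(out, payload));
        }

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Outgoing next = queue.take();
                    next.out.write(next.payload);
                    next.out.flush();
                } catch (InterruptedException e) {
                    return;
                } catch (IOException e) {
                    // drop the packet and let the reader thread notice the dead socket
                }
            }
        }
    }

One caveat with a shared writer: a single slow client can stall everyone behind it, so in practice you would bound or time out per-client writes.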

answered by corsiKa