
Weird Tomcat outage, possibly related to maxConnections

In my company we experienced a serious problem today: our production server went down. Most people accessing our software via a browser were unable to get a connection; however, people who had already been using the software were able to continue using it. Even our hot standby server was unable to communicate with the production server, which it does using HTTP, without even going out to the broader internet. The whole time, the server was accessible via ping and ssh, and in fact was quite underloaded - it normally runs at 5% CPU load and it was even lower at this time. We do almost no disk I/O.

A few days after the problem started we have a new variation: port 443 (HTTPS) is responding but port 80 stopped responding. The server load is very low. Immediately after restarting tomcat, port 80 started responding again.

We're using Tomcat 7, with maxThreads="200" and maxConnections="10000". We serve all data out of main memory, so each HTTP request completes very quickly, but we have a large number of users doing very simple interactions (this is high school subject selection). It seems very unlikely that we would have 10,000 users all with their browser open on our page at the same time.
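For reference, a connector configured as described would look roughly like this in server.xml (a sketch: maxThreads and maxConnections are taken from the question; the port and the other attributes are illustrative, not from the question):

```xml
<!-- Sketch of the Connector as described in the question.
     maxThreads/maxConnections values come from the question text;
     the remaining attributes are illustrative placeholders. -->
<Connector port="80"
           maxThreads="200"
           maxConnections="10000"
           connectionTimeout="20000"
           redirectPort="443" />
```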

My question has several parts:

  • Is it likely that the "maxConnections" parameter is the cause of our woes?
  • Is there any reason not to set "maxConnections" to a ridiculously high value e.g. 100,000? (i.e. what's the cost of doing so?)
  • Does tomcat output a warning message anywhere once it hits the "maxConnections" limit? (We didn't notice anything.)
  • Is it possible there's an OS limit we're hitting? We're using CentOS 6.4 (Linux) and "ulimit -f" says "unlimited". (Do firewalls understand the concept of TCP/IP connections? Could there be a limit elsewhere?)
  • What happens when tomcat hits the "maxConnections" limit? Does it try to close down some inactive connections? If not, why not? I don't like the idea that our server can be held to ransom by people leaving their browsers open on it, sending keep-alives to keep the connection open.
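One diagnostic step worth taking when the outage recurs (not from the question, just a suggestion) is to see what state the server's sockets are in. A minimal sketch, assuming web traffic on ports 80/443 and `netstat -ant`-style output; `count_states` is a hypothetical helper name:

```shell
# count_states: read `netstat -ant`-style lines on stdin and count
# connections per TCP state on the web ports (80/443 assumed here).
# A pile-up of ESTABLISHED or CLOSE_WAIT sockets is a first hint that a
# connection limit (Tomcat's maxConnections, or the OS fd limit) is in play.
count_states() {
  awk '$4 ~ /:(80|443)$/ {count[$6]++} END {for (s in count) print s, count[s]}'
}
# Usage on the live box (not run here):
#   netstat -ant | count_states
```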

But the main question is, "How do we fix our server?"

More info as requested by Stefan and Sharpy:

  • Our clients communicate directly with this server
  • TCP connections were in some cases immediately refused and in other cases timed out
  • The problem is evident even when connecting my browser to the server from within the network, and the hot standby server - also in the same network - is unable to send database replication messages, which normally happens over HTTP
  • iptables - yes; ip6tables - I don't think so. In any case, there's nothing between my browser and the server when I test after noticing the problem.

More info: It really looked like we had solved the problem when we realised we were using the default Tomcat7 setting of BIO, which has one thread per connection, and we had maxThreads=200. In fact 'netstat -an' showed about 297 connections, which matches 200 + queue of 100. So we changed this to NIO and restarted tomcat. Unfortunately the same problem occurred the following day. It's possible we misconfigured the server.xml.

The server.xml and extract from catalina.out is available here: https://www.dropbox.com/sh/sxgd0fbzyvuldy7/AACZWoBKXNKfXjsSmkgkVgW_a?dl=0

More info: I did a load test. I'm able to create 500 connections from my development laptop, and do an HTTP GET 3 times on each, without any problem. Unless my load test is invalid (the Java class is also in the above link).

Asked Sep 10 '14 by Tim Cooper


People also ask

What is Tomcat acceptCount?

acceptCount: the maximum number of incoming TCP connection requests that can wait in a queue at the OS level when there are no worker threads available. The default value is 100.

What is Maxthreads in Tomcat?

By default, if the maximum threads value is not set, Tomcat uses a default of 200 maximum threads. Here is an example: <Connector connectionTimeout="20000" maxThreads="400" port="8080" protocol="HTTP/1.1" redirectPort="8443" />

How many sessions can Tomcat handle?

The default installation of Tomcat sets the maximum number of HTTP servicing threads at 200. Effectively, this means that the system can handle a maximum of 200 simultaneous HTTP requests.


3 Answers

It's hard to tell for sure without hands-on debugging, but one of the first things I would check is the file descriptor limit (that's ulimit -n). TCP connections consume file descriptors, and depending on which implementation is in use, NIO connections that do polling using SelectableChannel may eat several file descriptors per open socket.

To check if this is the cause:

  • Find Tomcat PIDs using ps
  • Check the ulimit the process runs with: cat /proc/<PID>/limits | fgrep 'open files'
  • Check how many descriptors are actually in use: ls /proc/<PID>/fd | wc -l

If the number of used descriptors is significantly lower than the limit, something else is the cause of your problem. But if it is equal or very close to the limit, it's this limit which is causing issues. In this case you should increase the limit in /etc/security/limits.conf for the user with whose account Tomcat is running and restart the process from a newly opened shell, check using /proc/<PID>/limits if the new limit is actually used, and see if Tomcat's behavior is improved.
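The check described above can be wrapped into a small script. A sketch, assuming a Linux /proc filesystem; `fd_report` is a hypothetical helper name, and the PID defaults to the current shell just so it runs anywhere - point it at the Tomcat PID in practice:

```shell
# fd_report: show a process's soft open-file limit vs. file descriptors
# actually in use, from /proc (Linux only).
fd_report() {
  pid=${1:-$$}   # default to the current shell; pass the Tomcat PID in practice
  limit=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits")
  used=$(ls "/proc/$pid/fd" | wc -l)
  echo "pid=$pid used=$used soft_limit=$limit"
}
# Usage: fd_report "$(pgrep -f org.apache.catalina.startup.Bootstrap | head -n1)"
```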

Answered Nov 07 '22 by Michał Kosmulski


While I don't have a direct answer to solve your problem, I'd like to offer my methods to find what's wrong.

Intuitively there are 3 assumptions:

  1. If your clients hold their connections and never release them, it is quite possible your server hits the max connection limit even when there is no communication.
  2. The non-responding state can also be reached via various ways such as bugs in the server-side code.
  3. The hardware conditions should not be ignored.

To locate the cause of this problem, try to reproduce the scenario in a testing environment. Perform more comprehensive tests and record more detailed logs, including but not limited to:

  • Unit tests, esp. logic blocks using transactions, threading and synchronizations.
  • Stress-oriented tests. Try to simulate all the user behaviors you can come up with, and their combinations, and test them in a massive batch mode.
  • More detailed logging. Trace client behavior and analyze what happened exactly before the server stopped responding.
  • Replace a server machine and see if it will still happen.

Answered Nov 07 '22 by lowatt


The short answer:

  • Use the NIO connector instead of the default BIO connector
  • Set "maxConnections" to something suitable e.g. 10,000
  • Encourage users to use HTTPS so that intermediate proxy servers can't turn 100 page requests into 100 tcp connections.
  • Check for threads hanging due to deadlock problems, e.g. with a stack dump (kill -3)
  • (If applicable, and if you're not already doing this, write your client app to use a single connection for multiple page requests.)
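The stack-dump check in the list above can be scripted. A sketch, assuming a standard Tomcat process started via Bootstrap and a JDK with jstack on the PATH; `thread_states` is a hypothetical helper for summarizing the dump:

```shell
# Take a JVM thread dump and summarize thread states to spot a pool
# stuck in BLOCKED/WAITING (e.g. a deadlock).
# With `kill -3 <pid>` (SIGQUIT) the dump goes to catalina.out;
# with `jstack <pid>` it goes to stdout.
thread_states() {
  grep -o 'java.lang.Thread.State: [A-Z_]*' | sort | uniq -c
}
# Usage on the live box (not run here):
#   TOMCAT_PID=$(pgrep -f 'org.apache.catalina.startup.Bootstrap' | head -n1)
#   jstack "$TOMCAT_PID" | thread_states
```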

The long answer:

We were using the BIO connector instead of NIO connector. The difference between the two is that BIO is "one thread per connection" and NIO is "one thread can service many connections". So increasing "maxConnections" was irrelevant if we didn't also increase "maxThreads", which we didn't, because we didn't understand the BIO/NIO difference.

To change it to NIO, put this in the <Connector> element in server.xml: protocol="org.apache.coyote.http11.Http11NioProtocol"
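Put together with the other settings from the question, the switched connector would look roughly like this (a sketch: the protocol string, maxThreads, and maxConnections are from the question; the port and remaining attributes are illustrative):

```xml
<!-- Sketch of the Connector after switching to NIO. The protocol,
     maxThreads, and maxConnections values come from the question;
     the other attributes are illustrative placeholders. -->
<Connector port="80"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           maxConnections="10000"
           connectionTimeout="20000"
           redirectPort="443" />
```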

From what I've read, there's no benefit to using BIO so I don't know why it's the default. We were only using it because it was the default and we assumed the default settings were reasonable and we didn't want to become experts in tomcat tuning to the extent that we now have.

HOWEVER: Even after making this change, we had a similar occurrence: on the same day, HTTPS became unresponsive even while HTTP was working, and then a little later the opposite occurred. Which was a bit depressing. We checked in 'catalina.out' that in fact the NIO connector was being used, and it was. So we began a long period of analysing 'netstat' and wireshark. We noticed some periods of high spikes in the number of connections - in one case up to 900 connections when the baseline was around 70. These spikes occurred when we synchronised our databases between the main production server and the "appliances" we install at each customer site (schools). The more we did the synchronisation, the more we caused outages, which caused us to do even more synchronisations in a downward spiral.

What seems to be happening is that the NSW Education Department proxy server splits our database synchronisation traffic into multiple connections so that 1000 page requests become 1000 connections, and furthermore they are not closed properly until the TCP 4 minute timeout. The proxy server was only able to do this because we were using HTTP. The reason they do this is presumably load balancing - they thought by splitting the page requests across their 4 servers, they'd get better load balancing. When we switched to HTTPS, they are unable to do this and are forced to use just one connection. So that particular problem is eliminated - we no longer see a burst in the number of connections.

People have suggested increasing "maxThreads". In fact this would have improved things but this is not the 'proper' solution - we had the default of 200, but at any given time, hardly any of these were doing anything, in fact hardly any of these were even allocated to page requests.

Answered Nov 07 '22 by Tim Cooper