
How efficient is Apache Tomcat for Long Polling?

I was going through this question about long polling, where, apart from the accepted solution, an interesting point was made about how inefficient Apache is at handling a large number of long-held requests. I have the same concern about Apache Tomcat.

Is Apache Tomcat efficient enough to handle long polling? I know that Tomcat supports a fairly large number of concurrent threads, but does it scale well enough that we can use it for long polling in the way the thread mentioned above describes?

asked Apr 03 '12 by Bagira

People also ask

What is the advantage of Apache Tomcat?

Apache Tomcat implements Java Servlet, JavaServer Pages (JSP), and the WebSockets Application Programming Interface (API). Essentially, it's a pure Java HTTP web server that enables Java code, and thus gives your website more cross-platform freedom than some of its alternatives.

How many requests can Tomcat handle per second?

The default installation of Tomcat sets the maximum number of HTTP servicing threads at 200. Effectively, this means that the system can handle a maximum of 200 simultaneous HTTP requests.
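
That limit is governed by the maxThreads attribute on the <Connector> element in conf/server.xml. As a rough sketch of the same idea, the setting can also be applied programmatically when embedding Tomcat; the value 400 below is purely illustrative:

    // Sketch: raising the request-processing thread limit with embedded Tomcat.
    // In a standard installation you would instead edit the maxThreads attribute
    // of the <Connector> element in conf/server.xml.
    import org.apache.catalina.connector.Connector;
    import org.apache.catalina.startup.Tomcat;

    public class EmbeddedTomcatThreads {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            Connector connector = new Connector("HTTP/1.1");
            connector.setPort(8080);
            connector.setProperty("maxThreads", "400"); // mirrors the server.xml attribute
            tomcat.getService().addConnector(connector);
            tomcat.start();
            tomcat.getServer().await();
        }
    }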

What is long polling and why would it be beneficial to use?

HTTP long polling is a mechanism where the client makes a request and the server holds it open, responding only when it has new data. The information is effectively pushed to the client as it becomes available, which makes it near real-time. However, it works best if messages from the server are relatively infrequent.

Which method is used for improving performance of Tomcat with a database?

To improve performance, Tomcat is configured by default to cache static resources. However, the cache must be configured to be large enough to provide real savings. To tune Tomcat's cache settings, find the Context element (in server.xml or context.xml) and adjust its cache attributes.
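
The exact attribute location varies by version (on <Context> in Tomcat 7, on a nested <Resources> element in Tomcat 8+). As a rough embedded-Tomcat sketch of the same idea, assuming Tomcat 8 or later; the paths and sizes are made up for illustration:

    // Sketch: enlarging Tomcat's static-resource cache with embedded Tomcat 8+.
    // In a normal installation you would set cacheMaxSize on the <Resources>
    // element (Tomcat 8+) or the <Context> element (Tomcat 7) instead.
    import org.apache.catalina.Context;
    import org.apache.catalina.startup.Tomcat;
    import org.apache.catalina.webresources.StandardRoot;

    public class CachedStaticContent {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            tomcat.setPort(8080);
            tomcat.getConnector(); // ensure the default HTTP connector is created
            // docBase is illustrative; a real app also needs a default servlet
            // mapping to actually serve the files.
            Context ctx = tomcat.addContext("", "/var/www/static");
            StandardRoot resources = new StandardRoot(ctx);
            resources.setCacheMaxSize(100 * 1024); // cache size in kilobytes (~100 MB)
            ctx.setResources(resources);
            tomcat.start();
            tomcat.getServer().await();
        }
    }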


2 Answers

Are you referring to this comment on the question,

running this on a regular web-server like Apache will quickly tie up all the "worker threads" and
leave it unable to respond to other requests

Recent versions of Apache Tomcat support Comet, which uses non-blocking I/O to allow Tomcat to scale to a large number of concurrent requests. From this article:

Thanks to the non-blocking I/O capability introduced in Java 1.4's New I/O APIs for the Java Platform (NIO) package, a persistent HTTP connection doesn't require that a thread be constantly attached to it. Threads can be allocated to connections only when requests are being processed. When a connection is idle between requests, the thread can be recycled, and the connection is placed in a centralized NIO select set to detect new requests without consuming a separate thread. This model, called thread per request, potentially allows Web servers to handle a growing number of user connections with a fixed number of threads. With the same hardware configuration, Web servers running in this mode scale much better than in the thread-per-connection mode. Today, popular Web servers -- including Tomcat, Jetty, GlassFish (Grizzly), WebLogic, and WebSphere -- all use thread per request through Java NIO.
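
To make this concrete, here is a minimal long-polling servlet sketch using the standard Servlet 3.0 async API, which recent Tomcat versions support alongside the older Comet interface. The class, field, and method names are illustrative, not taken from any of the sources above:

    // Minimal long-polling sketch with Servlet 3.0 async support (javax.servlet).
    // While a client is parked in "waiting", no container thread is held for it.
    import java.io.IOException;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/poll", asyncSupported = true)
    public class LongPollServlet extends HttpServlet {
        private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();  // detach from the request-processing thread
            ctx.setTimeout(30_000);               // client re-polls after 30 s of silence
            waiting.add(ctx);
        }

        // Called by application code when new data arrives (e.g. a ticker update).
        public void publish(String message) {
            AsyncContext ctx;
            while ((ctx = waiting.poll()) != null) {
                try {
                    ctx.getResponse().getWriter().write(message);
                    ctx.complete();               // release the held request
                } catch (Exception ignored) {
                    // client gone or context already timed out; nothing to do
                }
            }
        }
    }

A production version would also register an AsyncListener so that timed-out or errored contexts are removed from the queue rather than being completed twice.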

answered by sbridges


See this report comparing Tomcat and Jetty for Comet:

  • Tomcat tends to have slightly better performance when there are few very busy connections. It has a slight advantage in request latency, which is most apparent when many requests/responses are sent over a few connections without any significant idle time.

  • Jetty tends to have better scalability when there are many connections with significant idle time, as is the situation for most web sites. Jetty's small memory footprint and advanced NIO usage allow a larger number of users per unit of available memory. Also, the smaller footprint means that less memory and CPU cache is consumed by the servlet container, and more cache is available to speed the execution of non-trivial applications.

  • Jetty also has better performance when serving static content, as Jetty is able to use advanced memory-mapped file buffers combined with NIO gather writes to instruct the operating system to send file content at maximum DMA speed without entering user memory space or the JVM.

If your application will have periods with idle connections, or clients that are simply waiting for a response from the server, then Jetty would be a better choice than Tomcat. An example would be a stock market ticker, where the clients send few requests and mostly just wait for updates.
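
On the client side of such a ticker, long polling is just a loop that keeps one request outstanding and re-issues it whenever a response or timeout arrives. A rough sketch with java.net.HttpURLConnection; the URL and timeout values are made up for illustration:

    // Illustrative long-polling client loop: one outstanding request at a time,
    // re-issued as soon as a response (or timeout) arrives.
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.SocketTimeoutException;
    import java.net.URL;

    public class TickerClient {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/poll"); // hypothetical endpoint
            while (true) {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setReadTimeout(35_000); // a bit longer than the server-side hold
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println("update: " + line);
                    }
                } catch (SocketTimeoutException e) {
                    // No update within the window; simply poll again.
                } catch (IOException e) {
                    Thread.sleep(1_000); // transient error; back off briefly before retrying
                } finally {
                    conn.disconnect();
                }
            }
        }
    }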

Additionally, the Jetty team pioneered Comet, and most of the information and examples I've found focus on Jetty. We've run Jetty as a Comet server since 2008 and have been happy with the results.

The other thing to consider is that Jetty is designed as a standalone web server. This means you don't need an Apache server in front of Jetty. We run Jetty standalone on port 80 to serve all of our application's requests, including the Comet requests.

If you use Tomcat for Comet requests, you'll most likely need to allow clients direct access to port 8080 and bypass Apache, since a fronting Apache instance can tie up its own worker processes on the long-held connections and defeat the purpose of long polling.

answered by jmort253