I use the jaxws-maven-plugin
v2.1 to generate a web service class. I also include the following Jetty dependencies:
org.eclipse.jetty:jetty-server
org.eclipse.jetty:jetty-servlet
org.eclipse.jetty:jetty-http-spi
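In pom.xml these correspond to roughly the following dependency block (a sketch; the coordinates are assumed from the names above):

```xml
<!-- Sketch of the dependency block; coordinates assumed from the names above -->
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-server</artifactId>
  <version>9.2.10.v20150310</version>
</dependency>
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-servlet</artifactId>
  <version>9.2.10.v20150310</version>
</dependency>
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-http-spi</artifactId>
  <version>9.2.10.v20150310</version>
</dependency>
```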
(The version in question is 9.2.10.v20150310, but apart from knowing it's Jetty 9, the question transcends the particular minor version of Jetty.)
Using the following code, I can successfully publish the service:
System.setProperty("com.sun.net.httpserver.HttpServerProvider",
        "org.eclipse.jetty.http.spi.JettyHttpServerProvider");

final String url = "http://localhost/SlipperySoap";
final SlipperySoap service = new SlipperySoapImpl();
final Endpoint endpoint = Endpoint.publish(url, service);
From time to time, but on a fairly regular basis, I get warnings out of Jetty:
badMessage: java.lang.IllegalStateException: too much data after closed for HttpChannelOverHttp@38f120bc{r=1,a=IDLE,uri=-}
These warnings often occur in batches, as the following timestamps show:
08:33:43.510 [pool-1-thread-4641] WARN : HttpParser: badMessage: ...
08:33:47.778 [pool-1-thread-4556] WARN : HttpParser: badMessage: ...
08:33:48.340 [pool-1-thread-4612] WARN : HttpParser: badMessage: ...
08:33:49.037 [pool-1-thread-4567] WARN : HttpParser: badMessage: ...
08:33:49.112 [pool-1-thread-4721] WARN : HttpParser: badMessage: ...
08:33:49.242 [pool-1-thread-4579] WARN : HttpParser: badMessage: ...
08:33:49.344 [pool-1-thread-4698] WARN : HttpParser: badMessage: ...
08:33:49.470 [pool-1-thread-4504] WARN : HttpParser: badMessage: ...
08:33:50.260 [pool-1-thread-4553] WARN : HttpParser: badMessage: ...
08:33:54.834 [pool-1-thread-4721] WARN : HttpParser: badMessage: ...
08:34:01.194 [pool-1-thread-4763] WARN : HttpParser: badMessage: ...
08:34:01.675 [pool-1-thread-4715] WARN : HttpParser: badMessage: ...
08:34:02.262 [pool-1-thread-4504] WARN : HttpParser: badMessage: ...
08:34:02.880 [pool-1-thread-4699] WARN : HttpParser: badMessage: ...
08:34:05.530 [pool-1-thread-4570] WARN : HttpParser: badMessage: ...
08:34:09.634 [pool-1-thread-4570] WARN : HttpParser: badMessage: ...
08:34:10.081 [pool-1-thread-4504] WARN : HttpParser: badMessage: ...
08:34:10.298 [pool-1-thread-4738] WARN : HttpParser: badMessage: ...
08:34:13.508 [pool-1-thread-4688] WARN : HttpParser: badMessage: ...
08:34:23.360 [pool-1-thread-4755] WARN : HttpParser: badMessage: ...
08:34:23.557 [pool-1-thread-4717] WARN : HttpParser: badMessage: ...
I am trying to understand whether this is a legitimate problem that causes these requests to go unhandled, or a benign warning that can be suppressed and ignored.
If it is a legitimate problem, is there any action I can take on the server side to eliminate it, or is this the fault of a misbehaving client?
Note that I have no control over the clients that connect to this web service, so any solution must be effected on the server side.
There is a Stack Overflow post, "Jetty too much data after closed for HttpChannelOverHttp", but the answers there are low quality and don't seek to understand the core problem. I tried changing Jetty's default buffer sizes, but that did nothing to alleviate the problem (which doesn't surprise me).
A Google Groups discussion suggests:
This is just a warning and should be ok. IIRC this happens when a connection is forcefully closed.
This doesn't provide enough detail to satisfy me that it's a benign message: there is data in a buffer that is discarded rather than processed, so I need to understand what causes this situation. Why would the other end send a request and then close the connection before reading the response? Is there any other explanation for this message?
The Jetty source code says this:
/**
 * Parse until next Event.
 * @return True if an {@link RequestHandler} method was called and it returned true;
 */
public boolean parseNext(ByteBuffer buffer)
{
    ...
    // handle end states
    if (_state==State.END)
    {
        // eat white space
        while (buffer.remaining()>0 && buffer.get(buffer.position())<=HttpTokens.SPACE)
            buffer.get();
    }
    else if (_state==State.CLOSED)
    {
        if (BufferUtil.hasContent(buffer))
        {
            // Just ignore data when closed
            _headerBytes+=buffer.remaining();
            BufferUtil.clear(buffer);
            if (_maxHeaderBytes>0 && _headerBytes>_maxHeaderBytes)
            {
                // Don't want to waste time reading data of a closed request
                throw new IllegalStateException("too much data after closed");
            }
        }
    }
    ...
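The CLOSED-state branch above can be re-created in miniature to see when the warning actually fires: leftover bytes are counted and discarded, and the exception is thrown only once the accumulated count exceeds the header limit. This is a sketch, not Jetty's code; the 8192 limit is an assumed default.

```java
import java.nio.ByteBuffer;

// Minimal re-creation (not Jetty's code) of the CLOSED-state branch:
// leftover bytes are counted and discarded, and the exception fires only
// once the accumulated count exceeds the header limit.
public class ClosedStateSketch {
    static int headerBytes = 0;
    static final int MAX_HEADER_BYTES = 8192; // assumed default header limit

    static void parseWhenClosed(ByteBuffer buffer) {
        if (buffer.hasRemaining()) {
            headerBytes += buffer.remaining();
            buffer.position(buffer.limit()); // discard the data (Jetty uses BufferUtil.clear)
            if (headerBytes > MAX_HEADER_BYTES) {
                throw new IllegalStateException("too much data after closed");
            }
        }
    }

    public static void main(String[] args) {
        parseWhenClosed(ByteBuffer.wrap(new byte[4096])); // tolerated, just discarded
        try {
            parseWhenClosed(ByteBuffer.wrap(new byte[8192])); // pushes total past the limit
        } catch (IllegalStateException e) {
            System.out.println("warning raised: " + e.getMessage());
        }
    }
}
```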
This again raises the question: why is there data left to be processed after the state has progressed to CLOSED? Is this the signature of a misbehaving client that isn't waiting to read the response before closing the TCP connection?
This is the result of a client that issues a request where the HTTP exchange (a request and/or response header) dictates that the connection should close after the exchange, but the client keeps sending content after the request's headers and body.
Most common scenario: a badly written client expects the connection to be persistent, doesn't follow the HTTP spec and look at the headers indicating the connection state, and proceeds to issue another request.
Another common scenario: a badly written client sends the request headers and (optional) request body, but also sends a few extra bytes past the end of the body. Those extra bytes would be interpreted as another request, but because the initial exchange closes the connection, parsing them produces the "too much data after closed" warning.
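Both scenarios amount to the same thing on the wire: bytes arrive after an exchange that closes the connection. The sketch below (illustrative only; the request is made up) builds such a byte stream and shows the leftover that Jetty's parser would discard and count toward the warning.

```java
import java.nio.charset.StandardCharsets;

public class ExtraBytesDemo {
    public static void main(String[] args) {
        // A request whose headers say the connection will close after this exchange...
        String exchange =
            "POST /SlipperySoap HTTP/1.1\r\n" +
            "Host: localhost\r\n" +
            "Connection: close\r\n" +
            "Content-Length: 5\r\n" +
            "\r\n" +
            "hello";
        // ...followed by a pipelined request from a client that wrongly
        // assumed the connection would stay persistent.
        String extra = "POST /SlipperySoap HTTP/1.1\r\n";
        byte[] wire = (exchange + extra).getBytes(StandardCharsets.US_ASCII);

        // Once the parser has consumed the declared exchange its state is CLOSED,
        // yet bytes remain in the buffer -- exactly the condition the Jetty
        // snippet above discards and counts toward "too much data after closed".
        int consumed = exchange.getBytes(StandardCharsets.US_ASCII).length;
        System.out.println("bytes left after closed exchange: " + (wire.length - consumed));
    }
}
```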