I want to allow clients (including very slow clients) to download large files from a JAX-RS (Jersey) web service, and I'm stuck. It seems like the async features in JAX-RS do not support this.
- AsyncResponse solves the issue if you have to wait for a resource to become available on the server side, but you are only allowed to call AsyncResponse.resume(Object) once. After that, the response is handled normally: slow or malicious clients will block a worker thread until all bytes are transferred. No async IO here.
- ChunkedOutput in Jersey stores the chunks in an unbounded in-memory queue and does not offer any public interface to check the size of that queue. It's designed for a slow stream of small chunks; enough slow clients will eventually cause an OutOfMemoryError.
- StreamingOutput is not asynchronous at all. The StreamingOutput.write(OutputStream) method is supposed to block until all bytes are written.
- There seems to be no way to call servlet async IO (HttpServletRequest.startAsync()) from within a JAX-RS request handler without breaking Jersey's internals -> IllegalStateException.
Am I not seeing the obvious solution?
With reasonably new versions of Jersey and Jetty, the following works:
1. Inject @Suspended AsyncResponse into your JAX-RS request handler method. This tells Jersey to enter async mode and keep the request open.
2. Inject @Context HttpServletRequest to access servlet-level APIs.
3. Call HttpServletRequest.getAsyncContext() instead of HttpServletRequest.startAsync(), because Jersey has already switched to async mode and doing so again results in an IllegalStateException (that was my problem from above).
4. Use the AsyncContext as you'd do in a servlet environment. Jersey does not complain.
5. When done, call AsyncContext.complete() and then AsyncResponse.cancel(). The latter is optional, I think.

I managed to serve a 10GB file to 100 concurrent clients this way. The thread count never exceeded ~40 threads and memory consumption was low. The throughput was about ~3GB/s on my laptop, which is kind of impressive.
@GET
public void doAsync(@Suspended final AsyncResponse asyncResponse,
                    @Context HttpServletRequest servletRequest)
        throws IOException {
    assert servletRequest.isAsyncStarted();
    final AsyncContext asyncContext = servletRequest.getAsyncContext();
    final ServletOutputStream s = asyncContext.getResponse().getOutputStream();
    s.setWriteListener(new WriteListener() {
        volatile boolean done = false;

        @Override
        public void onWritePossible() throws IOException {
            // Only write while the container can accept data without blocking.
            while (s.isReady()) {
                if (done) {
                    asyncContext.complete();
                    asyncResponse.cancel();
                    break;
                } else {
                    s.write(...); // write the next chunk of the payload here
                    done = true;
                }
            }
        }

        @Override
        public void onError(Throwable t) {
            // WriteListener requires this method; clean up on failure.
            asyncContext.complete();
        }
    });
}
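The s.write(...) placeholder above is where each chunk of the payload gets written. As a container-free sketch of that per-chunk logic, the following reads a source stream in fixed-size chunks, one chunk per call, the way a WriteListener would between isReady() checks. The class name, method name, and chunk size are my own illustrative choices, not part of the original answer:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedCopy {
    static final int CHUNK_SIZE = 8192;

    // Reads one chunk from the source and writes it out.
    // Returns false once the source is exhausted -- the point at which the
    // WriteListener above would call asyncContext.complete().
    static boolean writeNextChunk(InputStream source, OutputStream out) throws IOException {
        byte[] buf = new byte[CHUNK_SIZE];
        int n = source.read(buf);
        if (n < 0) {
            return false;
        }
        out.write(buf, 0, n);
        return true;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[20_000]; // spans multiple chunks
        new java.util.Random(42).nextBytes(payload);
        InputStream in = new ByteArrayInputStream(payload);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while (writeNextChunk(in, out)) {
            // In the servlet version, each iteration would also check s.isReady().
        }
        System.out.println(java.util.Arrays.equals(payload, out.toByteArray())); // prints true
    }
}
```

The key property is that each call writes at most one bounded chunk, so no thread is ever parked waiting for a slow client, and memory stays bounded by the chunk size rather than the file size.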
I had a similar problem. I needed to transfer big amounts of data between two instances of my application. Initially I used the simple StreamingOutput approach, but I soon realized it would not work: the client side was quite a bit slower than the server side, and I kept getting a TimeoutException. I was able to solve this by adjusting the write timeout on my Grizzly server; that way I can transfer hundreds of megabytes with the StreamingOutput approach. My code for setting the timeout looks like this:
Collection<NetworkListener> listeners = server.getListeners();
for(NetworkListener listener : listeners) {
final TCPNIOTransport transport = listener.getTransport();
transport.setKeepAlive(true);
transport.setWriteTimeout(0, TimeUnit.MINUTES);
}
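For context, here is a sketch of where that snippet fits, assuming the server was created with Jersey's Grizzly container (GrizzlyHttpServerFactory from jersey-container-grizzly2-http); the package name passed to ResourceConfig is hypothetical, and the exact bootstrap is an assumption, not part of the original answer:

```java
import java.net.URI;
import java.util.concurrent.TimeUnit;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
import org.glassfish.jersey.server.ResourceConfig;

public class Main {
    public static void main(String[] args) throws Exception {
        ResourceConfig config = new ResourceConfig().packages("com.example.rest"); // hypothetical package

        // Create the server without starting it, so the transport can be tuned first.
        HttpServer server = GrizzlyHttpServerFactory.createHttpServer(
                URI.create("http://localhost:8080/"), config, false);

        // Disable the write timeout so slow clients no longer trigger TimeoutException.
        for (NetworkListener listener : server.getListeners()) {
            TCPNIOTransport transport = listener.getTransport();
            transport.setKeepAlive(true);
            transport.setWriteTimeout(0, TimeUnit.MINUTES);
        }

        server.start();
    }
}
```

Note the trade-off: with the write timeout disabled, a worker thread still blocks for as long as the slowest client takes, so this helps with timeouts but does not give you the non-blocking IO of the accepted answer.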