Stream large responses with jersey, asynchronously

I want to allow clients (including very slow ones) to download large files from a JAX-RS (Jersey) web service, and I'm stuck. It seems the async features in JAX-RS do not support this.

  • AsyncResponse solves the issue if you have to wait for a resource to become available on the server side, but you are allowed to call AsyncResponse.resume(Object) only once. After that, the response is handled normally: slow or malicious clients will block a worker thread until all bytes are transferred. No async IO here.
  • ChunkedOutput in Jersey stores the chunks in an unbounded in-memory queue and does not offer any public interface to check the size of that queue. It is designed for a slow stream of small chunks; enough slow clients will eventually cause an OutOfMemoryError.
  • StreamingOutput is not asynchronous at all. The StreamingOutput.write(OutputStream) method is supposed to block until all bytes are written.
  • The Servlet 3.x API does support what I need, but I cannot find a way to drop down to the servlet level (HttpServletRequest.startAsync) from within a JAX-RS request handler without breaking Jersey's internals -> IllegalStateException.
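
The ChunkedOutput problem from the second bullet can be illustrated with plain Java: an unbounded queue between a fast producer and a slow consumer simply grows without limit. (The ChunkBuffer class below is a hypothetical stand-in for illustration, not Jersey's actual implementation.)

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Stand-in for ChunkedOutput's internal buffer: write() never blocks and never
// drops data, so a slow reader lets the queue grow without bound.
class ChunkBuffer {
    private final Queue<byte[]> queue = new ArrayDeque<>();
    void write(byte[] chunk) { queue.add(chunk); }  // producer side, unbounded
    byte[] poll() { return queue.poll(); }          // consumer (client) side
    int pendingChunks() { return queue.size(); }
}

public class UnboundedQueueDemo {
    // Produce `produced` chunks while the client drains only `drained` of them,
    // and report how many chunks are left sitting in memory.
    static int backlog(int produced, int drained) {
        ChunkBuffer buffer = new ChunkBuffer();
        for (int i = 0; i < produced; i++) buffer.write(new byte[1024]);
        for (int i = 0; i < drained; i++) buffer.poll();
        return buffer.pendingChunks();
    }

    public static void main(String[] args) {
        // A fast producer and a slow client: 10,000 chunks written, 100 read.
        System.out.println(backlog(10_000, 100)); // prints 9900
    }
}
```

With enough concurrent slow clients, each holding such a backlog, the heap fills up.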

Am I not seeing the obvious solution?

asked Nov 22 '16 by defnull

2 Answers

With reasonably new versions of Jersey and Jetty, the following works:

  • Inject @Suspended AsyncResponse into your JAX-RS request handler method. This tells Jersey to enter async mode and keep the request open.
  • Inject @Context HttpServletRequest to get access to servlet-level APIs.
  • Call HttpServletRequest.getAsyncContext() instead of HttpServletRequest.startAsync(): Jersey has already switched to async mode, and doing so again results in an IllegalStateException (that was my problem from above).
  • Use this AsyncContext as you would in a servlet environment. Jersey does not complain.
  • Once you are done, call AsyncContext.complete() and then AsyncResponse.cancel(). The latter is optional, I think.

I managed to serve a 10GB file to 100 concurrent clients this way. The thread count never exceeded ~40 threads and memory consumption stayed low. Throughput was about 3GB/s on my laptop, which is kinda impressive.

@GET
public void doAsync(@Suspended final AsyncResponse asyncResponse,
                    @Context HttpServletRequest servletRequest)
        throws IOException {
    // Jersey already called startAsync() for us; calling it again would throw.
    assert servletRequest.isAsyncStarted();
    final AsyncContext asyncContext = servletRequest.getAsyncContext();
    final ServletOutputStream s = asyncContext.getResponse().getOutputStream();

    s.setWriteListener(new WriteListener() {

        volatile boolean done = false;

        @Override
        public void onWritePossible() throws IOException {
            while (s.isReady()) {
                if (done) {
                    asyncContext.complete();
                    asyncResponse.cancel();
                    break;
                } else {
                    s.write(...); // write the next chunk of the payload here
                    done = true;
                }
            }
        }

        @Override
        public void onError(Throwable t) {
            // WriteListener requires this method; clean up if the client aborts.
            asyncContext.complete();
        }
    });
}
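
The handshake driving the WriteListener above can be simulated without a servlet container. In this sketch (plain Java; SlowSink is a hypothetical stand-in for ServletOutputStream, not a real servlet class), the "client" drains only a few bytes per network window, and the write loop runs only while isReady() is true, so no thread ever blocks waiting on the socket:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Stand-in for ServletOutputStream: accepts a few bytes per "network window",
// then reports isReady() == false until the window is refilled by the container.
class SlowSink {
    private final ByteArrayOutputStream received = new ByteArrayOutputStream();
    private int window = 0;
    private Runnable writeListener;

    void setWriteListener(Runnable listener) { this.writeListener = listener; refill(); }
    boolean isReady() { return window > 0; }
    void write(byte[] chunk) {
        received.write(chunk, 0, chunk.length);
        window -= chunk.length; // may go negative, like a real socket buffer
    }
    // Called by the "container" when the client has drained some bytes.
    void refill() { window = 4; writeListener.run(); }
    byte[] received() { return received.toByteArray(); }
}

public class BackpressureDemo {
    // Serve the whole payload through the slow sink, 4 bytes per window,
    // using the same loop shape as onWritePossible() above.
    static byte[] serve(byte[] payload) {
        SlowSink sink = new SlowSink();
        int[] offset = {0};
        sink.setWriteListener(() -> {
            while (sink.isReady() && offset[0] < payload.length) {
                int n = Math.min(4, payload.length - offset[0]);
                byte[] chunk = new byte[n];
                System.arraycopy(payload, offset[0], chunk, 0, n);
                sink.write(chunk);
                offset[0] += n;
            }
        });
        // Simulate the client slowly draining the socket.
        while (offset[0] < payload.length) sink.refill();
        return sink.received();
    }

    public static void main(String[] args) {
        byte[] out = serve("HELLO, ASYNC WORLD".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(out, StandardCharsets.UTF_8)); // prints HELLO, ASYNC WORLD
    }
}
```

The key property is the same as in the real servlet API: when the window is empty, the write loop returns instead of blocking, and progress resumes only when the container calls back.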
answered Nov 01 '22 by defnull

I had a similar problem. I needed to transfer big amounts of data between two instances of my application. Initially I used the simple StreamingOutput approach, but I soon realized it would not work: the client side was quite a bit slower than the server side, and I kept getting a TimeoutException. I was able to solve this by configuring my Grizzly server; that way I can transfer hundreds of megabytes with the StreamingOutput approach. My code for setting the timeout looks like this (server is the Grizzly HttpServer instance):

Collection<NetworkListener> listeners = server.getListeners();
for (NetworkListener listener : listeners) {
    final TCPNIOTransport transport = listener.getTransport();
    transport.setKeepAlive(true);
    // No write timeout, so slow clients are not dropped mid-transfer.
    transport.setWriteTimeout(0, TimeUnit.MINUTES);
}
answered Nov 01 '22 by osoitza