I tested HttpServletResponse#flushBuffer and PrintWriter#flush on Tomcat 7 with the servlet below, but the response seemed to ignore them instead of flushing the content over the wire as soon as possible, as I expected.
import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/HelloServlet")
public class HelloServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter pw = response.getWriter();
        pw.println("say hi now");
        pw.flush();
        response.flushBuffer();

        try {
            Thread.sleep(5000);
        } catch (Exception e) {
        }

        pw.println("say bye in 5 seconds");
    }
}
The browser displayed "hi" and "bye" together after the delay. Is this a misbehavior, or is it intended?
@EDIT
Following @Tomasz Nurkiewicz's suggestion, I tested again with curl, and the issue was gone. It seems that standard browsers and TCP/IP monitors buffer small pieces of content from the same HTTP response and render them together.
@EDIT 2
It is also observed that both HttpServletResponse#flushBuffer and PrintWriter#flush cause Tomcat 7 to send the client chunked data (Transfer-Encoding: chunked).
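The switch to chunked encoding on an early flush can be reproduced without Tomcat, using the JDK's built-in com.sun.net.httpserver as a stand-in for the container (the class name ChunkedFlushDemo and the /HelloServlet path are illustrative, not from the original servlet). Declaring a response length of 0 makes the JDK server stream chunks as they are written, which is analogous to what Tomcat does once you flush before the body is complete:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ChunkedFlushDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the servlet container: the JDK's built-in HTTP server.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/HelloServlet", exchange -> {
            exchange.getResponseHeaders().set("Content-Type", "text/plain");
            // A declared length of 0 switches this server to chunked encoding,
            // mirroring what Tomcat does when the body length is unknown at flush time.
            exchange.sendResponseHeaders(200, 0);
            OutputStream body = exchange.getResponseBody();
            body.write("say hi now\n".getBytes(StandardCharsets.UTF_8));
            body.flush(); // the first chunk leaves immediately
            body.write("say bye\n".getBytes(StandardCharsets.UTF_8));
            body.close(); // sends the terminating zero-length chunk
        });
        server.start();

        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/HelloServlet");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("Transfer-Encoding: " + conn.getHeaderField("Transfer-Encoding"));
        try (InputStream in = conn.getInputStream()) {
            System.out.print(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
        server.stop(0);
    }
}
```

On the wire, each flushed piece travels as its own chunk; whether the client shows the pieces separately is then entirely up to the client, which is why curl and a browser behave differently.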
I just had this same issue. To stop browsers from waiting until the page finishes loading before they render anything, you need to start with:
response.setContentType("text/html;charset=UTF-8");
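As a sketch of why the ordering matters: headers such as Content-Type only reach the client if they are set before the response is committed. The same effect can be seen with the JDK's built-in HTTP server (the class name and handler below are illustrative, not part of the original servlet):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ContentTypeDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            // The header must be set BEFORE the response is committed by
            // sendResponseHeaders(); afterwards it would be silently ignored,
            // just as setContentType() has no effect after the first flush.
            exchange.getResponseHeaders().set("Content-Type", "text/html;charset=UTF-8");
            byte[] body = "<p>say hi now</p>".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.getResponseBody().close();
        });
        server.start();

        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("Content-Type: " + conn.getContentType());
        conn.getInputStream().close();
        server.stop(0);
    }
}
```

Without a declared content type, some browsers also buffer the start of the body to sniff the type before rendering, which adds to the delay you see.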
The API documentation for flushBuffer() is very precise:
Forces any content in the buffer to be written to the client. A call to this method automatically commits the response, meaning the status code and headers will be written.
So either Tomcat is not implemented according to the spec (buffers more aggressively and holds flushes if they are too small) or the client (browser) waits for more input before actually rendering it.
Can you try with curl or nc instead?
I had that issue too, and I also found that it goes away with curl. With some sniffing, it turned out that the culprit is gzip encoding. In order to compress the response, gzip waits until the underlying PrintWriter is closed (that is, until the full response is written) and then produces the compressed output. On the client side, this means that you do not get anything back until the full response is ready. Curl, on the other hand, does not send an Accept-Encoding: gzip header to the server, which is why it works and you get the chunked output normally, as intended.
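This buffering can be demonstrated with java.util.zip.GZIPOutputStream alone (a minimal sketch; the ByteArrayOutputStream stands in for the network socket): with the default syncFlush = false, flush() pushes nothing through the deflater, so a client behind a gzip layer sees no payload until the stream is finished.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipBufferingDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream wire = new ByteArrayOutputStream(); // stands in for the socket
        // Default constructor: syncFlush = false, so flush() does not
        // force the deflater to emit its buffered input.
        GZIPOutputStream gzip = new GZIPOutputStream(wire);
        gzip.write("say hi now\n".getBytes(StandardCharsets.UTF_8));
        gzip.flush(); // only the 10-byte gzip header is on the wire so far
        int afterFlush = wire.size();
        gzip.close(); // finishing the stream finally emits the compressed payload
        int afterClose = wire.size();
        System.out.println("bytes on the wire after flush: " + afterFlush);
        System.out.println("bytes on the wire after close: " + afterClose);
        System.out.println("payload held back until close: " + (afterClose > afterFlush));
    }
}
```

Since Java 7 the two-argument constructor new GZIPOutputStream(wire, true) enables sync-flush, which lets intermediate flushes through at some cost in compression ratio; whether a given container's gzip filter uses it is another matter.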