I am trying to implement a servlet for streaming large objects:
oracle.sql.BLOB blob = ((oracle.jdbc.OracleResultSet) rs).getBLOB("obj");
InputStream in = blob.getBinaryStream();
int bufferSize = 1024;
byte[] buffer = new byte[bufferSize];
ServletOutputStream out = response.getOutputStream();
int length;
int counter = 0;
while ((length = in.read(buffer)) != -1) {
    out.write(buffer, 0, length);
    counter++;
    if (counter % 10 == 0) {   // flush roughly every 10 KB
        counter = 0;
        response.flushBuffer();
    }
}
This code is supposed to send the data to the client chunk by chunk. What actually happens is that when I stream a large object (100 MB), memory usage goes up, and the server sometimes dies when there is more than one parallel download/stream.
Why is flushBuffer() not sending the data to the client? The client only gets the open/save popup after the response closes.
You have to set the Content-Length header before writing the data; otherwise the server is forced to buffer all of the data until the stream is closed, at which point it can calculate the value itself, write the header, and send the data. As soon as you get the output stream, before you write any data at all, set the content length:
response.setHeader("Content-Length", String.valueOf(blob.length()));
Most servers are smart enough to flush the buffer themselves at this point, so you probably don't even need to call flushBuffer(), although it doesn't hurt.
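
Putting the pieces together, here is a minimal sketch of the corrected streaming path, assuming a plain java.sql.Blob (which also works for Oracle BLOBs) and the same "obj" column as above; the streamBlob method name, the content type, and the 8 KB buffer size are just illustrative choices, not part of the original code:

import java.io.IOException;
import java.io.InputStream;
import java.sql.Blob;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletResponse;

static void streamBlob(ResultSet rs, HttpServletResponse response)
        throws SQLException, IOException {
    Blob blob = rs.getBlob("obj");
    response.setContentType("application/octet-stream");
    // Set Content-Length BEFORE writing any data, so the container
    // can stream the body instead of buffering the whole response.
    response.setHeader("Content-Length", String.valueOf(blob.length()));

    byte[] buffer = new byte[8192];
    try (InputStream in = blob.getBinaryStream()) {
        ServletOutputStream out = response.getOutputStream();
        int length;
        while ((length = in.read(buffer)) != -1) {
            out.write(buffer, 0, length);
        }
        out.flush(); // usually redundant once Content-Length is set, but harmless
    }
}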
First of all, you need a response header on the servlet response so that the container knows after how many bytes the response ends:
response.setHeader("Content-Length", String.valueOf(blob.length()));
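
If your container supports Servlet 3.1 or newer, the same header can be set without formatting the length as a string; whether setContentLengthLong is available depends on your servlet API version, so treat this as an assumption about your environment:

// Servlet 3.1+ equivalent of the setHeader call above;
// assumes your container implements the 3.1 API.
response.setContentLengthLong(blob.length());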