General Use-Case
Imagine a client that is uploading large amounts of JSON. The Content-Type should remain application/json because that describes the actual data. Accept-Encoding and Transfer-Encoding seem to be for telling the server how it should format the response. Responses appear to use the Content-Encoding header explicitly for this purpose, but it does not look like a valid request header.
Is there something I am missing? Has anyone found an elegant solution?
Specific Use-Case
My use-case is that I have a mobile app that generates large amounts of JSON (and, to a lesser extent, some binary data), and compressing the requests saves a large amount of bandwidth. I am using Tomcat as my Servlet container. I am using Spring primarily for its MVC annotations, just to abstract away some of the JEE stuff into a much cleaner, annotation-based interface. I also use Jackson for automatic (de)serialization.
I also use nginx, but I am not sure if that's where I want the decompression to take place. The nginx nodes simply balance the requests, which are then distributed through the data center. It would be just as nice to keep the body compressed until it actually reaches the node that is going to process it.
Thanks in advance,
John
EDIT:
The discussion between @DaSourcerer and me was really helpful, and may interest those who are curious about the state of things at the time of writing.
I ended up implementing a solution of my own. Note that this specifies the branch "ohmage-3.0", but it will soon be merged into the master branch. You might want to check there to see if I have made any updates/fixes.
https://github.com/ohmage/server/blob/ohmage-3.0/src/org/ohmage/servlet/filter/DecompressionFilter.java
HTTP compression allows content to be compressed on the server before transmission to the client. For resources such as text this can significantly reduce the size of the response message, leading to reduced bandwidth requirements and download times.
If the client sends the same request with a compressed body, setting Content-Encoding: gzip, there are two possible outcomes: if the server supports gzip decompression, it will be able to process the request and everything works as expected; if it does not, it will typically reject the request (for example with 415 Unsupported Media Type), because it cannot decode the body.
To compress the HTTP request body, the web service client must gzip the body and attach an HTTP header indicating that the request body is gzip-compressed when sending the request message. This header handling (and the compression itself) has to be implemented in the client application.
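For illustration, here is a minimal client-side sketch in Java (the endpoint URL is hypothetical) that compresses a JSON body with GZIPOutputStream and marks it with the Content-Encoding header:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipRequestClient
{
    public static void main(String[] args) throws Exception
    {
        byte[] json = "{\"key\":\"value\"}".getBytes(StandardCharsets.UTF_8);

        // Hypothetical endpoint; replace with your own service URL.
        HttpURLConnection conn = (HttpURLConnection) new URL("http://example.com/api/upload").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        // Tell the server the request body itself is gzip-compressed.
        conn.setRequestProperty("Content-Encoding", "gzip");

        try (OutputStream out = conn.getOutputStream();
             GZIPOutputStream gzip = new GZIPOutputStream(out))
        {
            gzip.write(json);
        }

        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}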
Gzip is a file format and compression utility that originated on Unix and Unix-like systems; on the web it is commonly used to compress HTTP content before it is served to a client.
It appears [Content-Encoding] is not a valid request header.
That is actually not quite true. As per RFC 2616, sec 14.11, Content-Encoding is an entity header, which means it can be applied to the entities of both HTTP responses and requests. Through the powers of multipart MIME messages, even selected parts of a request (or response) can be compressed.
However, web server support for compressed request bodies is rather slim. Apache supports it to a degree via the mod_deflate module. It's not entirely clear to me whether nginx can handle compressed request bodies.
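For reference, mod_deflate's input decompression is enabled with an input filter; a minimal sketch (the location path is hypothetical) might look like:

# Requires mod_deflate to be loaded; the path below is hypothetical.
<Location "/api">
    SetInputFilter DEFLATE
</Location>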
Since the original code is no longer available, here is my version in case someone comes here needing it. I use the "Content-Encoding: gzip" header to decide whether the filter needs to decompress the request body or not.
Here's the code.
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException
{
    HttpServletRequest httpServletRequest = (HttpServletRequest) request;
    String contentEncoding = httpServletRequest.getHeader("Content-Encoding");
    // Only wrap the request if the client declared a gzip-compressed body.
    if (contentEncoding != null && contentEncoding.indexOf("gzip") > -1)
    {
        try
        {
            final InputStream decompressStream = StreamHelper.decompressStream(httpServletRequest.getInputStream());
            // Hand downstream filters and servlets a request whose body is already decompressed.
            httpServletRequest = new HttpServletRequestWrapper(httpServletRequest)
            {
                @Override
                public ServletInputStream getInputStream() throws IOException
                {
                    return new DecompressServletInputStream(decompressStream);
                }

                @Override
                public BufferedReader getReader() throws IOException
                {
                    return new BufferedReader(new InputStreamReader(decompressStream));
                }
            };
        }
        catch (IOException e)
        {
            mLogger.error("error while handling the request", e);
        }
    }
    chain.doFilter(httpServletRequest, response);
}
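For completeness, a sketch of how such a filter might be registered on a Servlet 3.0+ container (the class name and URL pattern here are hypothetical); the filter body is the doFilter method shown above:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;

// Hypothetical registration: apply the decompression filter to every request.
@WebFilter(urlPatterns = "/*")
public class GzipDecompressionFilter implements Filter
{
    @Override
    public void init(FilterConfig filterConfig) throws ServletException
    {
        // Nothing to initialize.
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException
    {
        // ... body as shown above ...
    }

    @Override
    public void destroy()
    {
        // Nothing to clean up.
    }
}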
Simple ServletInputStream wrapper class
public static class DecompressServletInputStream extends ServletInputStream
{
    private InputStream inputStream;

    public DecompressServletInputStream(InputStream input)
    {
        inputStream = input;
    }

    @Override
    public int read() throws IOException
    {
        return inputStream.read();
    }
}
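A side note: on Servlet 3.1+ containers (Tomcat 8 and later), ServletInputStream declares three further abstract methods, so the wrapper would also need overrides along these lines (a best-effort sketch; ReadListener comes from javax.servlet):

@Override
public boolean isFinished()
{
    try
    {
        // Best-effort: treat the stream as finished when no bytes are available.
        return inputStream.available() == 0;
    }
    catch (IOException e)
    {
        return true;
    }
}

@Override
public boolean isReady()
{
    // Blocking reads only, so data is always considered ready.
    return true;
}

@Override
public void setReadListener(ReadListener readListener)
{
    throw new UnsupportedOperationException("Non-blocking IO is not supported by this wrapper");
}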
Decompression stream code
public class StreamHelper
{
    /**
     * Gzip magic number, fixed values in the beginning to identify the gzip
     * format <br>
     * http://www.gzip.org/zlib/rfc-gzip.html#file-format
     */
    private static final byte GZIP_ID1 = 0x1f;

    /**
     * Gzip magic number, fixed values in the beginning to identify the gzip
     * format <br>
     * http://www.gzip.org/zlib/rfc-gzip.html#file-format
     */
    private static final byte GZIP_ID2 = (byte) 0x8b;

    /**
     * Return a decompressing input stream if needed.
     *
     * @param input
     *            original stream
     * @return decompression stream
     * @throws IOException
     *             exception while reading the input
     */
    public static InputStream decompressStream(InputStream input) throws IOException
    {
        PushbackInputStream pushbackInput = new PushbackInputStream(input, 2);
        byte[] signature = new byte[2];
        // Peek at the first two bytes, then push them back so the caller still sees them.
        int len = pushbackInput.read(signature);
        if (len > 0)
        {
            pushbackInput.unread(signature, 0, len);
        }
        // Only wrap in a GZIPInputStream if the gzip magic number is present.
        if (len == 2 && signature[0] == GZIP_ID1 && signature[1] == GZIP_ID2)
        {
            return new GZIPInputStream(pushbackInput);
        }
        return pushbackInput;
    }
}
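And a quick, self-contained check of the sniffing logic outside of any servlet (assuming Java 9+ for InputStream.readAllBytes()):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class StreamHelperDemo
{
    public static void main(String[] args) throws Exception
    {
        byte[] plain = "{\"hello\":\"world\"}".getBytes(StandardCharsets.UTF_8);

        // Build a gzip-compressed copy of the same payload.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer))
        {
            gzip.write(plain);
        }

        // Compressed input: the gzip magic number is detected and the stream is inflated.
        InputStream inflated = StreamHelper.decompressStream(new ByteArrayInputStream(buffer.toByteArray()));
        System.out.println(new String(inflated.readAllBytes(), StandardCharsets.UTF_8));

        // Plain input: the stream is passed through untouched.
        InputStream passthrough = StreamHelper.decompressStream(new ByteArrayInputStream(plain));
        System.out.println(new String(passthrough.readAllBytes(), StandardCharsets.UTF_8));
    }
}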