For an Emacs extension, I'd like to retrieve data over HTTP. I'm not particularly fond of the idea of shelling out to things like wget, curl, or w3m to be able to do that, so I'm using the url-retrieve function.
One of the HTTP servers I'm talking to happens to ignore Accept-Encoding headers and insists on always sending out its data with Content-Encoding: gzip.
As a result of that, and of the fact that url-retrieve doesn't automatically decode response bodies, the buffer url-retrieve presents me with will contain binary gzip data.
I'm looking for a way to decode the response body, preferably chunk by chunk as the data arrives. Is there a way to instruct url-retrieve to do this for me?
Decoding the response all at once, once it has completely arrived, would also be acceptable, but I'd rather avoid all the fubar involved in creating an asynchronous subprocess running gzip, piping parts of the response to it, and reading the decoded chunks back in; I'd be looking for some library function here.
What auto-compression-mode does is run gzip on the file to be uncompressed. See for example jka-compr-insert-file-contents in jka-compr.el. So if you're going to use auto-compression-mode to do the uncompression, you're going to need to write the response to a file first. For example, something like this:
(defun uncompress-callback (status)
  (let ((filename (make-temp-file "download" nil ".gz")))
    (search-forward "\n\n")             ; Skip response headers.
    ;; Write the raw gzip body to a temporary file...
    (write-region (point) (point-max) filename)
    ;; ...and let jka-compr decompress it when the file is visited.
    (with-auto-compression-mode
      (find-file filename))))

(url-retrieve "http://packages.ubuntu.com/hardy/allpackages?format=txt.gz"
              #'uncompress-callback)
(If you don't want to create a temporary file, you'll have to do your own subprocess management, but it's not as tricky as you imply in your question.)
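For what it's worth, here's a minimal, untested sketch of that subprocess approach; the function name uncompress-callback-no-file is just a placeholder, and it assumes gzip is on your PATH. call-process-region feeds the compressed body to gzip and replaces it in place with the decompressed output; binding coding-system-for-write to 'binary keeps Emacs from mangling the compressed bytes on their way to the process:

(defun uncompress-callback-no-file (status)
  (search-forward "\n\n")                 ; Skip response headers.
  (let ((coding-system-for-read 'binary)  ; Keep gzip's output as raw bytes.
        (coding-system-for-write 'binary)); Don't recode the gzip data we send.
    ;; Replace the compressed body with gzip's decompressed output,
    ;; all within the response buffer.
    (call-process-region (point) (point-max) "gzip" t t nil "-d" "-c"))
  (goto-char (point-min))
  (display-buffer (current-buffer)))

(url-retrieve "http://packages.ubuntu.com/hardy/allpackages?format=txt.gz"
              #'uncompress-callback-no-file)

The body is left as undecoded bytes here, so you may still want decode-coding-region afterwards. And if your Emacs is recent enough to be built with zlib support (24.4+), zlib-decompress-region on the body should let you skip the subprocess entirely.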