 

How can I perform a GET request without downloading the content?

I am working on a link checker. In general I can perform HEAD requests, but some sites seem to disable this verb, so on failure I also need to perform a GET request (to double-check that the link is really dead).

I use the following code as my link tester:

public class ValidateResult
{
    public HttpStatusCode? StatusCode { get; set; }
    public Uri RedirectResult { get; set; }
    public WebExceptionStatus? WebExceptionStatus { get; set; }
}

public ValidateResult Validate(Uri uri, bool useHeadMethod = true,
    bool enableKeepAlive = false, int timeoutSeconds = 30)
{
    ValidateResult result = new ValidateResult();

    HttpWebRequest request = WebRequest.Create(uri) as HttpWebRequest;
    if (useHeadMethod)
    {
        request.Method = "HEAD";
    }
    else
    {
        request.Method = "GET";
    }

    // Always compress; if you get back a 404 from a HEAD it can be quite big.
    request.AutomaticDecompression = DecompressionMethods.GZip;
    request.AllowAutoRedirect = false;
    request.UserAgent = UserAgentString;
    request.Timeout = timeoutSeconds * 1000;
    request.KeepAlive = enableKeepAlive;

    HttpWebResponse response = null;
    try
    {
        response = request.GetResponse() as HttpWebResponse;

        result.StatusCode = response.StatusCode;
        if (response.StatusCode == HttpStatusCode.Redirect ||
            response.StatusCode == HttpStatusCode.MovedPermanently ||
            response.StatusCode == HttpStatusCode.SeeOther)
        {
            try
            {
                Uri targetUri = new Uri(uri, response.Headers["Location"]);
                var scheme = targetUri.Scheme.ToLower();
                if (scheme == "http" || scheme == "https")
                {
                    result.RedirectResult = targetUri;
                }
                else
                {
                    // this little gem was born out of http://tinyurl.com/18r
                    // redirecting to about:blank
                    result.StatusCode = HttpStatusCode.SwitchingProtocols;
                    result.WebExceptionStatus = null;
                }
            }
            catch (UriFormatException)
            {
                // another gem... people sometimes redirect to
                // http://nonsense:port/yay
                result.StatusCode = HttpStatusCode.SwitchingProtocols;
                result.WebExceptionStatus = WebExceptionStatus.NameResolutionFailure;
            }
        }
    }
    catch (WebException ex)
    {
        result.WebExceptionStatus = ex.Status;
        response = ex.Response as HttpWebResponse;
        if (response != null)
        {
            result.StatusCode = response.StatusCode;
        }
    }
    finally
    {
        if (response != null)
        {
            response.Close();
        }
    }

    return result;
}

This all works fine and dandy, except that when I perform a GET request, the entire payload gets downloaded (I watched this happen in Wireshark).

Is there any way to configure the underlying ServicePoint or the HttpWebRequest not to buffer or eager load the response body at all?

(If I were hand-coding this, I would set the TCP receive window really low, grab only enough packets to get the headers, and stop ACKing TCP packets as soon as I have enough info.)

For those wondering what this is meant to achieve: I do not want to download a 40k body just to confirm a 404, and doing this a few hundred thousand times is expensive on the network.

asked May 25 '12 by Sam Saffron


1 Answer

When you do a GET, the server will start sending data from the start of the file to the end, unless you interrupt it. Granted, at 10 Mb/sec that's roughly a megabyte per second, so if the file is small you'll receive the whole thing before you can interrupt. You can minimize the amount you actually download in a couple of ways.

First, you can call request.Abort after getting the response and before calling response.Close. That ensures the underlying code doesn't try to download the whole body before closing the response. Whether this helps on small files, I don't know, but I do know that it will prevent your application from hanging when it tries to download a multi-gigabyte file.
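A minimal sketch of that pattern, assuming a plain HttpWebRequest with no special configuration (GetStatusHeadersOnly is a hypothetical helper name, not part of any API):

```csharp
using System;
using System.Net;

static class HeaderOnlyGet
{
    // Return the status code of a GET without draining the body:
    // GetResponse() returns once the status line and headers are parsed,
    // and Abort() tears the connection down instead of reading the rest.
    public static HttpStatusCode? GetStatusHeadersOnly(Uri uri)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "GET";
        request.AllowAutoRedirect = false;
        HttpWebResponse response = null;
        try
        {
            response = (HttpWebResponse)request.GetResponse();
            return response.StatusCode;  // headers are available here
        }
        catch (WebException ex)
        {
            // 4xx/5xx responses surface as WebException but still carry
            // the response object with the status code.
            var errorResponse = ex.Response as HttpWebResponse;
            return errorResponse != null ? errorResponse.StatusCode
                                         : (HttpStatusCode?)null;
        }
        finally
        {
            request.Abort();   // abort before Close so the body is never drained
            if (response != null) response.Close();
        }
    }
}
```

Note that aborting the request also forfeits connection reuse, so with keep-alive disabled (as in the question's code) this costs nothing extra.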

The other thing you can do is request a range rather than the entire file. See the AddRange method and its overloads. You could, for example, write request.AddRange(0, 511), which would request only the first 512 bytes of the file. This depends, of course, on the server supporting range requests. Most do. But then, most support HEAD requests, too.
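A sketch of the ranged probe, assuming the server honors Range headers (ProbeFirstBytes is a hypothetical helper name):

```csharp
using System;
using System.Net;

static class RangeProbe
{
    // Ask for only the first 512 bytes. A server that honors ranges
    // replies 206 PartialContent; one that ignores them typically
    // replies 200 with the full body, so check the status code.
    public static HttpStatusCode ProbeFirstBytes(Uri uri)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "GET";
        request.AddRange(0, 511);  // sends "Range: bytes=0-511"
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode;
        }
    }
}
```

Beware the single-argument overload: request.AddRange(512) means "bytes=512-" (from byte 512 to the end), which is the opposite of what a link checker wants.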

You'll probably end up having to write a method that tries things in sequence:

  • Try a HEAD request. If that works (i.e. doesn't return a 500), you're done.
  • Try a GET with a range request. If that doesn't return a 500, you're done.
  • Do a regular GET, with a request.Abort after GetResponse returns.
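The fallback sequence above could be sketched like this (CheckLink and TryStatus are hypothetical helper names; the exact status-code cutoff for "the server rejected this method" is a judgment call, not something the answer pins down):

```csharp
using System;
using System.Net;

static class LinkChecker
{
    // Try the cheapest probe first, falling back to progressively
    // heavier ones when the server rejects the method.
    public static HttpStatusCode? CheckLink(Uri uri)
    {
        // 1. HEAD: no body at all, when the server allows it.
        var status = TryStatus(uri, "HEAD", useRange: false);
        if (status != null && (int)status < 500) return status;

        // 2. Ranged GET: a single byte, if the server supports ranges.
        status = TryStatus(uri, "GET", useRange: true);
        if (status != null && (int)status < 500) return status;

        // 3. Plain GET, aborted as soon as the headers arrive.
        return TryStatus(uri, "GET", useRange: false);
    }

    static HttpStatusCode? TryStatus(Uri uri, string method, bool useRange)
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = method;
        if (useRange) request.AddRange(0, 0);  // "Range: bytes=0-0"
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
                return response.StatusCode;
        }
        catch (WebException ex)
        {
            var errorResponse = ex.Response as HttpWebResponse;
            return errorResponse != null ? errorResponse.StatusCode
                                         : (HttpStatusCode?)null;
        }
        finally
        {
            request.Abort();  // never drain a body we don't need
        }
    }
}
```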
answered Sep 24 '22 by Jim Mischel