In Google Chrome, I can view the HTTP headers sent and received by the browser. An interesting option is that you can copy a request as a cURL command.
The problem is that each time I run the produced cURL command, the server responds with HTTP/1.1 200 OK, but the Content-Length is zero, which is not the case when the same request is sent by the browser itself.
What do you think could be the problem?
Sounds like an HTTP server with a defense against DDoS attacks.
The HTTP server might expect each request to contain something specific to the time and sequence of the operation. It might reside in the headers or the body as a special token.
My guess is that the server may respond with a different cookie value each time, which makes sending two requests with the same cookie value invalid. Or it may be something using UTC time as a parameter. In any case, there must be something in your request that doesn't pass the server-side filtering.
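If it is a stale cookie, one quick thing to try (just a sketch; the URLs below are placeholders, not taken from your capture) is to let curl obtain a fresh cookie itself with a cookie jar instead of replaying the one Chrome recorded:
# Hit the page first so the server sets a fresh cookie, storing it in a jar
curl -c cookies.txt https://example.com/some-page
# Replay the real request, sending the stored cookies back
curl -b cookies.txt https://example.com/the-resource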
--
Or, similarly, it may be a resource intended for one-off use, e.g. a temporary file prepared for a single download. Once it's accessed, the file is no longer available.
I would add this as a comment, but I don't have the reputation.
I have just had an experience with this, and what fixed it was surprising. In my situation I was logging into a server so I could upload a file, have the server do work on it, and then download the new file. I did this in Chrome first and used the dev tools to capture over 100 HTTP requests in this simple transaction. Most of them were simply grabbing resources I don't need when doing all of this from the command line, so I filtered them down to the ones I knew, at a minimum, I should need.
Initially this boiled down to a GET to set the cookie and log in with a username and password, a POST to upload the file, a POST to execute the work on the file, and a GET to retrieve the new file. I could not get the first POST to actually work, though. The response from that POST is supposed to contain information such as the upload ID, time uploaded, etc., but instead I was getting empty JSON lists even though the status was 200 OK.
I used curl to spoof the requests from the browser exactly (copying the User-Agent, overriding Expect, etc.) and was still getting nothing. Then I started arbitrarily adding in some of the requests that I had captured from Chrome between the first GET and the POST, and lo and behold, after adding in a GET request for the JSON history before the POST, the POST actually returned what it was supposed to.
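For illustration, the final sequence looked roughly like this (the endpoints, parameters, and file name here are invented; the real ones came straight out of the captured requests):
# Log in and store the session cookie in a jar
curl -c jar.txt "https://example.com/login?user=abc&pwd=123"
# The seemingly superfluous GET that primes the session
curl -b jar.txt "https://example.com/api/history"
# Only now does the upload POST return the expected JSON
curl -b jar.txt -F "file=@input.dat" "https://example.com/api/upload"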
TL;DR: Some websites require more requests after the initial log-in before you can POST. I would try to capture a successful exchange between the server and the browser and look at all of the requests. Some requests might not be as superfluous as they seem.
If you send a GET request with some parameters, the URL should look like:
http://www.example.com/login?user='abc'&pwd='123'
Make sure to escape the '&' in the URL with '\' (or quote the whole URL): in a shell, an unescaped '&' sends the process to the background.
Otherwise, the above URL will be truncated to:
http://www.example.com/login?user='abc'
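For example, either of these forms keeps the shell from interpreting the '&' (with placeholder credentials):
# Escape the ampersand with a backslash
curl http://www.example.com/login?user=abc\&pwd=123
# Or, simpler, quote the whole URL
curl "http://www.example.com/login?user=abc&pwd=123"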