I know that requests.get() provides an HTTP interface so that the programmer can make various requests to an HTTP server.
That tells me that somewhere a port must be opened so that the request can happen.
Taking that into account, what would happen if the script were stopped (say, by a KeyboardInterrupt, so the machine executing the script stays connected to the internet) before the request is answered/complete?
Would the port/connection remain open?
Does the port/connection close automatically?
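For concreteness, here is a minimal way to reproduce the scenario; httpbin.org's /delay endpoint is just one convenient slow server, any slow endpoint would do:

import requests

# /delay/10 holds the response for 10 seconds; press Ctrl-C while the
# script is waiting to raise a KeyboardInterrupt mid-request.
response = requests.get("https://httpbin.org/delay/10")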
By default, requests does not have a timeout unless you explicitly specify one.
It will wait until the response arrives before the rest of your program executes. If you want to be able to do other things in the meantime, look at the asyncio or multiprocessing modules.
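For example, here is a minimal sketch using the standard-library concurrent.futures module (a thread-based cousin of the modules mentioned above; the URL is a placeholder):

import concurrent.futures
import requests

with concurrent.futures.ThreadPoolExecutor() as pool:
    # The request runs in a worker thread, so the main thread stays free.
    future = pool.submit(requests.get, "https://example.com", timeout=10)
    # ... do other work here while the request is in flight ...
    response = future.result()  # blocks only when the result is needed
    print(response.status_code)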
You can tell the requests library to stop waiting for a response after a given amount of time by passing a number to the timeout parameter. If requests does not receive a response within that many seconds, it raises a Timeout exception.
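For instance (the URL is a placeholder):

import requests

try:
    # Give up if no response has arrived after 5 seconds.
    response = requests.get("https://example.com", timeout=5)
except requests.exceptions.Timeout:
    print("request timed out")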
The short answer to the question is: requests will close a connection in the case of any exception, including KeyboardInterrupt and SystemExit.
A little digging into the requests source code reveals that requests.get ultimately calls the HTTPAdapter.send method (which is where all the magic happens). There are two ways in which a request might be made within the send method: chunked or not chunked. Which send we perform depends on the value of request.body and the Content-Length header:
chunked = not (request.body is None or 'Content-Length' in request.headers)
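As a rough illustration of which calls take each path (httpbin.org here is just a convenient test server, not part of the original logic):

import requests

# body is None, so chunked is False:
requests.get("https://httpbin.org/get")

# bytes body: requests sets Content-Length, so chunked is False:
requests.post("https://httpbin.org/post", data=b"payload")

# generator body has no known length, hence no Content-Length,
# so chunked is True:
def stream():
    yield b"part1"
    yield b"part2"

requests.post("https://httpbin.org/post", data=stream())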
In the case where the request body is None or the Content-Length header is set, requests will make use of the high-level urlopen method of urllib3:
if not chunked:
    resp = conn.urlopen(
        method=request.method,
        url=url,
        body=request.body,
        # ...
    )
The finally block of urllib3's HTTPConnectionPool.urlopen method has code that handles closing the connection in the case where the try block didn't execute successfully:
clean_exit = False
# ...
try:
    # ...
    # Everything went great!
    clean_exit = True
finally:
    if not clean_exit:
        # We hit some kind of exception, handled or otherwise. We need
        # to throw the connection away unless explicitly told not to.
        # Close the connection, set the variable to None, and make sure
        # we put the None back in the pool to avoid leaking it.
        conn = conn and conn.close()
        release_this_conn = True
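To see why this also covers a KeyboardInterrupt, here is a minimal standalone sketch of the same clean_exit pattern; the finally block runs even when Ctrl-C interrupts the try block:

import time

clean_exit = False
try:
    time.sleep(60)  # press Ctrl-C here to simulate an interrupted request
    clean_exit = True
finally:
    if not clean_exit:
        print("cleaning up: this is where the connection would be closed")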
In the case where the request is sent chunked, requests goes a bit lower level and uses the underlying low-level connection provided by urllib3. In this case, requests still handles the exception: it does so with a try/except block that starts immediately after grabbing a connection and finishes with:
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
    # ...
except:
    # If we hit any problems here, clean up the connection.
    # Then, reraise so that we can handle the actual exception.
    low_conn.close()
    raise
Interestingly, the connection may not be closed if there are no errors, depending on how you have configured connection pooling for urllib3. In the case of a successful execution, the connection is put back into the connection pool (though I cannot find a _put_conn call in the requests source for the chunked send, which might be a bug in the chunked workflow).
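If you would rather not rely on the pool at all, you can release pooled connections deterministically. A minimal sketch (the URL is a placeholder):

import requests

# Using a Session as a context manager closes all pooled connections when
# the block exits, whether it finishes normally or raises.
with requests.Session() as session:
    response = session.get("https://example.com")
    print(response.status_code)
# At this point the session's connection pool has been closed.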
On a much lower level, when a program exits, the OS kernel closes all file descriptors opened by that program. These include network sockets.