Python: What happens if script stops while requests.get() is executing?


I know that requests.get() provides an HTTP interface so that the programmer can make various requests to an HTTP server.

That tells me that somewhere a port must be opened so that the request can happen.

Taking that into account, what would happen if the script is stopped (say, by a KeyboardInterrupt, so the machine executing the script remains connected to the internet) before the request is answered/completed?

Would the port/connection remain opened?

Does the port/connection close automatically?

 


The short answer to the question is: requests will close a connection in the case of any exception, including KeyboardInterrupt and SystemExit.
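As a quick illustration (a toy sketch, not requests code): KeyboardInterrupt propagates like any other exception, so finally blocks, which is where requests performs its connection teardown, still run on the way out:

```python
# KeyboardInterrupt is an ordinary exception as far as try/finally is
# concerned, so cleanup code inside requests still executes on Ctrl-C.
cleaned_up = []

try:
    try:
        raise KeyboardInterrupt  # simulates Ctrl-C arriving mid-request
    finally:
        cleaned_up.append(True)  # stands in for requests' connection teardown
except KeyboardInterrupt:
    pass

assert cleaned_up == [True]
```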

A little digging into the requests source code reveals that requests.get ultimately calls the HTTPAdapter.send method (which is where all the magic happens).

There are two ways in which a request might be made within the send method: chunked or not chunked. Which path is taken depends on request.body and the Content-Length header:

chunked = not (request.body is None or 'Content-Length' in request.headers) 
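That check can be restated as a tiny helper (is_chunked is a hypothetical name for illustration, not a requests function) to make the two branches concrete:

```python
def is_chunked(body, headers):
    # Mirrors requests' check: chunked transfer is used only when there
    # is a request body and no explicit Content-Length header.
    return not (body is None or 'Content-Length' in headers)

assert is_chunked(None, {}) is False                          # no body
assert is_chunked(b'data', {'Content-Length': '4'}) is False  # length known
assert is_chunked(iter([b'a', b'b']), {}) is True             # streamed body
```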

In the case where the request body is None or a Content-Length header is set, requests makes use of the high-level urlopen method of urllib3:

if not chunked:
    resp = conn.urlopen(
        method=request.method,
        url=url,
        body=request.body,
        # ...
    )

The finally block of urllib3's HTTPConnectionPool.urlopen method (conn here is a connection pool, so conn.urlopen resolves to it) has code that closes the connection when the try block did not complete successfully:

clean_exit = False
# ...
try:
    # ...
    # Everything went great!
    clean_exit = True
finally:
    if not clean_exit:
        # We hit some kind of exception, handled or otherwise. We need
        # to throw the connection away unless explicitly told not to.
        # Close the connection, set the variable to None, and make sure
        # we put the None back in the pool to avoid leaking it.
        conn = conn and conn.close()
        release_this_conn = True
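The same finally-with-flag pattern can be exercised in isolation. In this sketch, FakeConnection and send are stand-ins invented for illustration, not requests internals:

```python
class FakeConnection:
    """Stand-in for an HTTP connection; only tracks whether close() ran."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def send(conn, interrupt=False):
    # clean_exit only becomes True if the try block runs to completion,
    # so any exception (including KeyboardInterrupt) leaves it False.
    clean_exit = False
    try:
        if interrupt:
            raise KeyboardInterrupt  # simulates Ctrl-C mid-request
        clean_exit = True
        return "response"
    finally:
        if not clean_exit:
            conn.close()

conn = FakeConnection()
try:
    send(conn, interrupt=True)
except KeyboardInterrupt:
    pass

assert conn.closed  # the connection was torn down despite the interrupt
```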

In the case where the request can be chunked, requests goes a bit lower level and uses the underlying low-level connection provided by urllib3. requests still handles exceptions here, with a try / except block that starts immediately after grabbing a connection and finishes with:

low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)

try:
    # ...
except:
    # If we hit any problems here, clean up the connection.
    # Then, reraise so that we can handle the actual exception.
    low_conn.close()
    raise
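The close-then-reraise idiom can likewise be demonstrated with a stub (FakeLowConn and chunked_send are hypothetical names, not library code):

```python
class FakeLowConn:
    """Stand-in for a low-level urllib3 connection."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def chunked_send(low_conn, fail=False):
    try:
        if fail:
            raise ConnectionError("broken pipe")
        return "ok"
    except BaseException:
        # Clean up the connection on any problem, then re-raise so the
        # caller still sees the original exception.
        low_conn.close()
        raise

low = FakeLowConn()
try:
    chunked_send(low, fail=True)
except ConnectionError:
    pass

assert low.closed  # closed before the exception propagated to the caller
```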

Interestingly, the connection may not be closed if there are no errors, depending on how you have configured connection pooling for urllib3. On a successful execution the connection is put back into the connection pool for reuse (though I cannot find a _put_conn call in the requests source for the chunked send, which might be a bug in the chunked workflow).
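A toy model of that borrow-and-return cycle (the LifoQueue and the "conn-1" string are stand-ins; _get_conn and _put_conn are the real urllib3 method names the comments refer to):

```python
from queue import LifoQueue

# Toy connection pool: urllib3 keeps idle connections in a pool and
# hands the same one back out for the next request to the same host.
pool = LifoQueue(maxsize=1)
pool.put("conn-1")

conn = pool.get()   # borrow a connection, as _get_conn does
# ... perform a request over conn ...
pool.put(conn)      # on success, return it for reuse, as _put_conn does

reused = pool.get() # the next request gets the same connection back
assert reused == "conn-1"
```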
