I am trying to make an app that might live for a day, a week or longer. During the app's lifetime, it will make requests to different APIs. Some of these APIs might require logging in, so it is important that I have access to cookies at all times.
So what I need is a single shared session that the different APIs can use without blocking the app.
I am new to asynchronous programming (asyncio/aiohttp), and the examples I have seen show how to make a lot of requests from a list of URLs, but this is not what I need.
The problem with the code I have is that I either get a ClientSession is closed error or unclosed ClientSession warnings.
import asyncio  # only here for debugging purposes
import aiohttp

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:61.0) Gecko/20100101 Firefox/61.1'


def default_headers():
    header = {
        'User-Agent': USER_AGENT
    }
    return header


class WebSession(object):
    session = None

    @classmethod
    def create(cls):
        cls.session = aiohttp.ClientSession()
        return cls.session

    @classmethod
    def close(cls):
        if cls.session is not None:
            cls.session.close()


async def request(method, url, **kwargs):
    if kwargs.get('headers', None) is None:
        kwargs['headers'] = default_headers()
    if WebSession.session is None:
        session = WebSession.create()
    else:
        session = WebSession.session
    async with session.request(method=method, url=url, **kwargs) as response:
        if isinstance(session, aiohttp.ClientSession):
            # if I close the session here, I will get the ClientSession closed error on the 2nd request.
            # await session.close()
            pass
        return response


async def get(url, **kwargs):
    return await request('GET', url=url, **kwargs)


async def post(url, **kwargs):
    return await request('POST', url=url, **kwargs)


async def get_url():
    res = await get('https://httpbin.org/get')
    print(f'Status code: {res.headers}')


m_loop = asyncio.get_event_loop()
m_loop.run_until_complete(get_url())
# if I run this without closing the ClientSession, I will get unclosed ClientSession warnings.
m_loop.run_until_complete(get_url())
m_loop.close()
I do get a response from the server, but it is followed by this error/warning:

Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x03354630>
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x033BBBF0>, 71.542)]']
connector: <aiohttp.connector.TCPConnector object at 0x033542D0>

If I uncomment the await session.close() and remove the pass, I get a response from the server on the first request, followed by RuntimeError: Session is closed on the second request.
By default the aiohttp.ClientSession object holds a connector with a maximum of 100 simultaneous connections, queueing the rest. That is quite a large number: you would have to be connected to a hundred different servers (not pages!) concurrently before even having to consider whether your task needs resource adjustment.
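If that limit ever does need adjusting, it can be set on the connector when the session is created. A minimal sketch (the limit values below are illustrative only, not recommendations):

```python
import asyncio

import aiohttp


async def main():
    # The default is limit=100; both values here are illustrative only.
    connector = aiohttp.TCPConnector(limit=10, limit_per_host=5)
    async with aiohttp.ClientSession(connector=connector) as session:
        # use `session` here; closing the session also closes the connector
        print(session.connector.limit)  # 10


asyncio.run(main())
```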
One difference from requests is that requests fetches the whole body of the response at once and remembers it, but aiohttp doesn't. aiohttp lets you ignore the body, read it in chunks, or read it only after looking at the headers/status code. That's why you need to do a second await: aiohttp needs to do more I/O to get the response body.
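That two-step pattern can be sketched against a throwaway local server, so no external site is involved (the handler, route and port choice are made up for illustration):

```python
import asyncio

import aiohttp
from aiohttp import web


async def handler(request):
    return web.Response(text='hello')


async def main():
    # Throwaway local server so the example needs no external network.
    app = web.Application()
    app.router.add_get('/', handler)
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, '127.0.0.1', 0)  # port 0: pick any free port
    await site.start()
    host, port = runner.addresses[0][:2]

    async with aiohttp.ClientSession() as session:
        async with session.get(f'http://{host}:{port}/') as resp:
            # After the first await, only the status line and headers are in.
            print(resp.status)        # 200
            # The body needs a second await, i.e. more I/O.
            body = await resp.text()
            print(body)               # hello

    await runner.cleanup()


asyncio.run(main())
```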
If you need to add HTTP headers to a request, pass them in a dict to the headers parameter. POST data is passed in the same call, e.g. await session.post(url, data='Привет, Мир!')
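Both can be sketched together against a throwaway local echo server (the header name X-Example and the handler are made up for illustration):

```python
import asyncio

import aiohttp
from aiohttp import web


async def echo(request):
    # Echo the posted body and the custom header back to the client.
    body = await request.text()
    return web.Response(text=body,
                        headers={'X-Got': request.headers.get('X-Example', '')})


async def main():
    # Throwaway local server so the example needs no external network.
    app = web.Application()
    app.router.add_post('/', echo)
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, '127.0.0.1', 0)  # port 0: pick any free port
    await site.start()
    host, port = runner.addresses[0][:2]

    async with aiohttp.ClientSession() as session:
        async with session.post(f'http://{host}:{port}/',
                                data='Привет, Мир!',
                                headers={'X-Example': 'demo'}) as resp:
            print(resp.headers['X-Got'])  # demo
            print(await resp.text())      # Привет, Мир!

    await runner.cleanup()


asyncio.run(main())
```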
Ahh, I think I got it now.
The warnings I got, Unclosed client session and Unclosed connector, were aiohttp telling me "hey, you forgot to close the session".
And this is exactly what happened with this small example. Both calls to get_url would get a response from the server, and then the app would end. The session was therefore left in an unclosed state when the app ended, which is why the above warnings were shown.
I was not supposed to close the session after each request, since there would be no way of making a new request at that point, at least not to my knowledge. And that is why I got RuntimeError: Session is closed when trying to make a new request once the session was already closed.
So once I figured this out, I created a close function and simply called it before the loop (app) ended. Now I get no warnings/errors. And cookies are now shared between all requests made (I think) while the app is running, whether they are GET or POST, and that is exactly what I wanted.
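That cookie sharing can be checked directly on the session's cookie jar, without any network at all. A sketch (the cookie name, value and domain are made up):

```python
import asyncio

import aiohttp
from yarl import URL


async def main():
    async with aiohttp.ClientSession() as session:
        # Pretend a login response set this cookie (name/value made up).
        session.cookie_jar.update_cookies({'sessionid': 'abc123'},
                                          URL('https://example.com'))
        # Any later request the session makes to that host, GET or POST,
        # will send it back automatically.
        sent = session.cookie_jar.filter_cookies(URL('https://example.com/profile'))
        print(sent['sessionid'].value)  # abc123


asyncio.run(main())
```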
I hope that someone else new to aiohttp/asyncio will benefit from this, as it took me some time (too long) to understand. As I am still new to aiohttp/asyncio, I don't know if this is the correct way of doing it, but at least it seems to work.
import asyncio  # only here for debugging purposes
import aiohttp

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:61.0) Gecko/20100101 Firefox/61.1'


def default_headers():
    header = {
        'User-Agent': USER_AGENT
    }
    return header


class WebSession(object):
    session = None

    @classmethod
    def create(cls):
        cls.session = aiohttp.ClientSession()
        return cls.session

    @classmethod
    async def close(cls):
        # ClientSession.close() is a coroutine, so it must be awaited.
        # Guard against the session never having been created.
        if cls.session is not None:
            await cls.session.close()
            cls.session = None


async def request(method, url, **kwargs):
    if kwargs.get('headers', None) is None:
        kwargs['headers'] = default_headers()
    if WebSession.session is None:
        session = WebSession.create()
    else:
        session = WebSession.session
    return await session.request(method=method, url=url, **kwargs)


async def get(url, **kwargs):
    return await request('GET', url=url, **kwargs)


async def post(url, **kwargs):
    return await request('POST', url=url, **kwargs)


async def get_url():
    res = await get('https://httpbin.org/get')
    print(f'Headers: {res.headers}')


async def close():
    # run this before the app ends
    await WebSession.close()


# so imagine that this is our app.
m_loop = asyncio.get_event_loop()
# it's running now and doing stuff...
# then it makes a request to a url.
m_loop.run_until_complete(get_url())
# then some time passes, and then it makes another request to a url.
m_loop.run_until_complete(get_url())
# now the app gets stopped, whether by keyboard interrupt or some other means
# then close the session
m_loop.run_until_complete(close())
# and then end the app..
m_loop.close()
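If the whole app can be structured as a single coroutine, an alternative way to guarantee that final close is a try/finally, which runs even when the app is stopped by an exception or a cancellation. A sketch (the app body is elided here):

```python
import asyncio

import aiohttp


async def main():
    session = aiohttp.ClientSession()
    try:
        # ... the app runs here, making requests through `session` ...
        pass
    finally:
        # Runs even if the app is stopped by an exception or Ctrl+C,
        # so the session is always closed and no warning is emitted.
        await session.close()


asyncio.run(main())
```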