I am writing a small web scraper that fetches a large number of images from a specific site. However, the I/O was slow, so I googled and found asyncio and aiohttp for dealing with I/O-bound overhead. I combed the aiohttp docs but couldn't find any function that looks like an alternative to iter_content() in the requests module. I need it to write the image data to disk. Can anyone help?
You should use the ClientResponse.content attribute. It's a StreamReader instance and can be used to read the response incrementally. From the docs:
with open(filename, 'wb') as fd:
    while True:
        chunk = await r.content.read(chunk_size)
        if not chunk:
            break
        fd.write(chunk)
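
To put that snippet in context, a minimal self-contained sketch might look like the following; download, the example URL, the output filename, and the 64 KiB chunk size are placeholder choices of mine, not part of the docs:

import asyncio
import aiohttp

async def download(url: str, filename: str, chunk_size: int = 65536) -> None:
    # Hypothetical helper: fetch one URL and stream the body to disk.
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            resp.raise_for_status()
            with open(filename, 'wb') as fd:
                while True:
                    chunk = await resp.content.read(chunk_size)
                    if not chunk:  # empty bytes means the whole body has been read
                        break
                    fd.write(chunk)

asyncio.run(download('https://example.com/image.jpg', 'image.jpg'))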
StreamReader also supports async iteration:
async for line in r.content:
    ...

async for chunk in r.content.iter_chunked(1024):
    ...

async for slice in r.content.iter_any():  # as much as possible before blocking
    ...
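
Since the goal is downloading many images, iter_chunked() can be combined with asyncio.gather() to fetch them concurrently over a shared session. A rough sketch, where fetch_image, main, the example URLs, and the generated filenames are all hypothetical:

import asyncio
import aiohttp

async def fetch_image(session: aiohttp.ClientSession, url: str, filename: str) -> None:
    # Hypothetical helper: stream one image to disk in 1 KiB chunks.
    async with session.get(url) as resp:
        resp.raise_for_status()
        with open(filename, 'wb') as fd:
            async for chunk in resp.content.iter_chunked(1024):
                fd.write(chunk)

async def main(urls):
    # One session shared across all downloads; gather() runs them concurrently.
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(
            fetch_image(session, url, f'img_{i}.jpg')
            for i, url in enumerate(urls)
        ))

asyncio.run(main(['https://example.com/a.jpg', 'https://example.com/b.jpg']))

Note that open() and fd.write() are ordinary blocking calls; they are usually fine for image-sized files, but a library such as aiofiles can move disk writes off the event loop if they become a bottleneck.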