I'm trying to read data from a large web page, but I'm getting a MemoryError exception:
import requests
requests.urllib3.disable_warnings()
search_page = "http://www.yachtworld.co.uk/core/listing/cache/searchResults.jsp?ps=99999"
y = requests.get(search_page, timeout=999999, stream=True)
result = y.text
I get the MemoryError exception when I try to read from the result variable, which holds the output of the page.
Is there any way to read the whole data without hitting this exception?
Thanks.
As far as I know, nothing about this problem has changed: there is no way to hold the entire response in memory at once, but you can load the data in chunks, as presented nicely here.
The accepted answer at the link I provided gives a good piece of code for chunking the response:
import requests

def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter: the body is not downloaded until iterated
    r = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
                # f.flush() commented out by recommendation from J.F. Sebastian
    return local_filename
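Since you want the data rather than a downloaded file specifically, you can apply the same chunking pattern directly to your URL. Here is a minimal sketch, reusing the search_page URL and the urllib3 warning suppression from your question; the chunk size of 8192 bytes and the output file name searchResults.html are just placeholders you can change:

import requests

requests.urllib3.disable_warnings()

search_page = "http://www.yachtworld.co.uk/core/listing/cache/searchResults.jsp?ps=99999"

# stream=True defers downloading the body; iter_content then yields it
# piece by piece, so the whole response is never held in memory at once
y = requests.get(search_page, stream=True, timeout=60)
with open("searchResults.html", "wb") as f:
    for chunk in y.iter_content(chunk_size=8192):
        if chunk:  # skip keep-alive chunks
            f.write(chunk)
y.close()

Instead of writing each chunk to a file, you can also parse or process it on the fly; the important point is never to concatenate everything into a single string the way y.text does.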