I am trying to scrape a few pages of a website with Selenium and use the results, but when I run the function twice the error
[WinError 10061] No connection could be made because the target machine actively refused it
appears for the 2nd function call. Here's my approach:
import os
import re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup as soup

opts = webdriver.ChromeOptions()
opts.binary_location = os.environ.get('GOOGLE_CHROME_BIN', None)
opts.add_argument("--headless")
opts.add_argument("--disable-dev-shm-usage")
opts.add_argument("--no-sandbox")
browser = webdriver.Chrome(executable_path="CHROME_DRIVER PATH", options=opts)

lst = []

def search(st):
    for i in range(1, 3):
        url = "https://gogoanime.so/anime-list.html?page=" + str(i)
        browser.get(url)
        req = browser.page_source
        sou = soup(req, "html.parser")
        title = sou.find('ul', class_="listing")
        title = title.find_all("li")
        for j in range(len(title)):
            lst.append(title[j].getText().lower()[1:])
    browser.quit()
    print(len(lst))

search("a")
search("a")
OUTPUT
272
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=58408): Max retries exceeded with url: /session/4b3cb270d1b5b867257dcb1cee49b368/url (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001D5B378FA60>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
This error message...
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=58408): Max retries exceeded with url: /session/4b3cb270d1b5b867257dcb1cee49b368/url (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001D5B378FA60>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
...implies that the attempt to establish a new connection failed, raising MaxRetryError, as no connection could be made to the host.
A couple of things:
First and foremost, as has been discussed elsewhere, max-retries-exceeded exceptions are confusing and the traceback is somewhat misleading. Requests wraps the exception for the user's convenience, and the original exception is part of the message displayed. Requests never retries (it sets retries=0 for urllib3's HTTPConnectionPool), so the error would have been much more canonical without the MaxRetryError and HTTPConnectionPool keywords. So an ideal traceback would have been:
ConnectionError(<class 'socket.error'>: [Errno 1111] Connection refused)
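In other words, underneath all the wrapping the failure is just a refused TCP connection. A minimal sketch that reproduces the underlying error (it reuses the port number from the traceback above purely for illustration; nothing is listening there once the driver is gone):

import socket

s = socket.socket()
try:
    # No server is listening on this port, so the OS refuses the connection:
    # WinError 10061 on Windows, errno 111 (ECONNREFUSED) on Linux.
    s.connect(("127.0.0.1", 58408))
except ConnectionRefusedError as err:
    print(err)
finally:
    s.close()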
Once you have initiated the webdriver and web client session, within def search(st) you invoke get() to access a URL, and in the subsequent lines you also invoke browser.quit(), which calls the /shutdown endpoint; the webdriver and web-client instances are then destroyed completely, closing all the pages/tabs/windows. Hence no connection exists any more.
You can find a couple of relevant detailed discussions in:
- PhantomJS web driver stays in memory
- Selenium : How to stop geckodriver process impacting PC memory, without calling driver.quit()?
In such a situation, when browser.get() is invoked again (on the second call to search(), and likewise on any further iterations), there is no active connection left, hence you see the error.
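A quick way to see this in isolation (a minimal sketch, assuming chromedriver is on the PATH and using example.com as a stand-in URL):

from selenium import webdriver

driver = webdriver.Chrome()            # starts a local chromedriver server
driver.get("https://example.com")      # works: the session and the server are alive
driver.quit()                          # /shutdown: session and server are gone
driver.get("https://example.com")      # fails with the same "connection refused" MaxRetryError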
So a simple solution would be to remove the line browser.quit() from inside search() and keep invoking browser.get(url) within the same browsing context, quitting only once when you are completely done.
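A minimal restructuring of the script above along those lines (a sketch; the driver path and Chrome options are kept as the same placeholders as in the question):

import os
from selenium import webdriver
from bs4 import BeautifulSoup as soup

opts = webdriver.ChromeOptions()
opts.binary_location = os.environ.get('GOOGLE_CHROME_BIN', None)
opts.add_argument("--headless")
opts.add_argument("--disable-dev-shm-usage")
opts.add_argument("--no-sandbox")
browser = webdriver.Chrome(executable_path="CHROME_DRIVER PATH", options=opts)

lst = []

def search(st):
    for i in range(1, 3):
        url = "https://gogoanime.so/anime-list.html?page=" + str(i)
        browser.get(url)                      # the same browsing context is reused
        sou = soup(browser.page_source, "html.parser")
        listing = sou.find('ul', class_="listing")
        for li in listing.find_all("li"):
            lst.append(li.getText().lower()[1:])
    print(len(lst))

search("a")
search("a")
browser.quit()   # quit once, only after every search() call has finished

This way both calls talk to the same live session, and the driver process is shut down exactly once at the end.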
Once you upgrade to Selenium 3.14.1 you will be able to set the connection timeout, see canonical tracebacks, and take the required action.
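If I read the Python bindings correctly, that timeout is a class-level setting on RemoteConnection; something along these lines (the 10-second value is arbitrary, shown only for illustration):

from selenium.webdriver.remote.remote_connection import RemoteConnection

# Assumption: the Selenium Python bindings expose a class-wide HTTP timeout
# on RemoteConnection; set it before creating the driver instance.
RemoteConnection.set_timeout(10)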