Python - selenium webdriver stuck at .get() in a loop

I have a Python code snippet that uses the Selenium WebDriver to loop through some historical baseball odds. The first part of the code is intended to collect all the individual game URLs from the schedule table (around 57 pages that need to be looped through) and store them in a list.

The first time I tested this it worked just fine, but now, for whatever reason, the driver.get() call seems to stop working. The webdriver executes the first .get() in the page-range loop (page 2), but in the next iteration of the loop it gets stuck and never navigates to page 3. There is no error message or crash.

Some manual error checking using print() indicates that all other parts of the code are working fine. What could be the potential reasons for this issue?

EDIT 1: The code actually gets stuck immediately after the first .get() call, not before the second one as stated above. I also noticed that the .get() function works just fine later in the code when looping through game URLs. For some reason it is specifically the pages "http://www.oddsportal.com/baseball/usa/mlb-2017/results/#/page/2/", "http://www.oddsportal.com/baseball/usa/mlb-2017/results/#/page/3/", etc., that it gets stuck on.

import time
import pandas as pd
from string import digits
from selenium import webdriver

season = str(2017)

URL = "http://www.oddsportal.com/baseball/usa/mlb-" + season + "/results/#/"
chrome_path = r"C:\Users\dansl110\Dropbox\Betting Project\chromedriver.exe"

OddsList = pd.DataFrame(columns=["Date", "HomeTeam", "AwayTeam", "HomeOdds", 
"AwayOdds", "Accuracy"])

GameURLs = []
StartURL = 2

#Gets GameURLs and EndPage from Page 1
driver = webdriver.Chrome(chrome_path)
driver.get(URL)
elems = driver.find_elements_by_xpath("//a[@href]")
for elem in elems:
    link = elem.get_attribute("href")
    if "/results/#/page/" in link:
        EndURL = int(''.join(c for c in link if c in digits))
    elif "/mlb" in link and len(str(link)) > 58 and "results" not in link:
        GameURLs.append(link)

PageRange = range(StartURL, EndURL - 5)

#Gets remaining GameURLs
for page in PageRange:
    oldURL = URL
    URL = ("http://www.oddsportal.com/baseball/usa/mlb-" + season
           + "/results/#/page/" + str(page) + "/")
    #This .get() works only during the first iteration of the range loop
    driver.get(URL)
    time.sleep(3)
    elems = driver.find_elements_by_xpath("//a[@href]")
    for elem in elems:
        link = elem.get_attribute("href")
        if "/nhl" in link and len(str(link)) > 65 and "results" not in link:
            GameURLs.append(link)
asked Feb 07 '18 by Daniel Slätt

2 Answers

I had this same problem starting today. What I found was that any of my machines running Chrome 64.x had an intermittent hanging issue, but the machines running 63.x did not. Go to chrome://settings/help to check which version you are on.

If you are running that version, try downloading ChromeDriver 2.35 from here: https://sites.google.com/a/chromium.org/chromedriver/downloads

I tried this and it seemed to help a little with the hanging, but it still occurred from time to time.

The only thing that fixed it completely was rolling Chrome back to version 63.

Hope it helps you.

EDIT:

I found this thread that will help! Add this to your script before you create the driver:

from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--disable-browser-side-navigation')
driver = webdriver.Chrome('your/path/to/chromedriver.exe', chrome_options=chrome_options)

Once Chrome version 65 comes out, this will be fixed. In the meantime, use the workaround above if you are still on 64.

answered Nov 05 '22 by PixelEinstein

Try moving your driver definition into the loop. I had the same issue and this worked for me. It slows the code down a little, but at least it works.
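A minimal sketch of that workaround, using the same URL pattern as the question. The driver factory, helper names, and page range are illustrative, not from the original code; the idea is simply that a fresh driver per page means a hung session never carries over to the next iteration:

```python
def build_page_url(season, page):
    # Same URL pattern as in the question
    return ("http://www.oddsportal.com/baseball/usa/mlb-" + str(season)
            + "/results/#/page/" + str(page) + "/")

def collect_game_urls(make_driver, season, pages):
    """Create a fresh driver for every page so a hung session
    never carries over to the next iteration."""
    game_urls = []
    for page in pages:
        driver = make_driver()  # new browser instance per page
        try:
            driver.get(build_page_url(season, page))
            for elem in driver.find_elements_by_xpath("//a[@href]"):
                link = elem.get_attribute("href")
                if "/mlb" in link and "results" not in link:
                    game_urls.append(link)
        finally:
            driver.quit()  # always close the browser, even on errors
    return game_urls

# Usage with the question's setup (path assumed):
#   from selenium import webdriver
#   urls = collect_game_urls(lambda: webdriver.Chrome(chrome_path), 2017, range(2, 58))
```

The try/finally matters here: without driver.quit() on every iteration you leak one Chrome process per page.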

answered Nov 05 '22 by Stephane