
"Failed to decode response from marionette" message in Python/Firefox headless scraping script

Good day. I've done a number of searches on here and Google and have yet to find a solution that addresses this problem.

The scenario is:

I have a Python (2.7) script that loops through a number of URLs (think Amazon pages, scraping reviews). Each page has the same HTML layout, so I'm just scraping different information from each one. I use Selenium with a headless browser, as these pages have JavaScript that needs to execute before the information can be grabbed.

I run this script on my local machine (OSX 10.10). Firefox is the latest, v59. Selenium is at version 3.11.0, and I am using geckodriver v0.20.

Locally, this script has no issues: it runs through all the URLs and scrapes every page without problems.

Now, when I put the script on my server, the only difference is that it runs Ubuntu 16.04 (32-bit). I use the appropriate geckodriver (still v0.20), but everything else is the same (Python 2.7, Selenium 3.11). The headless browser appears to crash at random, and after that none of the browserObj.get('url...') calls work any more.

The error message says:

Message: failed to decode response from marionette

Any further Selenium requests for pages return the error:

Message: tried to run command without establishing a connection


To show some code:

When I create the driver:

    options = Options()
    options.set_headless(headless=True)

    driver = webdriver.Firefox(
        firefox_options=options,
        executable_path=config.GECKODRIVER
    )

The driver is passed to the script's function as the parameter browserObj, which is used to request specific pages; once a page has loaded, its source is handed to BeautifulSoup for parsing:

    browserObj.get(url)

    soup = BeautifulSoup(browserObj.page_source, 'lxml')

The error seems to point to the BeautifulSoup line (really the browserObj.page_source call) as the point at which the browser crashes.

What is likely causing this, and what can I do to resolve the issue?


Edit: adding the stack trace, which points to the same line:

Traceback (most recent call last):
  File "main.py", line 164, in <module>
    getLeague
  File "/home/ps/dataparsing/XXX/yyy.py", line 48, in BBB
    soup = BeautifulSoup(browserObj.page_source, 'lxml')
  File "/home/ps/AAA/projenv/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 670, in page_source
    return self.execute(Command.GET_PAGE_SOURCE)['value']
  File "/home/ps/AAA/projenv/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 312, in execute
    self.error_handler.check_response(response)
  File "/home/ps/AAA/projenv/local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
WebDriverException: Message: Failed to decode response from marionette

Note: this script used to work with Chrome. Because the server is 32-bit, I can only use ChromeDriver v2.33, which only supports Chrome v60-62. Chrome is currently at v65, and on DigitalOcean I don't seem to have an easy way to revert to an older version, which is why I am stuck with Firefox.

asked Apr 09 '18 by Reily Bourne

3 Answers

For anyone else experiencing this when running Selenium WebDriver in a Docker container: increasing the container's memory to 2 GB fixes the issue.

I suspect this affects physical machines too, given that the OP fixed their issue by upgrading their server's RAM to 2 GB, but it could be a coincidence.
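
For reference, the memory limit is something you set when the container is started; a minimal example of where the flag goes (the image name below is only a placeholder, and --shm-size is optional but commonly raised for browsers as well):

    docker run --memory=2g --shm-size=2g your-scraper-image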

answered Oct 11 '22 by myol


I still don't know why this is happening, but I may have found a workaround. I read in some documentation that there may be a race condition (on what, I am not sure, since there shouldn't be two items competing for the same resources).

I changed the scraping code to do this:

    import time

    browserObj.get(url)
    time.sleep(3)

    soup = BeautifulSoup(browserObj.page_source, 'lxml')

There is no particular reason why I chose 3 seconds, but since adding this delay I have not seen the Message: failed to decode response from marionette error for any of the URLs in my list.
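
If a fixed delay feels too arbitrary, an explicit wait on a specific element should be more reliable than sleeping for a guessed number of seconds. A rough sketch, assuming the page exposes some element worth waiting for (the div.review selector below is only a placeholder):

    from bs4 import BeautifulSoup
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    browserObj.get(url)

    # wait up to 10 seconds for the element the scrape actually needs,
    # instead of sleeping for a fixed interval
    WebDriverWait(browserObj, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, 'div.review'))
    )

    soup = BeautifulSoup(browserObj.page_source, 'lxml')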


Update: October, 2018

This continues to be an issue over six months later. Firefox, GeckoDriver, Selenium and PyVirtualDisplay have all been updated to their latest versions, yet the error kept recurring spontaneously and without pattern: sometimes the script worked and sometimes it did not.

What fixed the issue was increasing the RAM on my server from 1 GB to 2 GB. Since the increase there have been no failures of this sort.

answered Oct 11 '22 by Reily Bourne


This error message...

Message: failed to decode response from marionette

...implies that the communication between GeckoDriver and Marionette was interrupted/broken.


Some of the reasons and solutions for this issue are as follows:

  • In the discussion Crash during command execution results in "Internal Server Error: Failed to decode response from marionette", @whimboo mentions that while executing your tests Selenium may force a crash of Firefox's parent process, with an error such as:

    DEBUG   <- 500 Internal Server Error {"value":{"error":"unknown error","message":"Failed to decode response from marionette","stacktrace":...}...}
    
    • Analysis: the current message is somewhat misleading, and GeckoDriver needs to handle this situation better by reporting that the application has unexpectedly quit. This issue is still open.
  • In the discussion Failed to decode response from marionette with Firefox >= 65, @rafagonc mentioned that this issue can occur when using GeckoDriver/FirefoxDriver or ChromeDriver in a Docker environment, due to zombie processes that hang around even after invoking driver.quit() (see the clean-up sketch after this list). At times, when you open many browsing instances one after another, your system may run out of memory or out of PIDs. See: Selenium using too much RAM with Firefox

    • As a solution, @andreastt mentions that the following configuration should resolve the out-of-memory issue with Docker:

      --memory 1024mb --shm-size 2g
      

Steps: Configure SHM size in the docker container

  • Similarly, while executing your tests on localhost, it is advisable to keep at least the following (minimum) configuration:

    --memory 1024mb
    
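To guard against the zombie-process scenario described in the list above, it also helps to make sure the browser is always shut down, even if a scrape raises an exception. A minimal sketch based on the setup from the question (the urls list and the loop body are illustrative):

    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options

    options = Options()
    options.set_headless(headless=True)

    driver = webdriver.Firefox(firefox_options=options)
    try:
        for url in urls:  # urls: whatever list of pages you are scraping
            driver.get(url)
            # ... parse driver.page_source here ...
    finally:
        driver.quit()  # always release the Firefox and geckodriver processes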

Additional Considerations

This issue can also occur due to incompatibility between the versions of the binaries you are using.

Solution:

  • Upgrade JDK to a recent level (JDK 8u341).
  • Upgrade Selenium to the current level (version 3.141.59).
  • Upgrade GeckoDriver to v0.26.0.
  • Upgrade Firefox to v72.0.
  • Execute your test as a non-root user.

GeckoDriver, Selenium and Firefox Browser compatibility chart



tl; dr

[e10s] Crash in libyuv::ARGBSetRow_X86


Reference

You can find a relevant detailed discussion in:

  • Browsing context has been discarded using GeckoDriver Firefox through Selenium
answered Oct 11 '22 by undetected Selenium