
Why do we still need a parser like BeautifulSoup if we can use Selenium?

I am currently using Selenium to crawl data from some websites. Unlike with urllib, it seems that I do not really need a parser like BeautifulSoup to parse the HTML: I can simply find an element with Selenium and use WebElement.text to get the data I need. Still, I have seen people using Selenium and BeautifulSoup together for web crawling. Is that really necessary? Are there any special features bs4 offers that improve the crawling process? Thank you.

asked Apr 02 '17 by jackycflau


1 Answer

Selenium itself is quite powerful in terms of locating elements, and it basically has everything you need for extracting data from HTML. The problem is that it is slow: every single Selenium command goes through the JSON Wire protocol over HTTP, which adds substantial overhead.

To improve the performance of the HTML-parsing part, it is usually much faster to let BeautifulSoup or lxml parse the rendered HTML retrieved via driver.page_source.
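To make the overhead difference concrete, here is a rough sketch contrasting per-element Selenium calls with a single page-source parse. The URL and the ".result" CSS selector are hypothetical placeholders, not from the question:

```python
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/results")  # placeholder URL

# Pure-Selenium extraction: every find/text call is a round trip to the driver
selenium_titles = [el.text for el in driver.find_elements(By.CSS_SELECTOR, ".result")]

# Parse the already-rendered HTML once, then extract everything in-process
soup = BeautifulSoup(driver.page_source, "lxml")
bs4_titles = [node.get_text(strip=True) for node in soup.select(".result")]

driver.quit()
```

For a page with hundreds of matching elements, the second approach avoids hundreds of driver round trips.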


In other words, a common workflow for a dynamic web page looks like this (a code sketch follows the list):

  • open the page in a browser controlled by selenium
  • perform the necessary browser actions
  • once the desired data is on the page, get the driver.page_source and close the browser
  • pass the page source to an HTML parser for further parsing
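
A minimal sketch of that workflow, assuming a hypothetical URL, button id, and item selector standing in for a real site:

```python
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    # 1. open the page in a browser controlled by selenium
    driver.get("https://example.com/catalog")  # placeholder URL

    # 2. perform the necessary browser actions
    driver.find_element(By.ID, "load-more").click()  # hypothetical button

    # 3. once the desired data is on the page, grab the rendered HTML
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, ".item"))
    )
    html = driver.page_source
finally:
    driver.quit()  # close the browser as early as possible

# 4. pass the page source to an HTML parser for further parsing
soup = BeautifulSoup(html, "lxml")
for item in soup.select(".item"):
    print(item.get_text(strip=True))
```

Closing the browser before the parsing step keeps the expensive Selenium session short, while all of the detailed extraction happens offline in bs4.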
answered Sep 29 '22 by alecxe