
Click a Button in Scrapy

I'm using Scrapy to crawl a webpage. Some of the information I need only pops up when you click on a certain button (and of course it also appears in the HTML code after clicking).

I found out that Scrapy can handle forms (like logins) as shown here. But the problem is that there is no form to fill out, so it's not exactly what I need.

How can I simply click a button, which then shows the information I need?

Do I have to use an external library like mechanize or lxml?

asked Jul 13 '11 by naeg


People also ask

How do you click a button in Scrapy?

You cannot click a button with Scrapy. You can send requests and receive responses. It's up to you to interpret the response with a separate JavaScript engine.
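For illustration, here is a minimal sketch of that request/response model (the spider name and URL are made up for the example):

    import scrapy

    class ExampleSpider(scrapy.Spider):
        name = 'example'
        start_urls = ['https://www.example.org']  # hypothetical URL

        def parse(self, response):
            # 'response' is simply the HTML the server returned; there is
            # no browser and no DOM events, so nothing gets "clicked".
            yield {'title': response.css('title::text').get()}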

Is Scrapy better than selenium?

Selenium is an excellent automation tool and Scrapy is by far the most robust web scraping framework. In terms of speed and efficiency, Scrapy is the better choice for web scraping. When dealing with JavaScript-based websites where AJAX/PJAX requests have to be made, Selenium can work better.

How do you use Scrapy in Python?

When working with Scrapy, you first need to create a Scrapy project. Inside it, create a spider to fetch the data: move to the spiders folder and create a Python file there, for example gfgfetch.py.
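As a rough sketch (the URL and selector here are illustrative, not from the original answer), gfgfetch.py might look like this, created with scrapy startproject and run with scrapy crawl gfgfetch:

    import scrapy

    class GfgSpider(scrapy.Spider):
        name = 'gfgfetch'
        start_urls = ['https://www.example.org']  # illustrative URL

        def parse(self, response):
            # Yield every link on the page as a simple item.
            for href in response.css('a::attr(href)').getall():
                yield {'link': href}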


2 Answers

Scrapy cannot interpret JavaScript.

If you absolutely must interact with the JavaScript on the page, you want to be using Selenium.

If using Scrapy, the solution to the problem depends on what the button is doing.

If it's just showing content that was previously hidden, you can scrape the data without a problem; it doesn't matter that it isn't visible in the browser, because the HTML is still there.
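For example, content hidden with CSS can be selected like any other node. This snippet is a sketch with a hypothetical element id:

    # Inside a spider's parse() method: the div may be display:none in the
    # browser, but it is present in the raw HTML Scrapy downloaded.
    hidden_text = response.css('div#details ::text').getall()

    # The equivalent XPath works just as well:
    hidden_text = response.xpath('//div[@id="details"]//text()').getall()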

If it's fetching the content dynamically via AJAX when the button is pressed, the best thing to do is to view the HTTP request that goes out when you press the button, using a tool like Firebug (or your browser's developer-tools network panel). You can then just request the data directly from that URL.
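Sketched out, assuming the network panel shows the button firing a GET to a JSON endpoint (the endpoint and URLs here are hypothetical):

    import json
    import scrapy

    class AjaxSpider(scrapy.Spider):
        name = 'ajax_details'
        start_urls = ['https://www.example.org/item/123']  # hypothetical page

        def parse(self, response):
            # Issue the same request the button's JavaScript would have made.
            yield scrapy.Request('https://www.example.org/api/details?id=123',
                                 callback=self.parse_details)

        def parse_details(self, response):
            # Endpoints like this usually return JSON rather than HTML.
            yield json.loads(response.text)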

Do I have to use an external library like mechanize or lxml?

If you want to interpret JavaScript then yes, you need to use a different library, although neither of those two fits the bill: neither of them knows anything about JavaScript. Selenium is the way to go.
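A standalone Selenium sketch of clicking such a button (the URL and element id are hypothetical):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get('https://www.example.org/page')
        # Clicking works here because a real browser runs the page's JavaScript.
        driver.find_element(By.ID, 'show-details').click()
        html = driver.page_source  # now contains the content revealed by the click
    finally:
        driver.quit()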

If you can give the URL of the page you're working on scraping I can take a look.

answered Sep 18 '22 by Acorn


A Selenium-driven browser provides a very nice solution. Here is an example (pip install -U selenium):

    from scrapy import Spider, Request
    from selenium import webdriver
    from selenium.common.exceptions import NoSuchElementException
    from selenium.webdriver.common.by import By

    class northshoreSpider(Spider):
        name = 'xxx'
        allowed_domains = ['www.example.org']
        start_urls = ['https://www.example.org']

        def __init__(self):
            self.driver = webdriver.Firefox()

        def parse(self, response):
            self.driver.get('https://www.example.org/abc')

            while True:
                try:
                    # Click through the pagination until the "next" button disappears.
                    next_button = self.driver.find_element(By.XPATH, '//*[@id="BTN_NEXT"]')
                    url = 'http://www.example.org/abcd'
                    yield Request(url, callback=self.parse2)
                    next_button.click()
                except NoSuchElementException:
                    break

            self.driver.close()

        def parse2(self, response):
            print('you are here!')
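One design note: because self.driver.close() runs at the end of parse, the browser is only cleaned up if the pagination loop finishes normally. A more robust pattern is to override the spider's closed() hook and call self.driver.quit() there.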
answered Sep 16 '22 by Nima Soroush