 

Crawl a site that has infinite scrolling using Python

I have been doing research, and so far the Python package I plan on using is Scrapy. Now I am trying to figure out a good way to build a scraper with Scrapy that can crawl a site with infinite scrolling. After digging around, I found a package called Selenium, which has a Python module. I have a feeling someone has already combined Scrapy and Selenium to scrape sites with infinite scrolling. It would be great if someone could point me towards an example.

asked Mar 28 '14 by add-semi-colons


People also ask

How do you crawl an infinite scrolling page in Python?

You now have the skills to analyze a web page and test code in the Python shell. You can put the Scrapy spider code at scrapy_spider/spiders/infinite_scroll.py and then run the command scrapy crawl infinite_scroll to start it.
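For context, a minimal sketch of such a spider is shown below. The API endpoint, its page parameter, and the JSON field names are hypothetical placeholders; infinite-scroll pages typically fetch data from some paginated Ajax endpoint, which you would discover in the browser's network tab:

import scrapy

class InfiniteScrollSpider(scrapy.Spider):
    # Hypothetical spider: the URL, "page" parameter, and JSON field
    # names below are placeholders, not a real site's API.
    name = "infinite_scroll"
    api_url = "https://example.com/api/posts?page={page}"

    def start_requests(self):
        yield scrapy.Request(self.api_url.format(page=1), cb_kwargs={"page": 1})

    def parse(self, response, page):
        data = response.json()
        for item in data.get("items", []):
            yield {"title": item.get("title")}
        # Keep requesting the next "scroll" of data until the endpoint
        # returns an empty page
        if data.get("items"):
            yield scrapy.Request(self.api_url.format(page=page + 1),
                                 cb_kwargs={"page": page + 1})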

Does Google crawl infinite scroll?

The short answer is: Yes, Googlebot can crawl and index webpages that utilize infinite scrolling.


4 Answers

You can use Selenium to scrape infinite-scrolling websites like Twitter or Facebook.

Step 1: Install Selenium using pip

pip install selenium 

Step 2: Use the code below to automate the infinite scroll and extract the page source

from selenium import webdriver
from selenium.webdriver.common.by import By
import unittest, time

class Sel(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.implicitly_wait(30)
        self.base_url = "https://twitter.com"

    def test_sel(self):
        driver = self.driver
        driver.get(self.base_url + "/search?q=stackoverflow&src=typd")
        driver.find_element(By.LINK_TEXT, "All").click()
        # Scroll to the bottom repeatedly so more results keep loading
        for i in range(1, 100):
            driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(4)  # give the newly loaded content time to render
        # Grab the fully loaded page source for parsing
        html_source = driver.page_source
        data = html_source.encode('utf-8')


if __name__ == "__main__":
    unittest.main()

The for loop scrolls through the infinite page repeatedly; each scroll loads more posts, and once the loop finishes you can extract the loaded data from the page source.

Step 3: Print the data if required.
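For example, once the scrolling loop has finished you could pull the loaded results out of the page and print them. This is a sketch only; the selector below is an assumption and depends on the site's actual markup:

from selenium.webdriver.common.by import By

# Hypothetical selector; inspect the page to find the real one
tweets = driver.find_elements(By.CSS_SELECTOR, "p.tweet-text")
for tweet in tweets:
    print(tweet.text)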

answered Oct 22 '22 by Pawan Kumar


This is short and simple code that works for me:

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://example.com")  # hypothetical target page

SCROLL_PAUSE_TIME = 20

# Get the initial scroll height
last_height = driver.execute_script("return document.body.scrollHeight")

while True:
    # Scroll down to the bottom
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

    # Wait for the page to load
    time.sleep(SCROLL_PAUSE_TIME)

    # Calculate the new scroll height and compare it with the last one
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

posts = driver.find_elements(By.CLASS_NAME, "post-text")

for block in posts:
    print(block.text)

answered Oct 22 '22 by Sijin John


from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("http://www.something.com")
# Find the bottom-most element with the given id and scroll it into view
lastElement = driver.find_elements(By.ID, "someId")[-1]
lastElement.send_keys(Keys.NULL)  # sending a no-op key scrolls the element into view

This will open a page, find the bottom-most element with the given id, and then scroll that element into view. You'll have to keep querying the driver to get the last element as the page loads more, and I've found this to be pretty slow as pages get large. The time is dominated by the call to driver.find_elements because I don't know of a way to explicitly query only the last element on the page.

Through experimentation, you might find there is an upper limit to the number of elements the page loads dynamically. It would be best to write something that keeps loading until that number is reached and only then makes the final driver.find_elements call, as in the sketch below.
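A rough sketch of that idea, reusing the hypothetical "someId" elements from above: keep scrolling until the element count stops growing or an assumed cap is reached, and only then run the final query:

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("http://www.something.com")

MAX_ELEMENTS = 500  # assumed cap; tune it for the target page
last_count = 0
while True:
    elements = driver.find_elements(By.ID, "someId")
    # Stop once the page stops loading new elements or the cap is hit
    if len(elements) == last_count or len(elements) >= MAX_ELEMENTS:
        break
    last_count = len(elements)
    elements[-1].send_keys(Keys.NULL)  # scroll the last element into view
    time.sleep(2)  # give the newly loaded content time to arrive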

answered Oct 22 '22 by maxywb


For infinite scrolling, the data is fetched through Ajax calls, and you can request those directly:

1. Open the web browser's developer tools and switch to the network tab.
2. Clear the previous request history.
3. Scroll the webpage; you will see a new request fire for the scroll event.
4. Open that request's headers to find the URL it called.
5. Copy and paste the URL into a separate tab; you will see the result of the Ajax call.
6. Keep requesting that URL, page by page, until you reach the end of the data.
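A minimal sketch of that approach using the requests library; the URL and its page parameter are placeholders for whatever endpoint you actually find in the network tab:

import requests

# Placeholder endpoint copied from the browser's network tab;
# the real URL and paging parameter vary per site
url = "https://example.com/ajax/feed?page={page}"

page = 1
while True:
    response = requests.get(url.format(page=page))
    response.raise_for_status()
    items = response.json().get("items", [])
    if not items:  # end of the data
        break
    for item in items:
        print(item)
    page += 1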

answered Oct 22 '22 by Sanjeev Ravi