
python - web scraping an ajax website using BeautifulSoup

I am trying to scrape an e-commerce site that uses AJAX calls to load its next pages.

I can scrape the data on page 1, but page 2 loads automatically through an AJAX call when I scroll to the bottom of page 1.

My code:

from bs4 import BeautifulSoup as soup
from urllib.request import urlopen as ureq

my_url = 'http://www.shopclues.com/mobiles-smartphones.html'
page = ureq(my_url).read()
page_soup = soup(page, "html.parser")
containers = page_soup.findAll("div", {"class": "column col3"})
for container in containers:
    name = container.h3.text
    price = container.find("span", {'class': 'p_price'}).text
    print("Name : " + name.replace(",", " "))
    print("Price : " + price)

# Pages 2-6 load through this AJAX endpoint (36 products per page)
for i in range(2, 7):
    my_url = "http://www.shopclues.com/ajaxCall/moreProducts?catId=1431&filters=&pageType=c&brandName=&start=" + str(36 * (i - 1)) + "&columns=4&fl_cal=1&page=" + str(i)
    page = ureq(my_url).read()
    print(page)
    page_soup = soup(page, "html.parser")
    containers = page_soup.findAll("div", {"class": "column col3"})
    for container in containers:
        name = container.h3.text
        price = container.find("span", {'class': 'p_price'}).text
        print("Name : " + name.replace(",", " "))
        print("Price : " + price)

I printed the AJAX response read by ureq to check whether I could open the AJAX page at all, and all print(page) produced was b' ' (an empty byte string).

Please suggest a way to scrape the remaining data.
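One thing worth checking (an assumption on my part, since the empty b' ' response isn't diagnosed in the question): endpoints like /ajaxCall/moreProducts often return nothing unless the request carries headers a browser would send, such as X-Requested-With and a Referer. A sketch of attaching them with urllib; the exact headers this particular endpoint checks are hypothetical:

```python
from urllib.request import Request

# Hypothetical header set -- inspect the browser's network tab to confirm
# which headers the endpoint actually requires.
ajax_url = ("http://www.shopclues.com/ajaxCall/moreProducts?catId=1431"
            "&filters=&pageType=c&brandName=&start=36&columns=4&fl_cal=1&page=2")
req = Request(ajax_url, headers={
    "X-Requested-With": "XMLHttpRequest",  # marks the request as AJAX
    "Referer": "http://www.shopclues.com/mobiles-smartphones.html",
    "User-Agent": "Mozilla/5.0",
})
# The page would then be fetched with: page = ureq(req).read()
# Note: urllib stores header keys in capitalized form
print(req.get_header("X-requested-with"))
```

If the response is still empty with browser-like headers, the server may require cookies from the initial page load, which is where a browser-driving tool like Selenium becomes the simpler option.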

asked May 24 '17 17:05 by sachin rathod

People also ask

Is Scrapy faster than BeautifulSoup?

Scrapy is incredibly fast. Its ability to send asynchronous requests makes it hands-down faster than BeautifulSoup. This means that you'll be able to scrape and extract data from many pages at once.

Is BeautifulSoup a web scraper?

BeautifulSoup is not a web scraping library per se. It is a library that allows you to efficiently and easily pull out information from HTML. In the real world, it is often used for web scraping projects. So, for starters, we need an HTML document.
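To illustrate the point above, here is a minimal example of pulling information out of an HTML document with BeautifulSoup (the HTML fragment is made up for illustration; it only mimics the product-card structure used in the question):

```python
from bs4 import BeautifulSoup

# A hypothetical product card, shaped like the ones in the question
html = """
<div class="column col3">
  <h3>Example Phone</h3>
  <span class="p_price">Rs. 9999</span>
</div>
"""
doc = BeautifulSoup(html, "html.parser")
card = doc.find("div", {"class": "column col3"})
print(card.h3.text)                                  # Example Phone
print(card.find("span", {"class": "p_price"}).text)  # Rs. 9999
```

BeautifulSoup only parses HTML you hand it; fetching that HTML (with urllib, requests, or a browser driver) is a separate step.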


1 Answer

from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup as soup
import random
import time

chrome_options = webdriver.ChromeOptions()
prefs = {"profile.default_content_setting_values.notifications": 2}
chrome_options.add_experimental_option("prefs", prefs)

# A randomizer for the delay
seconds = 5 + (random.random() * 5)
# create a new Chrome session
driver = webdriver.Chrome(options=chrome_options)
driver.implicitly_wait(30)
# driver.maximize_window()

# navigate to the application home page
driver.get("http://www.shopclues.com/mobiles-smartphones.html")
time.sleep(2 * seconds)
# Add more to range for more phones
for i in range(1):
    # Click the "load more" button via JavaScript so overlays can't block it
    element = driver.find_element(By.ID, "moreProduct")
    driver.execute_script("arguments[0].click();", element)
    # Give the AJAX content time to load
    time.sleep(2 * seconds)
html = driver.page_source
page_soup = soup(html, "html.parser")
containers = page_soup.findAll("div", {"class": "column col3"})
for container in containers:
    # Skip containers that are missing a name or price
    try:
        name = container.h3.text
        price = container.find("span", {'class': 'p_price'}).text
        print("Name : " + name.replace(",", " "))
        print("Price : " + price)
    except AttributeError:
        continue
driver.quit()

I used Selenium to load the website and click the button that loads more results, then took the resulting HTML and fed it into your BeautifulSoup code.

answered Nov 11 '22 21:11 by ben taylor