Scrape websites using scrapy

I am trying to scrape a website with Scrapy, but I am having trouble scraping all the products from the site because it uses endless scrolling...

I can scrape the data below for only 52 items, but there are 3,824 items in total.

hxs.select("//span[@class='itm-Catbrand strong']").extract()
hxs.select("//span[@class='itm-price ']").extract()
hxs.select("//span[@class='itm-title']").extract()

If I use hxs.select("//div[@id='content']/div/div/div").extract(), then it extracts the whole item list, but it won't filter any further. How do I scrape all the items?
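The flat selectors above return three separate lists, so it is unclear which brand, price, and title belong together. A minimal sketch of per-item scoping, using only the standard library's ElementTree in place of Scrapy's selectors, on a simplified hypothetical fragment modelled on the site's markup:

```python
# Sketch only: ElementTree stands in for Scrapy's HtmlXPathSelector, and
# the HTML below is a hypothetical, simplified version of the site's markup.
import xml.etree.ElementTree as ET

html = """
<div id="content">
  <li class="itm">
    <span class="itm-Catbrand strong">Phosphorus</span>
    <span class="itm-price ">Rs. 1699</span>
    <span class="itm-title">Black Moccasins</span>
  </li>
  <li class="itm">
    <span class="itm-Catbrand strong">Nike</span>
    <span class="itm-price ">Rs. 2999</span>
    <span class="itm-title">Running Shoes</span>
  </li>
</div>
"""

root = ET.fromstring(html)
items = []
for li in root.findall(".//li[@class='itm']"):
    # The leading '.' makes each query relative to this item node,
    # so the three fields stay grouped per product.
    items.append({
        "brand": li.find(".//span[@class='itm-Catbrand strong']").text,
        "price": li.find(".//span[@class='itm-price ']").text,
        "title": li.find(".//span[@class='itm-title']").text,
    })
print(items)
```

In Scrapy the same idea is iterating over the item nodes and calling select with a relative XPath (one starting with `.`) on each.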

I have tried the following, but I get the same result. Where am I going wrong?

def parse(self, response):
    filename = response.url.split("/")[-2]
    with open(filename, 'wb') as f:
        f.write(response.body)
    # One request per extra page: n must be converted to a string before
    # concatenation, and the requests must be yielded (a return inside
    # the loop would stop after the first page).
    for n in [2, 3, 4, 5, 6]:
        yield Request(url="http://www.jabong.com/men/shoes/?page=" + str(n),
                      headers={"Referer": "http://www.jabong.com/men/shoes/",
                               "X-Requested-With": "XMLHttpRequest"})
Vaibhav Jain asked May 15 '13 08:05

1 Answer

As you have guessed, this website uses JavaScript to load more items when you scroll the page.

Using the developer tools included in my browser (Ctrl-Shift-I in Chromium), I saw in the Network tab that the JavaScript included in the page performs the following requests to load more items:

GET http://www.website-your-are-crawling.com/men/shoes/?page=2 # 2,3,4,5,6 etc...
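As a rough sketch of how many such paged requests are needed, assuming the counts mentioned in the question (52 items per response, 3,824 items in total):

```python
# Back-of-the-envelope: the counts come from the question, and the URL is
# the answer's placeholder domain, not a real endpoint.
import math

TOTAL_ITEMS = 3824
ITEMS_PER_PAGE = 52
pages = math.ceil(TOTAL_ITEMS / ITEMS_PER_PAGE)
page_urls = ["http://www.website-your-are-crawling.com/men/shoes/?page=%d" % p
             for p in range(2, pages + 1)]
print(pages)  # number of result pages the crawl must cover
```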

The web server responds with documents of the following type:

<li id="PH969SH70HPTINDFAS" class="itm hasOverlay unit size1of4 ">
  <div id="qa-quick-view-btn" class="quickviewZoom itm-quickview ui-buttonQuickview l-absolute pos-t" title="Quick View" data-url ="phosphorus-Black-Moccasins-233629.html" data-sku="PH969SH70HPTINDFAS" onClick="_gaq.push(['_trackEvent', 'BadgeQV','Shown','OFFER INSIDE']);">Quick view</div>

                                    <div class="itm-qlInsert tooltip-qlist  highlightStar"
                     onclick="javascript:Rocket.QuickList.insert('PH969SH70HPTINDFAS', 'catalog');
                                             return false;" >
                                              <div class="starHrMsg">
                         <span class="starHrMsgArrow">&nbsp;</span>
                         Save for later                         </div>
                                        </div>
                <a id='cat_105_PH969SH70HPTINDFAS' class="itm-link sobrTxt" href="/phosphorus-Black-Moccasins-233629.html" 
                                    onclick="fireGaq('_trackEvent', 'Catalog to PDP', 'men--Shoes--Moccasins', 'PH969SH70HPTINDFAS--1699.00--', this),fireGaq('_trackEvent', 'BadgePDP','Shown','OFFER INSIDE', this);">
                    <span class="lazyImage">
                        <span style="width:176px;height:255px;" class="itm-imageWrapper itm-imageWrapper-PH969SH70HPTINDFAS" id="http://static4.jassets.com/p/Phosphorus-Black-Moccasins-6668-926332-1-catalog.jpg" itm-img-width="176" itm-img-height="255" itm-img-sprites="4">
                            <noscript><img src="http://static4.jassets.com/p/Phosphorus-Black-Moccasins-6668-926332-1-catalog.jpg" width="176" height="255" class="itm-img"></noscript>
                        </span>                            
                    </span>

                                            <span class="itm-budgeFlag offInside"><span class="flagBrdLeft"></span>OFFER INSIDE</span>                       
                                            <span class="itm-Catbrand strong">Phosphorus</span>
                    <span class="itm-title">
                                                                                Black Moccasins                        </span>

These documents contain more items.

So, to get the full list of items, you will have to return Request objects from the parse method of your Spider (see the Spider class documentation) to tell Scrapy that it should load more data:

import re

def parse(self, response):
    # ... Extract items in the page using extractors

    # n is the number of the next "page" to parse. You can get it from
    # response.url by extracting the number at the end and adding 1
    # (defaulting to 2 when the URL has no page parameter yet).
    match = re.search(r"page=(\d+)$", response.url)
    n = int(match.group(1)) + 1 if match else 2

    # It is VERY IMPORTANT to set the Referer and X-Requested-With headers
    # here, because that is how the website detects whether the request was
    # made by JavaScript or directly by following a link.
    req = Request(url="http://www.website-your-are-crawling.com/men/shoes/?page=" + str(n),
                  headers={"Referer": "http://www.website-your-are-crawling.com/men/shoes/",
                           "X-Requested-With": "XMLHttpRequest"})
    return req  # and your items

Oh, and by the way (in case you want to test): you can't just load http://www.website-your-are-crawling.com/men/shoes/?page=2 in your browser to see what it returns, because the website will redirect you to the global page (i.e. http://www.website-your-are-crawling.com/men/shoes/) if the X-Requested-With header is different from XMLHttpRequest.
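A toy sketch of that server-side behaviour, with a hypothetical serve() function standing in for the site's actual logic (which we cannot see):

```python
# Hypothetical model of the check described above: requests without
# "X-Requested-With: XMLHttpRequest" get redirected to the global page
# instead of receiving the paged fragment.
def serve(headers, page):
    if headers.get("X-Requested-With") != "XMLHttpRequest":
        return ("302 redirect", "/men/shoes/")
    return ("200 ok", "/men/shoes/?page=%d" % page)

browser = serve({}, 2)                                    # plain browser visit
ajax = serve({"X-Requested-With": "XMLHttpRequest"}, 2)   # scripted request
print(browser, ajax)
```

This is why the spider must send the header explicitly: Scrapy does not add it on its own.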

Xion345 answered Oct 07 '22 06:10