
Scrapy - Recursively Scrape Web Pages and Save Content as HTML Files

Tags:

scrapy

I am using Scrapy to extract the information in a particular tag of each web page and then save those web pages as HTML files. E.g. http://www.austlii.edu.au/au/cases/cth/HCA/1945/ lists web pages related to judicial cases. I want to go to each link and save only the content related to the particular judicial case as an HTML page, e.g. go to http://www.austlii.edu.au/au/cases/cth/HCA/1945/1.html and then save the information related to that case.

Is there a way to do this recursively in Scrapy and save the content as HTML pages?

Ashmit asked Dec 21 '25 06:12

1 Answer

Yes, you can do it with Scrapy; Link Extractors will help:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector


class AustliiSpider(CrawlSpider):
    name = "austlii"
    allowed_domains = ["austlii.edu.au"]
    start_urls = ["http://www.austlii.edu.au/au/cases/cth/HCA/1945/"]

    # Follow every link that matches an individual case page.
    # Note the escaped dot in the pattern: an unescaped dot in
    # r"\d+.html" would match any character.
    rules = (
        Rule(SgmlLinkExtractor(allow=r"au/cases/cth/HCA/1945/\d+\.html"),
             follow=True, callback='parse_item'),
    )

    def parse_item(self, response):
        # hxs is available if you want to extract only the
        # case-related markup instead of the whole page
        hxs = HtmlXPathSelector(response)

        # Save the page HTML locally, naming the file after the
        # last segment of the URL (e.g. "1.html")
        filename = response.url.split("/")[-1]
        with open(filename, "wb") as f:
            f.write(response.body)
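
Note that scrapy.contrib and SgmlLinkExtractor were deprecated and later removed; on Scrapy 1.0 or newer, the equivalent spider uses scrapy.spiders.CrawlSpider and scrapy.linkextractors.LinkExtractor. A minimal sketch, assuming a modern Scrapy install (same spider name and file-saving logic as above):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class AustliiSpider(CrawlSpider):
    name = "austlii"
    allowed_domains = ["austlii.edu.au"]
    start_urls = ["http://www.austlii.edu.au/au/cases/cth/HCA/1945/"]
    rules = (
        Rule(LinkExtractor(allow=r"au/cases/cth/HCA/1945/\d+\.html"),
             follow=True, callback="parse_item"),
    )

    def parse_item(self, response):
        # Same idea: write the raw page bytes to a local HTML file
        filename = response.url.split("/")[-1]
        with open(filename, "wb") as f:
            f.write(response.body)

Either version can be run from inside a Scrapy project with the usual command: scrapy crawl austlii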

Hope that helps.

alecxe answered Dec 24 '25 10:12