
How do I use the Python Scrapy module to list all the URLs from my website?


I want to use the Python Scrapy module to scrape all the URLs from my website and write the list to a file. I looked in the examples but didn't see any simple example to do this.

Asked Mar 05 '12 by Adam F


1 Answer

Here's the Python program that worked for me:

from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
from scrapy.http import Request

DOMAIN = 'example.com'
URL = 'http://%s' % DOMAIN

class MySpider(BaseSpider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [
        URL
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        for url in hxs.select('//a/@href').extract():
            if not (url.startswith('http://') or url.startswith('https://')):
                url = URL + url
            print url
            yield Request(url, callback=self.parse)

Save this in a file called spider.py.

You can then use a shell pipeline to post-process this output:

bash$ scrapy runspider spider.py > urls.out
bash$ cat urls.out | grep 'example.com' | sort | uniq | grep -v '#' | grep -v 'mailto' > example.urls

This gives me a list of all the unique URLs on my site.

Answered Sep 30 '22 by Adam F
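
Note that the answer above targets an old Scrapy release: BaseSpider and HtmlXPathSelector have since been removed, and print url is Python 2 syntax. A rough, untested sketch of the same idea against the current Scrapy API (scrapy.Spider, response.xpath, response.urljoin), still using the placeholder domain example.com, would look like this:

import scrapy

DOMAIN = 'example.com'  # placeholder; substitute your own domain
URL = 'http://%s' % DOMAIN

class MySpider(scrapy.Spider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [URL]

    def parse(self, response):
        # Collect every href on the page and resolve relative links
        # against the URL of the current response.
        for href in response.xpath('//a/@href').getall():
            url = response.urljoin(href)
            if DOMAIN in url:
                yield {'url': url}                              # record on-site URLs as items
                yield scrapy.Request(url, callback=self.parse)  # and follow them recursively

Because the URLs are yielded as items, you can let Scrapy write the file directly with something like scrapy runspider spider.py -o urls.csv instead of redirecting stdout; duplicate requests are deduplicated by Scrapy's scheduler, but the item list may still contain repeats, so a sort | uniq pass can still be useful.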