I want to use the Python Scrapy module to scrape all the URLs from my website and write the list to a file. I looked through the examples but didn't see a simple one that does this.
You have to write a crawler (spider) and run it against your site. A spider goes through a web page, downloading its text and metadata, and follows the links it finds.
Here's the Python program that worked for me:
import scrapy
from scrapy.http import Request

DOMAIN = 'example.com'
URL = 'http://%s' % DOMAIN

class MySpider(scrapy.Spider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [URL]

    def parse(self, response):
        # Pull every href out of the page
        for url in response.xpath('//a/@href').extract():
            # Resolve relative links against the page they were found on
            if not (url.startswith('http://') or url.startswith('https://')):
                url = response.urljoin(url)
            print(url)
            yield Request(url, callback=self.parse)
Save this in a file called spider.py.
You can then use a shell pipeline to post-process the output:
bash$ scrapy runspider spider.py > urls.out
bash$ cat urls.out | grep 'example.com' | sort | uniq | grep -v '#' | grep -v 'mailto' > example.urls
This gives me a list of all the unique URLs on my site.
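If you would rather have Scrapy write the file for you instead of redirecting stdout, another option is to yield each URL as an item and let Scrapy's feed exports do the writing. This is only a sketch of that variation (the dict item and the url-dump name are mine, not part of the answer above):

import scrapy

DOMAIN = 'example.com'

class URLDumpSpider(scrapy.Spider):
    name = 'url-dump'
    allowed_domains = [DOMAIN]
    start_urls = ['http://%s' % DOMAIN]

    def parse(self, response):
        for href in response.xpath('//a/@href').extract():
            url = response.urljoin(href)
            # Yield the URL as an item so the feed exporter writes it to the output file
            yield {'url': url}
            # Keep crawling; requests outside allowed_domains are dropped automatically
            yield scrapy.Request(url, callback=self.parse)

Running it with "scrapy runspider spider.py -o urls.csv" should produce one row per yielded URL; you would still pipe the file through sort | uniq afterwards, since every occurrence of a link is yielded.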
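On a recent Scrapy release you can also let CrawlSpider and LinkExtractor handle the link-following for you; they take care of relative URLs and staying on the allowed domain. This is just a sketch of that approach (the class, spider, and file names are my own), not part of the answer above:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

DOMAIN = 'example.com'

class SiteLinkSpider(CrawlSpider):
    name = 'site-links'
    allowed_domains = [DOMAIN]
    start_urls = ['http://%s' % DOMAIN]

    # Follow every link on the domain and hand each fetched page to parse_page
    rules = (
        Rule(LinkExtractor(allow_domains=[DOMAIN]), callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        # One record per page actually visited; Scrapy's duplicate filter skips repeats
        yield {'url': response.url}

Note the difference: this lists the URL of every page the spider visits rather than every href it sees, so the output is already deduplicated and can be written straight to a file with something like "scrapy runspider sitelinks.py -o example.urls.csv".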