I am using Scrapy to crawl some old sites that I own, with the code below as my spider. I don't mind having a file written out for each page, or a database holding all of the content. But I need the spider to crawl the whole site on its own, without me having to list every single URL the way I currently do.
import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["www.example.com"]
    start_urls = [
        "http://www.example.com/contactus"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
All we have to do is tell the spider to follow a link if it exists. First, we define a selector for the "next page" link, extract the first match, and check that it exists. The scrapy.Request we return is a value that says "hey, crawl this page too", and callback=self.parse tells Scrapy which method should handle the response.
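A minimal sketch of that idea inside the parse method, assuming a hypothetical 'a.next' selector for the "next page" link (adjust it to match your own markup):

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
        # Look for a "next page" link and take the first match, if any.
        next_page = response.css('a.next::attr(href)').get()
        if next_page is not None:
            # Returning a Request says "crawl this page too"; callback=self.parse
            # means the response comes back through this same method.
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)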
Scrapy is a more robust, feature-complete, extensible, and more actively maintained web scraping tool. Scrapy lets you crawl, extract, and store a full website. BeautifulSoup, on the other hand, only lets you parse HTML and extract the information you're looking for.
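As a rough comparison, a minimal BeautifulSoup sketch (assuming the requests and bs4 packages are installed) only parses a single page you have already fetched yourself; any crawling logic is entirely up to you:

    import requests
    from bs4 import BeautifulSoup

    # Fetch one page manually -- BeautifulSoup does not crawl anything on its own.
    html = requests.get("http://www.example.com/contactus").text
    soup = BeautifulSoup(html, "html.parser")

    # Extract the links; queueing and fetching them would be your own code.
    for link in soup.find_all("a"):
        print(link.get("href"))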
To crawl a whole site you should use CrawlSpider instead of scrapy.Spider.
Here's an example. For your purposes, try something like this:
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # An empty LinkExtractor matches every link on the page; follow=True
        # keeps the spider crawling from each page it finds, and every
        # response is passed to parse_item.
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
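Assuming the spider lives inside a Scrapy project, you would run it from the project directory with something like scrapy crawl example.com, and an .html file should be written for every page the LinkExtractor discovers within allowed_domains.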
Also, take a look at this article