I have this code for the Scrapy framework:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.contrib.spiders import Rule
from scrapy.linkextractors import LinkExtractor
from lxml import html

class Scrapy1Spider(scrapy.Spider):
    name = "scrapy1"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = (
        'http://sfbay.craigslist.org/search/npo',
    )
    rules = (Rule(LinkExtractor(allow=(), restrict_xpaths=('//a[@class="button next"]',)), callback="parse", follow=True),)

    def parse(self, response):
        site = html.fromstring(response.body_as_unicode())
        titles = site.xpath('//div[@class="content"]/p[@class="row"]')
        print len(titles), 'AAAA'
But the problem is that I only get 100 results; it doesn't go to the next pages. What is wrong here?
Run the code with scrapy crawl scrapy1 -o next_page.json and check the result.
Scrapy is an application framework for writing web spiders that crawl web sites and extract data from them. Scrapy provides a built-in mechanism for extracting data (called selectors) but you can easily use BeautifulSoup (or lxml) instead, if you feel more comfortable working with them.
Spiders are classes which define how a certain site (or a group of sites) will be scraped, including how to perform the crawl (i.e. follow links) and how to extract structured data from their pages (i.e. scraping items).
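For instance, the lxml round-trip in the code above is unnecessary; the same XPath query can run on Scrapy's built-in selectors directly on the response. A minimal sketch of the parse method, behavior otherwise unchanged:

    def parse(self, response):
        # response.xpath uses Scrapy's built-in selectors; no lxml needed
        titles = response.xpath('//div[@class="content"]/p[@class="row"]')
        print len(titles), 'AAAA'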
Your rule is not used because you don't use a CrawlSpider. So you have to create the next page requests manually, like so:
# -*- coding: utf-8 -*-
import scrapy
from lxml import html

class Scrapy1Spider(scrapy.Spider):
    name = "craiglist"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = (
        'http://sfbay.craigslist.org/search/npo',
    )

    def parse(self, response):
        site = html.fromstring(response.body_as_unicode())
        titles = site.xpath('//div[@class="content"]/p[@class="row"]')
        print len(titles), 'AAAA'

        # follow the next page link; the new request is handled by this
        # same method, since scrapy.Request defaults to self.parse
        next_page = response.xpath('.//a[@class="button next"]/@href').extract()
        if next_page:
            next_href = next_page[0]
            next_page_url = 'http://sfbay.craigslist.org' + next_href
            request = scrapy.Request(url=next_page_url)
            yield request
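As a side note, the string concatenation above works for this site, but Scrapy responses also offer urljoin, which resolves relative links against the current page URL. A minimal sketch of the same next-page logic using it, assuming a recent Scrapy version where extract_first and response.urljoin are available:

        # replaces the tail of parse(); urljoin handles relative hrefs
        next_href = response.xpath('.//a[@class="button next"]/@href').extract_first()
        if next_href:
            yield scrapy.Request(url=response.urljoin(next_href))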
Or use the CrawlSpider, like so:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from lxml import html

class Scrapy1Spider(CrawlSpider):
    name = "craiglist"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = (
        'http://sfbay.craigslist.org/search/npo',
    )
    rules = (Rule(LinkExtractor(allow=(), restrict_xpaths=('//a[@class="button next"]',)), callback="parse_page", follow=True),)

    def parse_page(self, response):
        site = html.fromstring(response.body_as_unicode())
        titles = site.xpath('//div[@class="content"]/p[@class="row"]')
        print len(titles), 'AAAA'
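The callback is named parse_page rather than parse on purpose: CrawlSpider implements parse itself to drive the rules, so overriding it breaks the crawl, which is another reason the callback="parse" in the question's rule could never have worked. Also, to actually collect results in the -o output file, yield items instead of printing. A minimal sketch, where the inner XPath and the title field name are just assumptions about the page layout:

    def parse_page(self, response):
        # one item per result row; built-in selectors, no lxml needed
        for row in response.xpath('//div[@class="content"]/p[@class="row"]'):
            yield {'title': row.xpath('.//a/text()').extract_first()}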