I'm trying to follow this tutorial.
I want my desc field to be a single string, normalized to single spaces and in uppercase.
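For example (made-up text), this is the transformation I'm after:

raw = ['\r\n\t  Book in progress,   full text. \r\n', '  Asks for feedback.  ']
# collapse runs of whitespace to single spaces, join the pieces, uppercase the result
desired = ' '.join(' '.join(s.split()) for s in raw).upper()
# 'BOOK IN PROGRESS, FULL TEXT. ASKS FOR FEEDBACK.'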
dmoz_spider.py
import scrapy

from tutorial.items import DmozItem


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item
I tried declaring input/output processors according to http://doc.scrapy.org/en/latest/topics/loaders.html#declaring-input-and-output-processors
items.py
import scrapy
from scrapy.loader.processors import MapCompose, Join


class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field(
        input_processor=MapCompose(
            lambda x: ' '.join(x.split()),
            lambda x: x.upper()
        ),
        output_processor=Join()
    )
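If I run the processors by hand on made-up input, they seem to do exactly what I want:

from scrapy.loader.processors import MapCompose, Join

proc_in = MapCompose(lambda x: ' '.join(x.split()), lambda x: x.upper())
proc_out = Join()
print(proc_out(proc_in(['  Text   Processing ', 'in   Python '])))
# 'TEXT PROCESSING IN PYTHON'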
However, when I run the spider, my output still turns out like this:
{'desc': ['\r\n\t\r\n ',
' \r\n'
'\t\t\t\r\n'
' - By David Mertz; Addison Wesley. '
'Book in progress, full text, ASCII format. Asks for feedback. '
'[author website, Gnosis Software, Inc.]\r\n'
' \r\n'
' ',
'\r\n '],
'link': ['http://gnosis.cx/TPiP/'],
'title': ['Text Processing in Python']}
What am I doing wrong?
I'm using Python 3.5.1 and Scrapy 1.1.0
I put up my entire code here: https://github.com/prashcr/scrapy_tutorial, so that you can try and modify it as you wish.
The loaders documentation you linked does say: "However, there is one more place where you can specify the input and output processors to use: in the Item Field metadata."
I suspect the documentation is misleading/wrong (or possibly out of date), because, according to the source code, the input_processor field attribute is only read inside an ItemLoader instance, which means that you need to use an Item Loader anyway.
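A quick way to see this (assuming the DmozItem from your items.py): assigning the extracted values to the item directly, as your parse() does, just stores them verbatim; nothing declared in the Field metadata is invoked.

item = DmozItem()
item['desc'] = ['\r\n\t ', ' - By David Mertz ']
print(item['desc'])
# ['\r\n\t ', ' - By David Mertz ']  -- stored as-is, no processors applied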
You can use a built-in one and leave your DmozItem definition as is:
from scrapy.loader import ItemLoader


class DmozSpider(scrapy.Spider):
    # ...

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            loader = ItemLoader(DmozItem(), selector=sel)
            loader.add_xpath('title', 'a/text()')
            loader.add_xpath('link', 'a/@href')
            loader.add_xpath('desc', 'text()')
            yield loader.load_item()
This way the input_processor and output_processor Item Field arguments would be taken into account and the processors would be applied.
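With the loader in place, the item from your sample page should come out roughly like this (apart from possible stray spaces contributed by whitespace-only text nodes; title and link stay lists because no processors are declared for them):

{'desc': '- BY DAVID MERTZ; ADDISON WESLEY. BOOK IN PROGRESS, FULL TEXT, ASCII FORMAT. ASKS FOR FEEDBACK. [AUTHOR WEBSITE, GNOSIS SOFTWARE, INC.]',
 'link': ['http://gnosis.cx/TPiP/'],
 'title': ['Text Processing in Python']}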
Or you can define the processors inside a custom Item Loader instead of the Item class:
import scrapy
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose, Join


class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()


class MyItemLoader(ItemLoader):
    desc_in = MapCompose(
        lambda x: ' '.join(x.split()),
        lambda x: x.upper()
    )
    desc_out = Join()
And use it to load items in your spider:
def parse(self, response):
    for sel in response.xpath('//ul/li'):
        loader = MyItemLoader(DmozItem(), selector=sel)
        loader.add_xpath('title', 'a/text()')
        loader.add_xpath('link', 'a/@href')
        loader.add_xpath('desc', 'text()')
        yield loader.load_item()