As I started to learn Scrapy, I came across a requirement to dynamically build the Item attributes. I'm scraping a webpage that has a table structure, and I want to form the item and field attributes while crawling. I have gone through this example, Scraping data without having to explicitly define each field to be scraped, but couldn't make much of it.
Should I be writing an item pipeline to capture the info dynamically? I have also looked at the Item Loader functions, but if anyone can explain them in detail, it would be really helpful.
Some webpages show the desired data only when you load them in a web browser: Scrapy and other web-scraping libraries cannot run javascript, so when you download such a page with Scrapy you cannot reach the data with selectors. When this happens, first try to find the underlying data source (for example, the request the page's javascript makes) and extract the data from it directly. If that is not possible, you can drive a real browser such as Firefox, Chrome, or Safari through a webdriver, or use a headless browser like PhantomJS; you can also combine Scrapy with Selenium if needed, see: selenium with scrapy for dynamic page.
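A minimal sketch of driving a real browser from a spider, assuming the selenium package and a matching driver (e.g. geckodriver for Firefox) are installed; the spider name, URL, and CSS selectors are placeholders, not from the question:

import scrapy
from scrapy.selector import Selector
from selenium import webdriver

class JsTableSpider(scrapy.Spider):
    name = "js_table"
    start_urls = ['http://example.com']

    def __init__(self, *args, **kwargs):
        super(JsTableSpider, self).__init__(*args, **kwargs)
        self.driver = webdriver.Firefox()  # or webdriver.Chrome(), etc.

    def parse(self, response):
        # Let the browser execute the page's javascript, then hand the
        # rendered HTML back to a Scrapy selector.
        self.driver.get(response.url)
        rendered = Selector(text=self.driver.page_source)
        for tr in rendered.css('table tr'):
            yield {'cells': tr.css('td::text').extract()}

    def closed(self, reason):
        # Called when the spider finishes; release the browser.
        self.driver.quit()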
While working with Scrapy, you first need to create a Scrapy project. In Scrapy, you always create a spider to fetch the data: move to the spiders folder of your project and create a Python file there, for example gfgfetch.py.
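A minimal sketch of what gfgfetch.py might contain, using the current scrapy.Spider base class; the start URL is a placeholder:

import scrapy

class GfgFetchSpider(scrapy.Spider):
    name = "gfgfetch"  # run with: scrapy crawl gfgfetch
    start_urls = ['https://www.geeksforgeeks.org/']

    def parse(self, response):
        # Yield something simple to confirm the spider runs.
        yield {'title': response.css('title::text').extract_first()}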
Just use a single Field as an arbitrary data placeholder. Then, when you want to get the data out, instead of iterating with for field in item, iterate with for field in item['row']. You don't need pipelines or loaders to accomplish this task, but they are both used extensively for good reason: they are worth learning (a small pipeline sketch follows the output below).
spider:
from scrapy.item import Item, Field
from scrapy.spider import BaseSpider

class TableItem(Item):
    # A single field that acts as an arbitrary data placeholder.
    row = Field()

class TestSpider(BaseSpider):
    name = "tabletest"
    start_urls = ('http://scrapy.org?finger', 'http://example.com/toe')

    def parse(self, response):
        item = TableItem()
        # Build the "columns" dynamically as plain dict keys.
        row = dict(
            foo='bar',
            baz=[123, 'test'],
        )
        row['url'] = response.url
        if 'finger' in response.url:
            row['digit'] = 'my finger'
            row['appendage'] = 'hand'
        else:
            row['foot'] = 'might be my toe'
        item['row'] = row
        return item
output:
stav@maia:/srv/stav/scrapie/oneoff$ scrapy crawl tabletest
2013-03-14 06:55:52-0600 [scrapy] INFO: Scrapy 0.17.0 started (bot: oneoff)
2013-03-14 06:55:52-0600 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'oneoff.spiders', 'SPIDER_MODULES': ['oneoff.spiders'], 'USER_AGENT': 'Chromium OneOff 24.0.1312.56 Ubuntu 12.04 (24.0.1312.56-0ubuntu0.12.04.1)', 'BOT_NAME': 'oneoff'}
2013-03-14 06:55:53-0600 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-03-14 06:55:53-0600 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-03-14 06:55:53-0600 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-03-14 06:55:53-0600 [scrapy] DEBUG: Enabled item pipelines:
2013-03-14 06:55:53-0600 [tabletest] INFO: Spider opened
2013-03-14 06:55:53-0600 [tabletest] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-03-14 06:55:53-0600 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-03-14 06:55:53-0600 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-03-14 06:55:53-0600 [tabletest] DEBUG: Crawled (200) <GET http://scrapy.org?finger> (referer: None)
2013-03-14 06:55:53-0600 [tabletest] DEBUG: Scraped from <200 http://scrapy.org?finger>
{'row': {'appendage': 'hand',
'baz': [123, 'test'],
'digit': 'my finger',
'foo': 'bar',
'url': 'http://scrapy.org?finger'}}
2013-03-14 06:55:53-0600 [tabletest] DEBUG: Redirecting (302) to <GET http://www.iana.org/domains/example/> from <GET http://example.com/toe>
2013-03-14 06:55:53-0600 [tabletest] DEBUG: Redirecting (302) to <GET http://www.iana.org/domains/example> from <GET http://www.iana.org/domains/example/>
2013-03-14 06:55:53-0600 [tabletest] DEBUG: Crawled (200) <GET http://www.iana.org/domains/example> (referer: None)
2013-03-14 06:55:53-0600 [tabletest] DEBUG: Scraped from <200 http://www.iana.org/domains/example>
{'row': {'baz': [123, 'test'],
'foo': 'bar',
'foot': 'might be my toe',
'url': 'http://www.iana.org/domains/example'}}
2013-03-14 06:55:53-0600 [tabletest] INFO: Closing spider (finished)
2013-03-14 06:55:53-0600 [tabletest] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1066,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 4,
'downloader/response_bytes': 3833,
'downloader/response_count': 4,
'downloader/response_status_count/200': 2,
'downloader/response_status_count/302': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2013, 3, 14, 12, 55, 53, 848735),
'item_scraped_count': 2,
'log_count/DEBUG': 13,
'log_count/INFO': 4,
'response_received_count': 2,
'scheduler/dequeued': 4,
'scheduler/dequeued/memory': 4,
'scheduler/enqueued': 4,
'scheduler/enqueued/memory': 4,
'start_time': datetime.datetime(2013, 3, 14, 12, 55, 53, 99635)}
2013-03-14 06:55:53-0600 [tabletest] INFO: Spider closed (finished)
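If you later want to walk the dynamically built fields, iterate over item['row'] rather than over the item itself. A simple pipeline that consumes these rows might look like this (the class name is hypothetical; enable it via the ITEM_PIPELINES setting):

class RowPrinterPipeline(object):
    def process_item(self, item, spider):
        # Iterate over the dict stored in the single 'row' field
        # instead of over the item's declared fields.
        for field, value in item['row'].items():
            spider.log('%s = %r' % (field, value))
        return item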