
Scrapy read list of URLs from file to scrape?

Tags:

python

scrapy

I've just installed Scrapy and followed their simple dmoz tutorial, which works. I looked up basic file handling for Python and tried to get the crawler to read a list of URLs from a file, but got some errors. This is probably wrong, but I gave it a shot. Would someone please show me an example of reading a list of URLs into Scrapy? Thanks in advance.

from scrapy.spider import BaseSpider

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    f = open("urls.txt")
    start_urls = f

    def parse(self, response):
        filename = response.url.split("/")[-2]
        open(filename, 'wb').write(response.body)
asked Dec 04 '11 by Anagio

1 Answer

You were pretty close.

f = open("urls.txt")
start_urls = [url.strip() for url in f.readlines()]
f.close()

Better still would be to use a context manager, which ensures the file is closed as expected:

with open("urls.txt", "rt") as f:
    start_urls = [url.strip() for url in f.readlines()]
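The file-reading piece can be checked in isolation. Below is a minimal standalone sketch (no Scrapy required; the sample URLs and the blank-line filter are illustrative additions): it writes a urls.txt, then reads it back the same way the spider would:

```python
# Standalone sketch of the urls.txt reading pattern (no Scrapy needed).
# First create a sample file, then read it back as the spider would.
with open("urls.txt", "wt") as f:
    f.write("http://www.dmoz.org/Computers/\n")
    f.write("http://www.dmoz.org/Science/\n")
    f.write("\n")  # a stray blank line, which we filter out below

with open("urls.txt", "rt") as f:
    # strip() removes the trailing newline; the "if" skips empty lines
    start_urls = [url.strip() for url in f.readlines() if url.strip()]

print(start_urls)
# -> ['http://www.dmoz.org/Computers/', 'http://www.dmoz.org/Science/']
```

As a side note, newer Scrapy releases renamed BaseSpider to scrapy.Spider, so in current code the spider would subclass scrapy.Spider instead.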
answered Oct 05 '22 by Brian Cain