I'm trying to make a request with a `site_search_url` parameter, but I get the following error when I run the spider:
start_requests = iter(self.spider.start_requests())
TypeError: 'NoneType' object is not iterable
code:

    class BrickSetSpider(scrapy.Spider):
        def __init__(self, site_search_url):
            self.site_search_url = site_search_url

        def start_requests(self):
            se_base = 'http://www.se.com/search?q=site:'
            start_urls = [se_base + self.site_search_url]

        def parse(self, response):
            yield scrapy.Request(
                response.urljoin(next_page),
                callback=self.parse
            )
What am I doing wrong here?
Thank you
The Python "TypeError: 'NoneType' object is not iterable" occurs when you try to iterate over a `None` value. Objects like lists, tuples, and strings are iterable, but `None` is not. To solve the error, figure out where the variable was assigned `None` and correct the assignment, check that it is not `None` before iterating, or fall back to an empty list when it is.
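The guards described above can be sketched in plain Python (`get_items` here is a hypothetical function standing in for any call that may return `None`):

```python
def get_items():
    # Stands in for any call that sometimes returns None instead of a list.
    return None

items = get_items()

# Option 1: check for None before iterating.
if items is not None:
    for item in items:
        print(item)

# Option 2: fall back to an empty iterable so the loop simply does nothing.
for item in items or []:
    print(item)
```

Both options make the loop safe; the second is shorter but also swallows other falsy values such as an empty string, so the explicit `is not None` check is the clearer choice when in doubt.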
Your `start_requests` returns nothing, which in Python means it returns `None`, while it should return an iterable of `Request` objects. In your case the easiest fix is to populate `start_urls` in `__init__` and not override `start_requests`:
    class BrickSetSpider(scrapy.Spider):
        se_base = 'http://www.se.com/search?q=site:'

        def __init__(self, site_search_url, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.start_urls = [self.se_base + site_search_url]

        def parse(self, response):
            # next_page must be extracted from the response first, e.g.
            # next_page = response.css('a.next::attr(href)').get()
            # (adjust the selector to the site you are scraping)
            yield scrapy.Request(
                response.urljoin(next_page),
                callback=self.parse
            )