
Web Crawler - Ignore Robots.txt file?

Some servers have a robots.txt file in order to stop web crawlers from crawling through their websites. Is there a way to make a web crawler ignore the robots.txt file? I am using Mechanize for python.

Asked Dec 05 '11 by Craig Locke

2 Answers

The mechanize documentation includes this sample code:

import mechanize

br = mechanize.Browser()
# ...
# Ignore robots.txt.  Do not do this without thought and consideration.
br.set_handle_robots(False)

That does exactly what you want.
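For context, the check that `set_handle_robots(False)` disables is the same kind of robots.txt lookup that Python's standard-library `urllib.robotparser` performs. Here is a minimal, mechanize-free sketch of that check; the robots.txt rules and example URLs are made up for illustration:

```python
# Sketch of a robots.txt check using the standard library only.
# The rules and URLs below are hypothetical examples.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler consults can_fetch() before each request.
print(rp.can_fetch("*", "http://example.com/public/page"))   # True
print(rp.can_fetch("*", "http://example.com/private/page"))  # False
```

Disabling the handler in mechanize simply means this consultation never happens, so every URL is fetched regardless of what the site's robots.txt says.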

Answered Sep 18 '22 by David Heffernan


This looks like what you need:

from mechanize import Browser
br = Browser()

# Ignore robots.txt
br.set_handle_robots(False)

…but make sure you know what you're doing.

Answered Sep 21 '22 by eumiro