 

Security: Block bad spiders and bots from accessing a website using .htaccess and HTTP_USER_AGENT

When building an .htaccess rule to block common spiders and bots, which HTTP_USER_AGENT values should be filtered?

 # Requires mod_rewrite.
 RewriteEngine On
 # Each condition is ORed with the next; the final rule returns 403 Forbidden on any match.
 RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:[email protected] [OR]
 RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Custo [OR]
 RewriteCond %{HTTP_USER_AGENT} ^DISCo [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [OR]
 RewriteCond %{HTTP_USER_AGENT} ^eCatch [OR]
 RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [OR]
 RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [OR]
 RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [OR]
 RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [OR]
 RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [OR]
 RewriteCond %{HTTP_USER_AGENT} ^FlashGet [OR]
 RewriteCond %{HTTP_USER_AGENT} ^GetRight [OR]
 RewriteCond %{HTTP_USER_AGENT} ^GetWeb! [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [OR]
 RewriteCond %{HTTP_USER_AGENT} ^GrabNet [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Grafula [OR]
 RewriteCond %{HTTP_USER_AGENT} ^HMView [OR]
 RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [OR]
 RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^InterGET [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [OR]
 RewriteCond %{HTTP_USER_AGENT} ^JetCar [OR]
 RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [OR]
 RewriteCond %{HTTP_USER_AGENT} ^larbin [OR]
 RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [OR]
 RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Navroad [OR]
 RewriteCond %{HTTP_USER_AGENT} ^NearSite [OR]
 RewriteCond %{HTTP_USER_AGENT} ^NetAnts [OR]
 RewriteCond %{HTTP_USER_AGENT} ^NetSpider [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [OR]
 RewriteCond %{HTTP_USER_AGENT} ^NetZIP [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Octopus [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [OR]
 RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [OR]
 RewriteCond %{HTTP_USER_AGENT} ^pavuk [OR]
 RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [OR]
 RewriteCond %{HTTP_USER_AGENT} ^RealDownload [OR]
 RewriteCond %{HTTP_USER_AGENT} ^ReGet [OR]
 RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [OR]
 RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [OR]
 RewriteCond %{HTTP_USER_AGENT} ^SuperBot [OR]
 RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Surfbot [OR]
 RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [OR]
 RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebAuto [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebCopier [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebFetch [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebReaper [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebSauger [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebStripper [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebZIP [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Widow [OR]
 RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [OR]
 RewriteCond %{HTTP_USER_AGENT} ^Zeus
 ## Return 403 Forbidden error.
 RewriteRule .* - [F]
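
As an aside, the long list above is often collapsed into a single condition with an alternation, which is easier to maintain. A minimal sketch using a few names from the list (note it matches anywhere in the user agent string rather than only at the start, and would need to be extended with the remaining entries):

 RewriteEngine On
 # Case-insensitive match against any of the listed bad bots.
 RewriteCond %{HTTP_USER_AGENT} (BlackWidow|HTTrack|WebZIP|WebStripper|Zeus) [NC]
 # Return 403 Forbidden and stop processing further rules.
 RewriteRule .* - [F,L]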
Asked by WooDzu on Aug 18 '11

People also ask

How to block crawlers and bots?

Do not combine a Disallow rule in robots.txt with a noindex tag on the same page. If you block crawling with robots.txt, Google cannot fetch the page to see the noindex, so it may still index the URL anyway. Use one method or the other, not both.
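
For illustration, a minimal sketch of the two mechanisms (the /private/ path is only a placeholder): block crawling of a path in robots.txt

 User-agent: *
 Disallow: /private/

or leave it crawlable and block indexing with a meta tag on the page itself

 <meta name="robots" content="noindex">

but not both at once, for the reason above.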

How do I block PetalBot?

PetalBot complies with the robots exclusion protocol. You can use a robots.txt file to prevent PetalBot from accessing your website entirely, or to prevent it from accessing specific files on your website.
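
A minimal robots.txt sketch for this, assuming PetalBot is the user agent token you want to match (the /images/ path in the second variant is only a placeholder): to block the whole site

 User-agent: PetalBot
 Disallow: /

or to block just part of it

 User-agent: PetalBot
 Disallow: /images/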


1 Answer

This is how I do it:

# Flag each bad bot by setting the bad_bot environment variable
# (SetEnvIfNoCase makes the user agent match case-insensitive).
SetEnvIfNoCase User-Agent "^BlackWidow" bad_bot
# ...and all the others from your list...
SetEnvIfNoCase User-Agent "^Zeus" bad_bot

# Deny any request that arrives with bad_bot set (Apache 2.2 access-control syntax).
<FilesMatch ".*">
    Order Allow,Deny
    Allow from all
    Deny from env=bad_bot
</FilesMatch>
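
Note that Order/Allow/Deny is Apache 2.2 syntax and only works on Apache 2.4 if mod_access_compat is loaded. A sketch of the 2.4 equivalent, assuming the same bad_bot variable, would be:

<RequireAll>
    # Allow everyone except requests flagged as bad_bot.
    Require all granted
    Require not env bad_bot
</RequireAll>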
Answered by Gino Sullivan on Nov 13 '22