I'm looking into building a content site with possibly thousands of different entries, accessible by index and by search.
What are the measures I can take to prevent malicious crawlers from ripping off all the data from my site? I'm less worried about SEO, although I wouldn't want to block legitimate crawlers altogether.
For example, I thought about randomly changing small bits of the HTML structure used to display my data, but I guess it wouldn't really be effective.
What is a robots.txt file used for? You can use a robots.txt file for web pages (HTML, PDF, or other non-media formats that Google can read) to manage crawling traffic if you think your server will be overwhelmed by requests from Google's crawler, or to avoid crawling unimportant or similar pages on your site.
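For illustration, a minimal robots.txt along those lines might look like the following (the /search/ and /entries/ paths are assumed examples standing in for the search and index pages described in the question):

```
# Keep well-behaved crawlers out of the search endpoint, which would
# otherwise generate endless near-duplicate pages and extra server load.
User-agent: *
Disallow: /search/
# The entry pages themselves stay crawlable so legitimate search
# engines can still index them.
Allow: /entries/
```

Keep in mind that robots.txt is purely advisory: well-behaved crawlers obey it, but a malicious scraper will simply ignore it, so it only addresses the "legitimate crawler" half of the question.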
Any site that is visible to human eyes is, in theory, potentially rippable. If you're going to even try to be accessible then this, by definition, must be the case (how else would speaking browsers be able to deliver your content if it weren't machine-readable?).
Your best bet is to look into watermarking your content, so that at least if it does get ripped you can point to the watermarks and claim ownership.
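If you do go the watermarking route, one lightweight approach is to embed a per-request identifier invisibly in the served pages, so that a ripped copy can later be matched back to the request or account that fetched it. A minimal sketch, assuming a zero-width-character scheme of my own invention rather than any standard library:

```python
# Sketch: hide a per-request identifier in the page text using zero-width
# characters, so a scraped copy can later be matched to the request that
# produced it. Purely illustrative; real watermarking schemes are more robust.
ZERO = "\u200b"  # zero-width space      -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1

def encode_watermark(identifier: str) -> str:
    """Turn an identifier into an invisible zero-width bit string."""
    bits = "".join(f"{byte:08b}" for byte in identifier.encode("utf-8"))
    return "".join(ONE if bit == "1" else ZERO for bit in bits)

def decode_watermark(text: str) -> str:
    """Recover the identifier from text that contains the watermark."""
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))
    return data.decode("utf-8", errors="replace")

def watermark_html(html: str, identifier: str) -> str:
    """Append the invisible watermark just before the closing body tag."""
    return html.replace("</body>", encode_watermark(identifier) + "</body>")

# Example: tag a page with a request-specific ID, then recover it later.
page = watermark_html("<html><body><p>Entry 42</p></body></html>", "req-7f3a")
assert decode_watermark(page) == "req-7f3a"
```

A determined scraper can strip zero-width characters or re-encode the text, so treat this as an ownership and tracing aid, as suggested above, rather than as protection against ripping.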
Between this:
What are the measures I can take to prevent malicious crawlers from ripping
and this:
I wouldn't want to block legitimate crawlers altogether.
you're asking for a lot. Fact is, if you're going to try and block malicious scrapers, you're going to end up blocking all the "good" crawlers too.
You have to remember that if people want to scrape your content, they're going to put in a lot more manual effort than a search engine bot will... so get your priorities right. You've two choices: accept that scraping is going to happen and keep the site open to legitimate crawlers, or lock the content down and lose the good crawlers along with the bad ones.
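If you keep the site open but still want to throttle unknown bots, you can at least verify self-identified crawlers before trusting them. Below is a sketch of the reverse-then-forward DNS check that Google documents for verifying Googlebot; the helper name is my own and it isn't tied to any particular web framework:

```python
# Sketch: confirm that a request claiming to be Googlebot really comes from
# Google before whitelisting it, using a reverse DNS lookup followed by a
# forward lookup to confirm the name resolves back to the same address.
import socket

def is_real_googlebot(ip_address: str) -> bool:
    try:
        # Reverse lookup: the host name should end in googlebot.com or google.com.
        host, _, _ = socket.gethostbyaddr(ip_address)
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward lookup: the name must resolve back to the original IP,
        # otherwise the PTR record could simply be spoofed.
        _, _, addresses = socket.gethostbyname_ex(host)
        return ip_address in addresses
    except socket.error:
        return False

# Anything that fails this check can be rate-limited or blocked without
# fear of hurting indexing by the crawler it claims to be.
```

The same pattern works for other major crawlers that publish their reverse-DNS domains; everything else gets whatever rate limit or block you're comfortable with.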