Let's say I have a dynamic page that creates URLs from user input.
For example: www.XXXXXXX.com/browse ("browse" being the page)
Every time a user enters a query, it generates more pages.
For example: www.XXXXXXX.com/browse/abcd ("abcd" being the new page)
Now, I want Google to crawl this "browse" page but not the sub-pages it generates.
I'm thinking of adding this to my robots.txt file: "Disallow: /browse/"
Would that be the right thing to do, or will it also prevent Googlebot from crawling the "browse" page? What should I do to get the optimal result?
Your URL doesn't end with a slash:
www.XXXXXXX.com/browse
A Disallow rule blocks every URL whose path starts with the given prefix. /browse/abcd starts with /browse/, but /browse itself does not, so the generated sub-pages are blocked while the "browse" page stays crawlable. Therefore these rules should work:
User-agent: *
Disallow: /browse/
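If you want to double-check that matching behaviour before deploying, a quick local sanity check with Python's built-in robots.txt parser would look like the sketch below (the placeholder domain is just the one from the question):

from urllib.robotparser import RobotFileParser

# The proposed rules, parsed locally instead of being fetched from the live site.
rules = [
    "User-agent: *",
    "Disallow: /browse/",
]

rp = RobotFileParser()
rp.parse(rules)

# /browse itself is still allowed; anything under /browse/ is blocked.
print(rp.can_fetch("Googlebot", "https://www.XXXXXXX.com/browse"))       # True
print(rp.can_fetch("Googlebot", "https://www.XXXXXXX.com/browse/abcd"))  # False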