I have an ASP.NET MVC 4 web application that can be accessed from multiple different domains. The site is fully localized based on the domain in the request (similar in concept to this question).
I want to include a robots.txt file and I want to localize the robots.txt file based on the domain, but I am aware that I can only have one physical "robots.txt" text file in a site's file system directory.
What is the easiest/best way (and is it even possible) to use the ASP.NET MVC framework to serve a robots.txt file on a per-domain basis, so that the same site installation serves content to every domain, but the content of the robots file is localized to the requested domain?
The process is reasonably simple:

Controller/action approach

- Using your route table, map your robots.txt path to an action in a controller, delegating handling of every robots.txt request to that single action.
- In the controller action, check the domain of the incoming request and choose your robots.txt content for that domain.
- Return the appropriate content from disk using either:
  - a ContentResult with the ContentType set to "text/plain", or
  - a FilePathResult, if your robots files are just files on disk, through one of the helper methods on the Controller class such as File(name, "text/plain").
The following sample assumes a single top level robots.txt file:
```csharp
// In App_Start/RouteConfig:
public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
    routes.MapRoute(
        name: "robots",
        url: "robots.txt",
        defaults: new { controller = "Seo", action = "Robots" });
}

// The controller:
public class SeoController : Controller
{
    public ActionResult Robots()
    {
        // Pick the robots file based on the requesting domain.
        var robotsFile = "~/robots-default.txt";
        switch (Request.Url.Host.ToLower())
        {
            case "stackoverflow.com":
                robotsFile = "~/robots-so.txt";
                break;
            case "meta.stackoverflow.com":
                robotsFile = "~/robots-meta.txt";
                break;
        }
        return File(robotsFile, "text/plain");
    }
}
```
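If you would rather build the content in memory than keep separate files on disk, the same action can return a ContentResult instead. A minimal sketch; the Disallow rules here are invented placeholders, not directives from the original answer:

```csharp
// Alternative: choose the robots directives per domain and return them directly.
public ActionResult Robots()
{
    // Placeholder rules for illustration only.
    var content = Request.Url.Host.ToLower() == "stackoverflow.com"
        ? "User-agent: *\nDisallow: /admin/"
        : "User-agent: *\nDisallow: /";
    return Content(content, "text/plain");
}
```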
One of the easiest ways to get this to work is then to ensure that the routing module is called for all requests, using runAllManagedModulesForAllRequests in web.config (don't use this; see the next paragraph):
```xml
<system.webServer>
  <handlers>
    ...
  </handlers>
  <modules runAllManagedModulesForAllRequests="true" />
</system.webServer>
```
This is not a good thing in general, as all the static files (css, js, txt) now go through managed handlers before being diverted to the static file handler. IIS is really good at serving static files fast (a largely static file website will max out your disk I/O way before the CPU), so to avoid this performance hit the recommended approach is the web.config sample below. Note the similarity to the ExtensionlessUrlHandler-Integrated-4.0 handler in the Visual Studio MVC 4 template applications:
```xml
<system.webServer>
  <handlers>
    <add name="Robots-Integrated-4.0"
         path="/robots.txt" verb="GET"
         type="System.Web.Handlers.TransferRequestHandler"
         preCondition="integratedMode,runtimeVersionv4.0" />
    ... the original handlers ...
  </handlers>
  <modules runAllManagedModulesForAllRequests="false" />
</system.webServer>
```
The advantages of this type of approach become apparent once you start using it: for example, the robots content no longer has to live in static files and can be selected, or even generated, per request, and the selection logic is as easy to unit test as any other controller action. On the downside, every robots.txt request is now handled by the managed pipeline rather than by IIS's fast static file handler.
Remember also that different robots.txt files can be used for different subdirectories. This gets tricky with the route-and-controller approach, so the IHttpHandler approach (below) is easier for that situation.
You can also do this with a custom IHttpHandler registered in your web.config. I emphasise custom as this avoids the need to make ALL controllers see ALL requests (with runAllManagedModulesForAllRequests="true"), unlike adding a custom route handler into your route table.
This is also potentially a more lightweight approach than the controller, but you would have to have enormous site traffic to notice the difference. Its other benefit is a reusable piece of code you can use for all your sites. You could also add a custom configuration section to configure the robot user agent / domain name / path mappings to the robots files (a sketch of this idea follows the handler code below).
```xml
<system.webServer>
  <handlers>
    <add name="Robots" verb="*" path="/robots.txt"
         type="MyProject.RobotsHandler, MyAssembly"
         preCondition="managedHandler" />
  </handlers>
  <modules runAllManagedModulesForAllRequests="false" />
</system.webServer>
```
```csharp
using System.Web;

public class RobotsHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string domain = context.Request.Url.Host;

        // Set the response code, content type and appropriate robots file here;
        // also think about handling caching, sending error codes etc.
        context.Response.StatusCode = 200;
        context.Response.ContentType = "text/plain";

        // Return the robots content.
        context.Response.Write("my robots content");
    }
}
```
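As a minimal sketch of that configuration idea, the handler could read domain-to-file mappings from appSettings rather than a full custom configuration section. The "robots:" key prefix, class name and file paths below are all invented for illustration:

```csharp
using System.Configuration;
using System.Web;

public class ConfigurableRobotsHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }  // stateless, so instances can be reused
    }

    public void ProcessRequest(HttpContext context)
    {
        // Expects web.config entries such as:
        //   <add key="robots:stackoverflow.com" value="~/robots-so.txt" />
        string domain = context.Request.Url.Host.ToLower();
        string robotsPath = ConfigurationManager.AppSettings["robots:" + domain]
                            ?? "~/robots-default.txt";

        context.Response.StatusCode = 200;
        context.Response.ContentType = "text/plain";
        context.Response.TransmitFile(context.Server.MapPath(robotsPath));
    }
}
```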
To serve robots files for subdirectories as well as the site root, you can't easily use the controller approach; the handler approach is simpler in this scenario. The handler can be configured to pick up robots.txt requests to any subdirectory and handle them accordingly. You might then choose to return 404 for some directories, or a subsection of the robots file for others, as sketched below.
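A minimal sketch of that branching, assuming the handler is registered with path="robots.txt" (no leading slash) so that it matches the file name at any depth; the directory name and directives are invented for illustration:

```csharp
public void ProcessRequest(HttpContext context)
{
    string path = context.Request.Path.ToLower();

    // Invented policy: no robots file at all for one area of the site.
    if (path.StartsWith("/private/"))
    {
        context.Response.StatusCode = 404;
        return;
    }

    context.Response.ContentType = "text/plain";
    // Placeholder directives; a real handler would select per-directory content.
    context.Response.Write("User-agent: *\nDisallow: /private/");
}
```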
I specifically mention this here because the same approach can also be used for sitemap.xml files: serving different sitemaps for different sections of the site, multiple sitemaps that reference each other, and so on.
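For example, a second handler registration of the same shape could route sitemap requests to their own handler (the SitemapHandler type name here is hypothetical):

```xml
<add name="Sitemaps" verb="GET" path="sitemap.xml"
     type="MyProject.SitemapHandler, MyAssembly"
     preCondition="managedHandler" />
```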