I currently have health monitoring implemented for a public-facing website, using the SimpleMailWebEventProvider to send emails when errors happen ("All Errors").
I am hoping someone with experience in this can show me an easy way to prevent emails from being sent for "A potentially dangerous Request.Path value was detected from the client (:)" errors. I can tell these errors come from a bot rather than a human, both by their timing (they all arrive at once) and by the URL being requested:
Request path: /Scripts/,data:c,complete:function(a,b,c){c=a.responseText,a.isResolved()&&(a.done(function(a){c=a}),i.html(g
I like the fact that .NET throws an error in these cases, but these emails account for probably 90% of all health-monitoring emails I get, and reading through all of them to find the ones that indicate a real code problem with the website is a hassle.
I would like to avoid creating my own mail event provider. I have done that in the past, but I believe I ended up having to use ILSpy to write my own because SimpleMailWebEventProvider is sealed.
To filter out exceptions caused by robots, I usually call Server.ClearError() in the Application_Error handler in Global.asax; this prevents health monitoring from processing the unhandled exception at all. Note, however, that if you use health monitoring with the event log, this will also prevent those errors from appearing in the event log.
void Application_Error(object sender, EventArgs e)
{
    // Swallow "potentially dangerous Request.Path" errors so health
    // monitoring never sees them and no email is sent.
    var exception = Server.GetLastError();
    if (exception is HttpException &&
        exception.Message.Contains("A potentially dangerous Request.Path value was detected from the client"))
    {
        Server.ClearError();
    }
}
In a real app, I think it makes sense to add some extra conditions to make sure the error really does come from a robot, such as checking the IP address, the URL, and so on.
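As a sketch of what those extra conditions might look like, the filter can be pulled out into a small helper so the heuristics stay in one place. The helper name, the `responseText` URL-fragment check, and the empty User-Agent check below are all illustrative assumptions, not something from the original setup; tune them to whatever your own logs show the bots doing:

```csharp
using System;
using System.Web;

public static class BotErrorFilter
{
    // Hypothetical helper: returns true when an unhandled exception looks
    // like bot noise rather than a genuine bug in the site.
    public static bool ShouldSuppress(Exception exception, string rawUrl, string userAgent)
    {
        var httpException = exception as HttpException;
        if (httpException == null)
            return false;

        bool isDangerousPath = httpException.Message.Contains(
            "A potentially dangerous Request.Path value was detected from the client");

        // Illustrative bot heuristics: JavaScript source requested as a
        // path (as in the example above), or a missing User-Agent header.
        bool looksLikeBot =
            (rawUrl != null && rawUrl.Contains("responseText")) ||
            string.IsNullOrEmpty(userAgent);

        return isDangerousPath && looksLikeBot;
    }
}
```

Global.asax then stays trivial:

```csharp
void Application_Error(object sender, EventArgs e)
{
    if (BotErrorFilter.ShouldSuppress(Server.GetLastError(), Request.RawUrl, Request.UserAgent))
    {
        Server.ClearError();
    }
}
```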