 

Is the ASP.NET cryptographic vulnerability work around a BIG LIE?

This question is somewhat of a follow-up to How serious is this new ASP.NET security vulnerability and how can I work around it? If my question seems unclear, read over that question and its accepted answer first, then take that as the context for my question.

Can someone explain why returning the same error page and the same status code for custom errors matters? I find this immaterial, especially since it is advocated as part of the workaround.

Isn't it just as easy for the script/application executing this attack to ignore the HTTP status code and watch the outcome instead? I.e., on 4,000 attempts you get redirected to an error page, but on attempt 4,001 you stay on the same page because the padding wasn't invalidated?

I see why adding the delay to the error page is somewhat relevant, but doesn't this also just add another layer to fool the script into thinking the site is an invalid target?

What could be done to prevent this if the script takes into account that, since the site is ASP.NET and therefore using AES encryption, it should ignore the timing of error pages and watch the redirection (or lack of redirection) as the response vector? If a script does this, does that mean there's NO WAY to stop it?

Edit: I accept the timing-attack mitigation, but the error page part is what really seems bogus. This attack vector puts the attacker's data into the viewstate. There are only two cases: pass or fail.

Either they fail: they're on a page and the viewstate does not contain their data. No matter what you do here, there is no way to remove the fail case, because the page will simply never contain their inserted data unless they successfully cracked the key. This is why I can't see the custom errors usage having ANY EFFECT AT ALL.

Or they pass: they're on a page and the viewstate contains their inserted data.

Summary of this vulnerability


The ciphered text is taken from WebResource.axd / ScriptResource.axd, and a first guess at the validation key is used to generate a candidate value from that ciphered text.

This value is passed to WebResource.axd / ScriptResource.axd. If the decryption key was guessed correctly, the request will be accepted, but since the decrypted data is garbage rather than the resource it's looking for, WebResource.axd / ScriptResource.axd will return a 404 error.

If the decryption key was not guessed successfully, it will get a 500 error from the invalid-padding exception. At that point the attack application knows to increment the candidate value and try again, repeating until it sees the first 404 from WebResource.axd / ScriptResource.axd.

After successfully deducing the decryption key, this can be used to exploit the site further and find the actual machine key.
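To make the 404-vs-500 oracle described above concrete, here is a minimal Python sketch, from the attacker's point of view, of just the response-classification step. The base URL handling and the d= query parameter shape are illustrative assumptions, and the actual ciphertext manipulation is deliberately left out.

```python
# Minimal sketch (not a working exploit) of the 404-vs-500 classification the
# summary describes. URL shape and parameter usage are assumptions for
# illustration; the real ciphertext manipulation is intentionally omitted.
import urllib.error
import urllib.request

def classify(base_url: str, candidate: str) -> str:
    """Send one candidate value to WebResource.axd and report what the status code reveals."""
    url = f"{base_url}/WebResource.axd?d={candidate}"
    try:
        urllib.request.urlopen(url, timeout=10)
        return "unexpected success"
    except urllib.error.HTTPError as e:
        if e.code == 500:
            return "padding invalid"              # wrong guess, try the next candidate
        if e.code == 404:
            return "padding valid, garbage data"  # the signal the attacker is waiting for
        return f"other ({e.code})"

# The loop in the summary is then just: adjust the candidate, call classify(),
# and stop on the first "padding valid" result. With the workaround applied,
# every request produces the same response, so this classification collapses.
```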

asked Sep 23 '10 by Chris Marisic


4 Answers

re:

How does this have any bearing on whether they're redirected to a 200, 404, or 500? No one can answer this; this is the fundamental question. Which is why I call shenanigans on needing to do this tomfoolery with the custom errors returning a 200. It just needs to return the same 500 page for both errors.

I don't think that was clear from the original question, so I'll address it:

Who said the errors need to return 200? That's wrong; you just need all the errors to return the same code, and making all errors return 500 would work as well. The config proposed as a workaround just happened to use 200.

If you don't do the workaround (even if it's your own version that always returns 500), you will see 404 vs. 500 differences. That is particularly true for WebResource.axd and ScriptResource.axd, since data that decrypts but is invalid is treated as a missing resource / 404.

Just because you don't know which feature has the issue doesn't mean there aren't features in ASP.NET that give different response codes in scenarios that relate to padding vs. invalid data. Personally, I can't be sure whether any other feature gives different response codes as well; I can just tell you that those two do.


Can someone explain why returning the same error page and the same status code for custom errors matters? I find this immaterial, especially since it is advocated as part of the workaround.

Sri already answered that very clearly in the question you linked to.

It's not about hiding that an error occurred; it's about making sure the attacker can't tell the difference between errors. Specifically, it's about making sure the attacker can't determine whether the request failed because it couldn't be decrypted / the padding was invalid, vs. because the decrypted data was garbage.

You could argue: well, but I can make sure it isn't garbage to the app. Sure, but you'd need to find a mechanism in the app that allows you to do that, and the way the attack works you always need at least a tiny bit of garbage in the message. Consider these:

  • ScriptResource and WebResource both throw, so the custom error hides it.
  • View state is not encrypted by default, so by default it's not involved in the attack vector. If you go to the trouble of turning encryption on, you very likely also set it to sign/validate. When that's the case, a failure to decrypt and a failure to validate look the same, so the attacker again can't know (see the sketch after this list).
  • The auth ticket is also signed, so it's like the view state scenario.
  • Session cookies aren't encrypted, so they're irrelevant.
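For the signed view state case above, here is a rough conceptual sketch (assumed names and structure, not ASP.NET's actual machinery) of why signing removes the oracle: the validation check runs before decryption, so a tampered payload fails in exactly the same way regardless of whether its padding would have been valid.

```python
# Conceptual sketch only; names are assumptions, not ASP.NET's real API.
# The point: the HMAC (validation) check runs first, so a forged ciphertext
# never reaches the padding check at all.
import hashlib
import hmac

VALIDATION_KEY = b"server-side-secret"  # stands in for the machineKey validation key

class TamperedPayload(Exception):
    """The single failure mode exposed to the client, whatever the real cause."""

def unprotect(blob: bytes) -> bytes:
    mac, ciphertext = blob[:32], blob[32:]
    expected = hmac.new(VALIDATION_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        # Attacker-modified data dies here, before any decryption happens, so
        # "bad padding" vs. "garbage plaintext" is never observable.
        raise TamperedPayload()
    return decrypt(ciphertext)

def decrypt(ciphertext: bytes) -> bytes:
    raise NotImplementedError("placeholder; never reached for forged input")

try:
    unprotect(b"\x00" * 48)  # attacker-tweaked bytes fail validation identically every time
except TamperedPayload:
    print("rejected before decryption; no padding information leaked")
```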

I posted on my blog about how the attack gets as far as being able to forge authentication cookies.

Isn't it just as easy for the script/application executing this attack to ignore the HTTP status code and watch the outcome instead? I.e., on 4,000 attempts you get redirected to an error page, but on attempt 4,001 you stay on the same page because the padding wasn't invalidated?

As mentioned above, you need to find a mechanism that behaves that way, i.e. where decrypted garbage keeps you on the same page instead of throwing an exception and thus landing you on the same error page.

Either they fail: they're on a page and the viewstate does not contain their data. No matter what you do here, there is no way to remove the fail case, because the page will simply never contain their inserted data unless they successfully cracked the key. This is why I can't see the custom errors usage having ANY EFFECT AT ALL.

Or they pass: they're on a page and the viewstate contains their inserted data.

Read what I mentioned about the view state above. Also note that the ability to accurately re-encrypt is only gained after they have gained the ability to decrypt. That said, as mentioned above, view state is not encrypted by default, and when encryption is on it is usually accompanied by signing/validation.

answered by eglasius


I am going to elaborate on my answer in the thread you referenced.

To pull off the attack, the application must respond in three distinct ways. Those three distinct ways can be anything: status codes, different HTML content, different response times, redirects, or whatever creative way you can think of.

I'll repeat again - the attacker should be able to identify three distinct responses without making any mistake, otherwise the attack won't work.

Now, coming to the proposed solution: it works because it reduces the three outcomes to just two. How does it do that? The catch-all error page makes the status code/HTML/redirect all look identical, and the random delay makes it impossible to distinguish one outcome from the other solely on the basis of time.
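As a rough illustration of that reduction (the real workaround lives in web.config and a custom error page, so this Python pseudocode only shows the shape of the idea): every failure, whatever caused it, gets the same status, the same body, and a small random delay.

```python
# Sketch of the workaround's effect, not the actual ASP.NET customErrors
# mechanism: all failures collapse into one indistinguishable response.
import random
import time

ERROR_BODY = b"An error occurred."  # one generic page for every failure

def handle(request_fn):
    try:
        return 200, request_fn()
    except Exception:
        # Random pause so response time doesn't reveal which code path failed,
        # then the same status and body no matter what went wrong.
        time.sleep(random.uniform(0.0, 0.5))
        return 500, ERROR_BODY

def bad_request():
    raise ValueError("padding invalid")  # could just as well be "garbage data"

print(handle(bad_request))  # always (500, b'An error occurred.'), after a random pause
```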

So, it's not a lie; it does work as advertised.


EDIT: You are mixing things up with a brute-force attack. There is always going to be a pass/fail response from the server, and you are right that it can't be prevented. But for an attacker to use that information to his advantage would take decades and billions of requests to your server.

The attack being discussed allows the attacker to reduce those billions of requests to a few thousand. This is possible because of the three distinct response states. The proposed workaround reduces this back to a brute-force attack, which is unlikely to succeed.
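To put rough numbers on "billions vs. a few thousand" (assuming AES-CBC with 16-byte blocks and a 128-bit key, as a back-of-the-envelope illustration): a working padding oracle needs at most 256 guesses per byte of a block, while without it the attacker is back to guessing the key itself.

```python
# Back-of-the-envelope comparison; 16-byte blocks and 256 guesses per byte are
# the standard padding-oracle figures, the 128-bit key size is an assumption.
BLOCK_SIZE = 16          # bytes per AES block
GUESSES_PER_BYTE = 256   # worst case with a working padding oracle

with_oracle = BLOCK_SIZE * GUESSES_PER_BYTE  # about 4,096 requests per block
without_oracle = 2 ** 128                    # brute-forcing a 128-bit key

print(f"requests per block with the oracle: {with_oracle}")
print(f"key guesses without the oracle:     {without_oracle:.1e}")
```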

answered by Sripathi Krishnan


The workaround works because:

  • You do not give any indication of "how far" the slightly adjusted request got. A different error message is information the attacker can learn from.
  • With the delay you hide how long the actual calculation took, so the attacker gets no timing information showing whether they got deeper into the system.

answered by GvS


No, it isn't a big lie. See this answer in the question you referenced for a good explanation.

answered by Wyatt Barnett