Using a session token or nonce for Cross-Site Request Forgery (CSRF) protection?

I inherited some code that was recently attacked where the attacker sent repeated remote form submissions.

I implemented a prevention measure using a session auth token that I create for each user (not the session id). While I realize this specific attack is not CSRF, I adapted my solution from these posts (albeit dated):

  • https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29
  • http://tyleregeto.com/a-guide-to-nonce
  • http://shiflett.org/articles/cross-site-request-forgeries
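
Roughly, the token check I implemented works along these lines. This is a simplified sketch in modern PHP rather than my exact code, and the field name `csrf_token` is just a placeholder:

```php
<?php
// Sketch of a per-session auth token (not the session id itself).
session_start();

// Generate the token once per session.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Reject the submission unless the posted token matches the session copy.
    $sent = $_POST['csrf_token'] ?? '';
    if (!hash_equals($_SESSION['csrf_token'], $sent)) {
        http_response_code(403);
        exit('Invalid request token.');
    }
    // ... process the form ...
}
?>
<form method="post" action="">
    <input type="hidden" name="csrf_token"
           value="<?= htmlspecialchars($_SESSION['csrf_token']) ?>">
    <!-- other fields -->
    <input type="submit" value="Send">
</form>
```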

However, it still feels like there is some vulnerability here. While I understand nothing is 100% secure, I have some questions:

  • Couldn't a potential attacker simply start a valid session and then include the session id (via cookie) with each of their requests?
  • It seems a nonce would be better than a session token. What's the best way to generate and track a nonce?
  • I came across some points about these solutions only working in a single window. Could someone elaborate on this point?
  • Do these solutions always require a session? Or can these tokens be created without a session? UPDATE: this particular page is just a single-page form (no login), so starting a session just to generate a token seems excessive.
  • Is there a simpler solution (not CAPTCHA) that I could implement to protect against this particular attack that would not use sessions?

In the end, I am looking for a better understanding so I can implement a more robust solution.

asked Aug 03 '11 by Jason McCreary

1 Answer

As far as I understand, you need to do three things: make all of your data-changing actions available only via POST requests, disallow POST requests without a valid referrer (it must be from the same domain), and check an auth token in each POST request (the POSTed token value must be the same as the token in the cookie).

The first two will make it really hard to do any harmful CSRF request, as those are usually hidden images in emails, on other sites, etc., and making a cross-domain POST request with a valid referrer should be impossible or very hard in modern browsers. The third will make it completely impossible to do any harmful action without stealing the user's cookies or sniffing their traffic.
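
To make that concrete, here is a minimal sketch of checks two and three (the referrer test and the token-vs-cookie comparison) in PHP. The cookie name `auth_token` is an assumption, and the host comparison is simplified (HTTP_HOST may include a port in some setups):

```php
<?php
// Sketch: validate the referrer and compare the POSTed token with the cookie.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Check 2: the referrer must be present and point at our own domain.
    $referer = $_SERVER['HTTP_REFERER'] ?? '';
    if (parse_url($referer, PHP_URL_HOST) !== $_SERVER['HTTP_HOST']) {
        http_response_code(403);
        exit('Invalid referrer.');
    }

    // Check 3: the token in the form must equal the token in the cookie.
    $cookieToken = $_COOKIE['auth_token'] ?? '';
    $postToken   = $_POST['auth_token'] ?? '';
    if ($cookieToken === '' || !hash_equals($cookieToken, $postToken)) {
        http_response_code(403);
        exit('Invalid auth token.');
    }

    // ... safe to perform the data-changing action ...
}
```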

Now about your questions:

  1. This question really confuses me: if you are using auth tokens correctly, then the attacker must know the user's token from the cookie to send it along with the request, so how can starting a valid session of the attacker's own do any harm?
  2. Nonces will make all your links ugly; I have never seen anyone use them anymore. I also think your site could be DoSed through them, since you must save and search every nonce in a database: a lot of requests generating nonces may increase your database size really fast (and searching it will be slow). A sketch of one way to generate and track nonces follows this list.
  3. If you allow only one nonce per user_id to prevent the DoS attack from (2), then when a user opens one page, opens another page, and then submits the first page, the request will be denied, as a new nonce was generated and the old one is already invalid.
  4. How else would you identify a unique user without a session ID, be it in a cookie, GET, or POST variable?
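
As promised in point 2, here is a rough sketch of generating and tracking one-time nonces, using the session as the store (a database table works the same way); the function names are hypothetical:

```php
<?php
// Sketch: one-time nonces tracked in the session (a DB table works similarly).
session_start();

function create_nonce(): string {
    $nonce = bin2hex(random_bytes(16));
    // Remember the nonce with a timestamp so stale entries can be purged.
    $_SESSION['nonces'][$nonce] = time();
    return $nonce;
}

function consume_nonce(string $nonce): bool {
    if (!isset($_SESSION['nonces'][$nonce])) {
        return false; // unknown or already used
    }
    unset($_SESSION['nonces'][$nonce]); // single use: invalidate immediately
    return true;
}

// Usage: embed create_nonce() in the form as a hidden field, then on submit:
// if (!consume_nonce($_POST['nonce'] ?? '')) { http_response_code(403); exit; }
```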

Update: since we are not talking about CSRF anymore, you may implement many obscure defences that will prevent spider bots from submitting your form:

  1. Hidden form fields that should not be filled (bots usually fill most form fields they see that have attractive names, even if they are actually hidden from the user); see the honeypot sketch after this list.
  2. JavaScript mouse trackers (you can analyse recorded mouse movements to detect bots).
  3. File request log analysis (when a page is loaded, its JavaScript/CSS/images should be loaded too in most cases, though some (really rare) users have them turned off).
  4. JavaScript form changes (a hidden (or not) field is added to the form with JavaScript and is required server-side; bots usually don't execute JavaScript).
  5. Traffic analysis tools like Snort to detect bot patterns (strange user agents, too-fast form submitting, etc.).
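
For example, point 1 (a honeypot field) can be as simple as the sketch below; the field name `website` is an arbitrary example:

```php
<?php
// Sketch: honeypot field. Humans never see it; naive bots fill it in.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (!empty($_POST['website'])) { // real users leave it blank
        http_response_code(400);
        exit; // drop the bot submission
    }
    // ... process the legitimate submission ...
}
?>
<form method="post" action="">
    <!-- Hidden via CSS rather than type="hidden": bots often skip true hidden
         inputs but happily fill text inputs with attractive names. -->
    <div style="position:absolute; left:-9999px;" aria-hidden="true">
        <label>Website <input type="text" name="website" value=""></label>
    </div>
    <!-- real fields here -->
    <input type="submit" value="Send">
</form>
```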

...and more, but at the end of the day some modern bots totally emulate real user behaviour (using real browser API calls), so if anyone really wants to attack your site, no defence like this will help you. Even CAPTCHA today is not very reliable: besides complex image-recognition algorithms, you can now buy 1000 solved CAPTCHAs for any site for as little as $1 (you can find services like this mostly in developing countries). So really, there is no 100% defence against bots; each case is different. Sometimes you will have to create a complex defence system yourself, and sometimes just a little tweak will help.

answered Oct 10 '22 by XzKto