 

What is the benefit of blocking cookies for clicked links? (SameSite=strict)

So for Google Chrome and Opera, cookies have a SameSite attribute, which can take one of two values: strict or lax.

One of the differences between the two is that SameSite=strict prevents the cookie from being sent when we click a link to another domain.

I know that SameSite is not a W3C recommendation yet, but what is the potential benefit of this behavior? I find it rather annoying, because the cookie is sent anyway as soon as we refresh or click another link on the current domain. That leads to a rather weird user experience - for example: we appear to be logged out, then we click some link within the site or refresh, and we are suddenly authenticated again.

I'm aware that it's not designed for the greatest user experience, but rather for security. But what do we actually gain here in terms of security?
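
For reference, this is roughly how a server would issue such a cookie (a minimal TypeScript/Express sketch; the route and cookie names are just for illustration, not part of the question):

    import express from "express";

    const app = express();

    app.get("/login", (_req, res) => {
      // With SameSite=Strict the browser will not attach this cookie to any
      // request initiated by another site, including a plain link click that
      // navigates from another domain to ours - hence the "logged out until
      // you refresh" experience described above.
      res.cookie("session", "opaque-session-id", {
        httpOnly: true,
        secure: true,
        sameSite: "strict", // "lax" would allow top-level GET navigations to carry it
      });
      res.send("Logged in");
    });

    app.listen(3000);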

asked Jan 25 '17 by Robo Robok

2 Answers

The benefits of using strict instead of lax are limited. I can see two:

  1. Protection against CSRF attacks via GET requests. Such attacks are not normally possible, since they rely on the server implementing GET endpoints with side effects (incorrectly, and in violation of the semantics specified by RFC 7231). You gave an example in the comments (a minimal sketch of this pattern appears at the end of this answer):

    Imagine we have a very bad design and all our actions are performed via the GET method. The attacker placed a link saying "Save puppies" which links to http://oursite.com/users/2981/delete. That's the only use case I can think of - when we have some action done via the GET method, while it shouldn't be.

  2. Protection against timing attacks. There's a class of attacks - which were already discovered back in 2000, but which Mathias Bynens has recently explored and popularised - that involve a malicious webpage using JavaScript to initiate a request to a page on another domain and then measuring how long it takes, and inferring things about the user from the time taken. An example that Mathias invented is to initiate a request to a Facebook page with a restricted audience, such that it is only accessible by, say, people in Examplestan. Then the evil webpage times how long it takes for the response to come back. Facebook serves the error page when you try to access an inaccessible post faster than it serves the actual post, so if the evil webpage gets a quick response, it can infer that the user is not in Examplestan; if it gets a slow response, then the user is probably an Examplestani.

Since browsers don't stop executing JavaScript on a page when you do top-level navigation until they've received a response from the URL being navigated to, these timing attacks are unfortunately perfectly possible with top-level navigation; your evil page can just navigate the user away via location=whatever, then time how long it takes for the other page to load by repeatedly recording the current timestamp to localStorage in a loop. Then on a subsequent visit the evil page can check how long the page took to start unloading, and infer the response time of the page being timed.
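
As an illustration only (the target URL and storage keys below are hypothetical), the attacker-side script could look roughly like this:

    // Runs on the evil page: record timestamps for as long as this page keeps
    // executing, then kick off the cross-site navigation being timed.
    const TARGET = "https://social.example/restricted-post"; // hypothetical URL

    localStorage.setItem("nav-start", String(Date.now()));
    setInterval(() => {
      // Keeps firing until the response for TARGET arrives and this page unloads.
      localStorage.setItem("last-alive", String(Date.now()));
    }, 0);

    location.href = TARGET;

    // On the user's next visit, the evil page reads both keys:
    // (last-alive - nav-start) approximates how long TARGET took to respond,
    // which is enough to distinguish a fast error page from a slower real page.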

The domain hosting the target page - like facebook.com, in the case of Mathias's example - could protect its users from this kind of attack by using samesite=strict cookies.

Obviously, these limited benefits come at a serious UX tradeoff, and so are typically not worth it compared to the already-pretty-good protections offered by samesite=lax!
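
The sketch promised in point 1 - purely illustrative, with hypothetical route names - shows the kind of GET-with-side-effects endpoint that samesite=strict protects against (TypeScript with Express again):

    import express from "express";

    const app = express();

    // Anti-pattern: a state-changing action exposed on GET.
    app.get("/users/:id/delete", (req, res) => {
      const cookies = req.headers.cookie ?? "";
      if (!cookies.includes("session=")) {
        // With a SameSite=Strict session cookie, a cross-site link click lands
        // here: the browser withheld the cookie, so the forged request fails.
        return res.status(401).send("Not logged in");
      }
      // With no SameSite restriction the cookie is attached and the damage is done.
      res.send(`User ${req.params.id} deleted`);
    });

    app.listen(3000);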

answered by Mark Amery


There should be an answer here so I'm just going to repeat what's already been said in the comments.

You should always use samesite=lax, unless you're OK with giving your users a terrible user experience. lax is secure enough: cross-site requests will only carry the cookie on top-level navigations that use a safe HTTP method (e.g. GET). If you do dangerous things with GET requests, well, you have bigger problems.
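
For illustration (a minimal sketch with assumed names, not taken from the answer), this is all that lax asks of the server, and the comment spells out which cross-site requests still carry the cookie:

    import express from "express";

    const app = express();

    app.get("/login", (_req, res) => {
      res.cookie("session", "opaque-session-id", {
        httpOnly: true,
        secure: true,
        // With "lax", the browser still sends this cookie on cross-site
        // top-level navigations that use a safe method (e.g. clicking a link),
        // but withholds it from cross-site POSTs, iframes, and background
        // requests such as fetch/XHR - so link-following keeps working while
        // the classic CSRF vectors lose the session.
        sameSite: "lax",
      });
      res.send("Logged in");
    });

    app.listen(3000);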

answered by Johnny Oshika