
Implementing OAuth 2 in a multi-tenant application using dynamic scopes

I'm currently trying to migrate a multi-tenant system from a "custom" authentication and authorization implementation to OAuth2.

The multi-tenancy model is very similar to GitHub's structure, so I'll use it as the main example. Let's assume the application has users, repositories and organizations. Users have access to repositories directly, or through organizations they are members of. Depending on their access rights, users should have different permissions on repositories and their sub-resources (like /repository/issues), or on organizations and their sub-resources (/organization/members) for users who manage them. Unlike GitHub's OAuth2 solution, this system should be able to provide different levels of permissions across repositories or organizations (GitHub does this at a different level with a custom implementation).

The goal is to keep the logic as simple as possible, encapsulate everything in an authorization service and piggyback on OAuth2 as much as possible.

My approach was to deploy a generic OAuth2 service, and handle the permissions using dynamic scopes:

  • user:read
  • user:write
  • repo:read
  • org:read
  • repo:<repo_id>:issues:read
  • repo:<repo_id>:issues:write
  • org:<org_id>:members:read
  • org:<org_id>:members:write

This enables granular permissions for clients and users, such as a user being able to read and write issues in one of their repos, but only read them in another.
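
To make the idea concrete, here is a minimal sketch (not from the original post; the scope grammar and function names are assumptions) of how a resource server could interpret such dynamic scope strings:

```python
# Assumed scope grammar: "<resource>:<action>" for coarse scopes
# and "<resource>:<id>:<sub_resource>:<action>" for tenant-specific ones,
# e.g. "repo:read" or "repo:42:issues:write".

def scope_allows(granted_scopes, resource, action, resource_id=None, sub_resource=None):
    """Return True if any granted scope covers the requested access."""
    for scope in granted_scopes:
        parts = scope.split(":")
        if len(parts) == 2:                       # coarse scope, e.g. "repo:read"
            if parts == [resource, action]:
                return True
        elif len(parts) == 4:                     # dynamic scope, e.g. "repo:42:issues:read"
            if parts == [resource, str(resource_id), sub_resource, action]:
                return True
    return False

# Example: a token granted with these scopes
granted = ["user:read", "repo:42:issues:write", "repo:7:issues:read"]
assert scope_allows(granted, "repo", "write", resource_id=42, sub_resource="issues")
assert not scope_allows(granted, "repo", "write", resource_id=7, sub_resource="issues")
```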

While this seems to solve the problem, the main limitation is requesting the scopes in the first place. Since users do not know the IDs of the repos and orgs they have access to, they cannot request a correct list of scopes when contacting the authorization server.

In order to overcome this I considered 2 solutions:

Solution 1

  1. Issue a token for repo:read and org:read
  2. Retrieve list of repos and orgs the user has access to
  3. Issue a second token with all necessary scopes

On deeper thought, this turns out not to be viable, since it would not support grants like implicit or authorization_code unless the authorization server itself dealt with this "discovery" of resources.
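
For illustration, a minimal sketch of the two-token flow (endpoints, payloads and field names are hypothetical). It only works for grants where the client can drive the whole loop itself, e.g. resource owner password credentials, which is exactly the limitation noted above:

```python
import requests

AUTH_SERVER = "https://auth.example.com"   # hypothetical authorization server
API = "https://api.example.com"            # hypothetical resource server

# 1. Issue a coarse token that only allows discovery.
coarse = requests.post(f"{AUTH_SERVER}/oauth/token", data={
    "grant_type": "password",              # illustration only; not viable for implicit/code grants
    "username": "alice", "password": "secret",
    "scope": "repo:read org:read",
}).json()["access_token"]

# 2. Discover the repos (and orgs) the user has access to.
repos = requests.get(f"{API}/user/repos",
                     headers={"Authorization": f"Bearer {coarse}"}).json()

# 3. Ask for a second token with the fully qualified dynamic scopes.
wanted = " ".join(f"repo:{r['id']}:issues:write" for r in repos)
full = requests.post(f"{AUTH_SERVER}/oauth/token", data={
    "grant_type": "password",
    "username": "alice", "password": "secret",
    "scope": wanted,
}).json()["access_token"]
```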

Solution 2

The first two steps are the same as in the first solution; in the third step, users would only be able to issue tenant-scoped tokens. By extending OAuth2 with a parameter identifying the tenant (/authorize?...&repo=<repo_id>), clients using the authorization_code grant would have to issue a token for every tenant. The token issued in step 1 would persist the user's identity on the authorization server and eliminate the need to re-authenticate when the user switches between tenants. The downside of this approach is that it increases the complexity of client integrations and that it might deviate from the standard in some way.
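
A minimal sketch of what such a tenant-scoped authorization request could look like (the "repo" parameter is the non-standard extension described above; the relative scope format and URLs are assumptions):

```python
from urllib.parse import urlencode

def build_authorize_url(client_id, redirect_uri, repo_id, state):
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "issues:read issues:write",  # scopes are now relative to the tenant (assumed)
        "repo": repo_id,                      # non-standard parameter identifying the tenant
        "state": state,
    }
    return "https://auth.example.com/authorize?" + urlencode(params)

# Switching tenants means repeating the redirect with a different repo id;
# the authorization server can skip re-authentication if it kept the user's session.
print(build_authorize_url("my-client", "https://app.example.com/cb", 42, "xyz"))
```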

I'm looking for a second opinion on this, which would possibly simplify the problem and make sure the solution adheres to the standard.

asked Aug 02 '18 by Razvan


2 Answers

tl;dr: What about using self-contained access tokens that convey user identity information, with the access policy defined at the API endpoint?

The problem you face right now comes from a mismatch with what the OAuth 2.0 scope is meant for. The scope value in OAuth 2.0 is defined to be used by the client application:

The authorization and token endpoints allow the client to specify the scope of the access request using the "scope" request parameter.

But in your approach, you try to make it something defined by the end user (the human user).

A solution would be to make the authorization server independent of permission details. That means the authorization server only issues tokens that are valid for your service/system. These tokens can be self-contained, holding a user identifier and optionally organisation details (claims). They may contain other details required by your service (up to you to decide). The ideal format is a JWT.
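
A minimal sketch of issuing such a self-contained token, assuming PyJWT and an RSA signing key (issuer, audience and claim names are illustrative):

```python
import time
import jwt  # PyJWT

def issue_token(private_key, user_id, org_ids):
    now = int(time.time())
    claims = {
        "iss": "https://auth.example.com",   # hypothetical issuer
        "sub": user_id,                      # who the user is
        "orgs": org_ids,                     # optional organisation details
        "aud": "https://api.example.com",    # your service
        "iat": now,
        "exp": now + 3600,
    }
    # Note: no per-repo permissions in the token; those live with the service.
    return jwt.encode(claims, private_key, algorithm="RS256")
```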

Once your client (the consumer of the system, like the GitHub website) obtains this token, it can call the system backend. Once your system backend receives the token, it can validate it for integrity and required claims, and use those claims to identify which resources are granted to this specific user. The permission levels you defined for scopes are now stored with your service backend.
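
A minimal sketch of the backend side under the same assumptions: validate the JWT, then answer the authorization question from a permission store owned by the service itself (the store layout is hypothetical):

```python
import jwt  # PyJWT

# Hypothetical permission store: user id -> repo id -> set of allowed actions
PERMISSIONS = {
    "alice": {42: {"issues:read", "issues:write"}, 7: {"issues:read"}},
}

def authorize(token, public_key, repo_id, action):
    # Validate signature, expiry and audience; raises on an invalid token.
    claims = jwt.decode(token, public_key, algorithms=["RS256"],
                        audience="https://api.example.com")
    user_id = claims["sub"]
    # The actual permission check happens against the service's own data.
    return action in PERMISSIONS.get(user_id, {}).get(repo_id, set())
```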

The advantage of this is that user identity can reside anywhere. For example, you can use Google or Azure AD, and as long as they can provide you a valid token, you can support such users in your system. This works well because permissions are not stored in those identity providers, and super users keep the ability to define and maintain the permissions.

answered Sep 28 '22 by Kavindu Dodanduwa


Agree with everything mentioned by @Kavindu Dodanduwa but would like to add some additional details here.

This problem indeed lies beyond what standard OAuth 2.0 covers. If you want to manage permissions per resource (e.g. per repo or organization), this should be handled in your service or in a gateway in front of it. Typically you need some kind of access-control list (ACL) stored on your backend, which you can then use to authorize users.
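
For illustration, a minimal sketch of such a per-resource ACL (the role names and table layout are assumptions, not part of the answer):

```python
# Role -> concrete permissions on a resource's sub-resources
ROLE_PERMISSIONS = {
    "admin":  {"issues:read", "issues:write", "members:read", "members:write"},
    "writer": {"issues:read", "issues:write"},
    "reader": {"issues:read"},
}

# ACL entry: (user_id, resource_type, resource_id) -> role
ACL = {
    ("alice", "repo", 42): "writer",
    ("alice", "org", 9):   "reader",
}

def is_allowed(user_id, resource_type, resource_id, permission):
    role = ACL.get((user_id, resource_type, resource_id))
    return role is not None and permission in ROLE_PERMISSIONS[role]
```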

If you'd like to look at existing standards, check out XACML and UMA (which is an extension of OAuth 2.0). However, I find them rather complicated to implement and manage, especially in a distributed environment.

Instead, I'd suggest an alternative solution: use a sidecar for performing authorization on your service.

Open Policy Agent could be a good solution for such an architecture.
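
A minimal sketch of the sidecar idea: the service delegates each authorization decision to a local Open Policy Agent instance via its REST API (the policy path "authz/allow" and the input shape are assumptions):

```python
import requests

def opa_allows(user_id, action, resource_type, resource_id):
    # Query the OPA sidecar's data API for a policy decision.
    decision = requests.post(
        "http://localhost:8181/v1/data/authz/allow",  # OPA running as a sidecar
        json={"input": {
            "user": user_id,
            "action": action,
            "resource": {"type": resource_type, "id": resource_id},
        }},
    ).json()
    # "result" is absent if the policy is undefined; treat that as a deny.
    return decision.get("result", False)
```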

answered Sep 28 '22 by Yuriy Yunikov