Public facing Authentication mechanisms for REST

I am designing a new service that would enable 'customers' to register and pay a per-use fee for particular searches they perform. This service would be exposed through RESTful and SOAP interfaces. Typically the web service would integrate with the customer's website and then be exposed to the 'public', so anyone using the customer's website could take advantage of my web service features (which the customer would pay for, but with full control over moderating the requests so they don't get charged too much).

I want to design the service so that integration is as simple as possible. The web service API will change over time, so requiring the customer to build an internal proxy that re-exposes the web service to the public is, in some cases, too much of a deterrent. So the issue as I see it is creating a web service that balances authentication, security and ease of integration.

Ideal

  1. Not use OAuth
  2. Avoid forcing the customer to create an internal proxy which re-exposes the same web service API I have already.
  3. Be secure (token, username/password, whatever, plus SSL)
  4. Embed a JavaScript library in the customer's website - a client-side JavaScript library to make the integration steps even easier.
  5. The JavaScript library would need to be secure enough that the public couldn't simply grab the credentials and re-purpose them themselves
  6. Not be too hacky, if possible, so the web service doesn't have to be re-built if Firefox 87 comes out (to be released in as many minutes) and decides to fubar it.

It seems that some kind of 3-way authentication process is needed for this to work, i.e. one that authenticates a particular client (a member of the public), the customer's website, and my web service.

Has anyone implemented something similar, and how did they tackle a situation like this?

I also understand there is a balance between what can be done and what would violate cross-domain security, so perhaps the entire web service might be exposed through another GET-only interface that returns JSONP data.
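(Just as a point of reference, a GET-only JSONP interface can be quite small. A sketch using Express, where the route and parameter names are my own assumptions rather than part of any real service:)

    // jsonp-endpoint.ts - sketch of a GET-only JSONP interface.
    import express from "express";

    const app = express();

    app.get("/api/search", (req, res) => {
      const term = String(req.query.q ?? "");
      // res.jsonp() emits e.g. myCallback({"term":"...","results":[]}) when the
      // request includes ?callback=myCallback, which is what lets a <script> tag
      // on the customer's domain consume the response despite cross-domain rules.
      res.jsonp({ term, results: [] });
    });

    app.listen(3000);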

/** Addendum **/

I have since discovered a web service that does what I'm looking for. However, I am not confident I understand the implementation details entirely, so perhaps someone could also elaborate on my thinking.

The web service I discovered seems to host the JavaScript on the service side. The customer then integrates their website with the service by including that JavaScript in a script tag, supplying a key to do so, i.e. something like:
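    <script src="http://theservice.example.com/client-lib.js?key=CUSTOMER_KEY"></script>

(The domain and key value here are placeholders of mine; only the 'client-lib.js' name and the idea of a key parameter come from the service I found.)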

If I add that same script to my own website, however, it doesn't work. So somewhere along the line the key must be registered to a particular customer domain, and 'client-lib.js' is actually a servlet or something similar that can somehow detect that the 'public' user coming in has actually originated from the 'customer' domain.

Is my thinking right? Is there some kind of HTTP header that can be used this way? Is that safe?

Cheers

asked by vcetinick

1 Answer

First of all - let me provide you with a link to another SO question which I answered yesterday, as it gives a pretty extensive answer to a similar set of questions.

I am assuming that you are going to charge the owner of the site from which the search is made, and not care so much who the individual user is who makes the search. If that's incorrect, please clarify and I will update my answer accordingly.

Clearly, in any such case, the first and foremost thing you need to do is make sure you know which client this is on each request. And - as you said - you also want to make sure you're protecting yourself from cross-site attacks and people stealing your users' keys.

What you might consider would be the following:

  1. Create a private key on your side - which only your service knows.
  2. Whenever a new consumer site creates an account with you, create a new shared key which only you and they will know. I suggest creating this key by using your private key as a password and encrypting some kind of identifier which will let you identify this particular consumer site (see the sketch after this list).
  3. As part of your registration process, make the consumer site tell you what URI they will be using your scripts on.
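A rough sketch of steps 1 and 2, using Node's built-in crypto module (the file name, passphrase handling and helper names are all my own, and an authenticated cipher is used so that a tampered key fails to decrypt outright rather than producing gibberish - which amounts to the same check described below):

    // keys.ts - sketch of creating/decrypting per-consumer shared keys.
    import { createCipheriv, createDecipheriv, createHash, randomBytes } from "crypto";

    // Derive a 32-byte AES key from the service's private passphrase (step 1).
    // SERVICE_PRIVATE_KEY is a placeholder - keep the real value out of source control.
    const aesKey = createHash("sha256")
      .update(process.env.SERVICE_PRIVATE_KEY ?? "change-me")
      .digest();

    // Step 2: the shared key handed to a newly registered consumer site is just
    // their identifier encrypted with the service's private key.
    export function createSharedKey(clientId: string): string {
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", aesKey, iv);
      const ciphertext = Buffer.concat([cipher.update(clientId, "utf8"), cipher.final()]);
      // iv + auth tag + ciphertext, hex-encoded so it survives being put in a URL.
      return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("hex");
    }

    // Reverse the process; returns null if the key was altered or forged.
    export function decryptSharedKey(sharedKey: string): string | null {
      try {
        const raw = Buffer.from(sharedKey, "hex");
        const decipher = createDecipheriv("aes-256-gcm", aesKey, raw.subarray(0, 12));
        decipher.setAuthTag(raw.subarray(12, 28));
        return Buffer.concat([decipher.update(raw.subarray(28)), decipher.final()]).toString("utf8");
      } catch {
        return null;
      }
    }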

Now - the way that you do both your tracking and your authentication becomes fairly simple.

You mentioned providing a JS library which won't need updating every time FF updates. I suggest building that library using jQuery, or another similarly well-supported cross-browser JS foundation library, and letting that wrap your AJAX.

When the client site requests your script, however, have them provide you something like:

http://www.yourdomain.com/scripts/library.js?key={shared key}

On your side, when you receive this request, check the following (a sketch follows the list):

  1. When you decrypt their shared key using your private key, you should not get gibberish. If you do - it's because their key has been altered in some way - and is not valid. This should result in a 401: Unauthorized error.
  2. Once you decrypt the key and know which client site this is (because that's what the key contains) - check to make sure that the request is coming from the same URI that client registered with. This now protects you from someone stealing their key and injecting it into a different website.
  3. As long as the above matches, let them download the file.
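Put together, the script-download endpoint might look something like this sketch (Express, the route, the Referer-based origin check and the lookup table are assumptions of mine; decryptSharedKey is the helper sketched earlier):

    // serve-library.ts - sketch of the authenticated script download.
    import express from "express";
    import { readFileSync } from "fs";
    import { decryptSharedKey } from "./keys";

    // Hypothetical record of the URI each consumer site registered with (step 3).
    const registeredOrigins: Record<string, string> = {
      "client-42": "http://www.customersite.com",
    };

    const app = express();

    app.get("/scripts/library.js", (req, res) => {
      const sharedKey = String(req.query.key ?? "");
      const clientId = decryptSharedKey(sharedKey);

      // 1. A key that doesn't decrypt cleanly has been altered or forged.
      if (clientId === null || !(clientId in registeredOrigins)) {
        return res.status(401).send("Unauthorized");
      }

      // 2. The page loading the script must live on the URI the client registered.
      const referer = req.get("referer") ?? "";
      if (!referer.startsWith(registeredOrigins[clientId])) {
        return res.status(401).send("Unauthorized");
      }

      // 3. All good - serve the library, injecting the shared key so the script
      //    can send it back with every AJAX call it makes.
      const template = readFileSync("client-lib.template.js", "utf8");
      res.type("application/javascript");
      res.send(template.replace("__SHARED_KEY__", sharedKey));
    });

    app.listen(3000);

Note that the Referer header can be missing or spoofed by a determined attacker, so treat this as identifying the paying customer rather than as bullet-proof security - which matches the caveat at the end of this answer.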

Now - when you serve the JS file, you can do so in a way that injects the key into that file - and therefore it has access to their shared key. On each of your AJAX requests, include that key so that you can identify which client the request is coming from. In a RESTful environment, there shouldn't really be sessions - so you need this level of authentication on each request. I suggest including it as a cookie.
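To illustrate, the served library might boil down to something like this sketch (fetch is used instead of jQuery purely to keep the sketch dependency-free, and the API URL, function name and __SHARED_KEY__ placeholder are all my own):

    // client-lib.template.ts - compiled to the client-lib.js that gets served.
    // The server replaces __SHARED_KEY__ with the consumer's key at download time.
    const SHARED_KEY = "__SHARED_KEY__";

    // A small search helper for the customer's page; every call carries the shared
    // key so the service knows which customer to bill. A cookie (as suggested
    // above) or a custom header would work just as well as a query parameter.
    export async function search(term: string): Promise<unknown> {
      const response = await fetch(
        "http://www.yourdomain.com/api/search" +
          `?q=${encodeURIComponent(term)}&key=${encodeURIComponent(SHARED_KEY)}`
      );
      if (!response.ok) {
        throw new Error(`Search failed with status ${response.status}`);
      }
      return response.json();
    }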

On your server side - simply repeat the checks of their key on each subsequent request - and voilà - you've built yourself some fairly tight security without a lot of overhead.
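That per-request check can live in one small piece of middleware along these lines (again a sketch with names of my own, reusing decryptSharedKey from the earlier sketch):

    // require-shared-key.ts - sketch of re-authenticating every API request.
    import type { Request, Response, NextFunction } from "express";
    import { decryptSharedKey } from "./keys";

    export function requireSharedKey(req: Request, res: Response, next: NextFunction) {
      // The key could arrive as a query parameter, a header or a cookie - a query
      // parameter is assumed here for simplicity.
      const clientId = decryptSharedKey(String(req.query.key ?? ""));
      if (clientId === null) {
        return res.status(401).send("Unauthorized");
      }
      // Expose the client id to later handlers for metering/billing.
      res.locals.clientId = clientId;
      next();
    }

Mounting it with app.use("/api", requireSharedKey) then protects every search call without any session state.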

That said - if you expect a lot of traffic - you may want to come back to this and explore deeper security processes in the future, as rolling your own security can leave unexpected holes. However - it is a good start and will get you off the ground.

Feel free to ask any questions if you need, and I will try to update my answer accordingly.

answered by Troy Alford