 

How do I protect OAuth keys from a user decompiling my project?

I am writing my first application to use OAuth. It is a desktop application, not a website or a mobile app where it would be harder to access the binary, so I am concerned about how to protect my application key and secret. It seems trivial to look at the compiled file and find the string that stores the key.

Am I overreacting, or is this a genuine problem (with a known solution) for desktop apps?

This project is being coded in Java but I am also a C# developer so any solutions for .NET would be appreciated too.

EDIT: I know there is no perfect solution; I am just looking for mitigations.

EDIT2: I know that pretty much the only option is some form of obfuscation. Are there any free tools for .NET and Java that do string obfuscation?
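For context (not tied to any particular obfuscation product), the following is a minimal sketch of the XOR-style string encoding many obfuscators apply automatically. It also illustrates why this only raises the bar slightly: the decode routine and mask ship in the same binary as the encoded secret.

```java
import java.nio.charset.StandardCharsets;

public class ObfuscatedSecret {
    // Arbitrary mask chosen for illustration; a real obfuscator would
    // generate this (and more elaborate transforms) at build time.
    private static final byte MASK = 0x5A;

    // In practice, encoding happens at build time so the plain literal
    // never appears in the class file.
    static byte[] encode(String plain) {
        byte[] bytes = plain.getBytes(StandardCharsets.UTF_8);
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] ^= MASK;
        }
        return bytes;
    }

    // The decode routine must ship with the application, which is why
    // a reverse engineer can always recover the original string.
    static String decode(byte[] encoded) {
        byte[] bytes = encoded.clone();
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] ^= MASK;
        }
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] hidden = encode("my-secret");
        System.out.println(decode(hidden)); // prints "my-secret"
    }
}
```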

asked Oct 01 '11 by Scott Chamberlain


People also ask

How do I protect my API keys?

If you store API keys or any other private information in files, keep the files outside your application's source tree to keep your keys out of your source code control system. This is particularly important if you use a public source code management system, such as GitHub.
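As a concrete sketch of that advice (the variable name MY_APP_API_KEY is a placeholder), an application can read the key from an environment variable or a file outside the repository instead of a hard-coded string literal:

```java
public class ApiKeyLoader {
    // Validate a key value read from outside the source tree.
    // Failing fast is better than sending requests with a blank key.
    static String require(String value) {
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException(
                "API key is not set; refusing to start without one");
        }
        return value;
    }

    public static void main(String[] args) {
        // MY_APP_API_KEY is a hypothetical name; use whatever your
        // deployment convention dictates.
        String fromEnv = System.getenv("MY_APP_API_KEY");
        if (fromEnv == null) {
            System.out.println("MY_APP_API_KEY not set; set it before launching");
        } else {
            System.out.println("Key loaded, length: " + require(fromEnv).length());
        }
    }
}
```

Note that this keeps the key out of version control, but on a desktop machine the user can still read their own environment, so it does not solve the decompilation problem the question asks about.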

How do I share API keys securely?

Before sharing your API key, regenerate it and label it as the newest shared key. Don't share API keys through email. Always use HTTPS/SSL for your API requests — some APIs won't field your request if you're not using it. Assign a unique API key to each project and label them accordingly.

What is the most secure method to transmit an API key?

OAuth. OAuth is a popular security mechanism that is widely used for user authentication. Similar to how a logged-in session works on a website, OAuth requires the client to "log in" to the web API before it can access the rest of the service. This is typically achieved by exposing a single endpoint for the login process.
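For a concrete picture of that login step, here is a sketch of the form body for an OAuth 2.0 client-credentials token request (RFC 6749, section 4.4). The credential values are placeholders, and the request itself would be POSTed to the provider's token endpoint over HTTPS:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class TokenRequest {
    // Build the application/x-www-form-urlencoded body for a
    // client-credentials grant. Real code would POST this to the
    // provider's token endpoint and parse the JSON access token
    // out of the response.
    static String clientCredentialsBody(String clientId, String clientSecret) {
        try {
            return "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
                + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }

    public static void main(String[] args) {
        // "demo-client" / "demo-secret" are hypothetical credentials.
        System.out.println(clientCredentialsBody("demo-client", "demo-secret"));
    }
}
```

Note that this grant is meant for confidential clients (servers); as the answers below explain, embedding the client secret in a distributed desktop binary defeats its purpose.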


2 Answers

OAuth is not designed to be used in the situation you described, i.e. its purpose is not to authenticate a client device to a server or other device. It is designed to allow one server to delegate access to its resources to a user who has been authenticated by another server, which the first server trusts. The secrets involved are intended to be kept secure at the two servers.

I think you're trying to solve a different problem. If you're trying to find a way for the server to verify that it is only your client code that is accessing your server, you're up against a very big task.

answered Sep 25 '22 by David Pope


There is no good or even half good way to protect keys embedded in a binary that untrusted users can access.

There are reasons to put in at least a minimum amount of effort to protect yourself.

The minimum amount of effort won't be effective. Even the maximum amount of effort won't be effective against a skilled reverse engineer / hacker with just a few hours of spare time.

If you don't want your OAuth keys to be hacked, don't put them in code that you distribute to untrusted users. Period.

Am I over reacting or is this a genuine problem (with a known solution) for desktop apps?

It is a genuine problem with no known (effective) solution. Not in Java, not in C#, not in Perl, not in C, not in anything. Think of it as if it was a Law of Physics.


Your alternatives are:

  • Force your users to use a trusted platform that will only execute crypto-signed code. (Hint: this is most likely not practical for your application because current-generation PCs don't work this way. And even trusted platforms can be hacked given the right equipment.)

  • Turn your application into a service and run it on a machine / machines that you control access to. (Hint: it sounds like OAuth 2.0 might remove this requirement.)

  • Use some authentication mechanism that doesn't require permanent secret keys to be distributed.

  • Get your users to sign a legally binding contract to not reverse engineer your code, and sue them if they violate the contract. Figuring out which of your users has hacked your keys is left to your imagination ... (Hint: this won't stop hacking, but may allow you to recover damages, if the hacker has assets.)
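As an illustration of the third alternative above, a server can issue each user a short-lived session token at login time and then verify subsequent requests with an HMAC over a server-chosen challenge, so no permanent secret ever ships in the binary. A minimal sketch of the HMAC step (the token and challenge values are hypothetical):

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class ChallengeResponse {
    // Compute an HMAC-SHA256 response to a server-issued challenge,
    // keyed with a short-lived session token obtained when the user
    // logged in. The token expires, so extracting it from memory is
    // far less valuable than extracting an embedded permanent key.
    static byte[] respond(byte[] sessionToken, String challenge) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sessionToken, "HmacSHA256"));
        return mac.doFinal(challenge.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] token = "short-lived-token".getBytes(StandardCharsets.UTF_8);
        byte[] reply = respond(token, "server-nonce-1234");
        System.out.println("Response length: " + reply.length); // 32 bytes for SHA-256
    }
}
```

The server performs the same computation with its copy of the session token and compares the results; a stolen response is useless for any other challenge.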


By the way, argument by analogy is a clever rhetorical trick, but it is not logically sound. The observation that physical locks on front doors stop people stealing your stuff (to some degree) says nothing whatsoever about the technical feasibility of safely embedding private information in executables.

And ignoring the fact that argument by analogy is unsound, this particular analogy breaks down for the following reason. Physical locks are not impenetrable. The lock on your front door "works" because someone has to stand in front of your house visible from the road fiddling with your lock for a minute or so ... or banging it with a big hammer. Someone doing that is taking the risk that he / she will be observed, and the police will be called. Bank vaults "work" because the time required to penetrate them is a number of hours, and there are other alarms, security guards, etc. And so on. By contrast, a hacker can spend minutes, hours, even days trying to break your technical protection measures with effectively zero risk of being observed / detected doing it.

answered Sep 24 '22 by Stephen C