Is passing application configuration in stdin a secure alternative to environment variables?

I'm trying to figure out the best approach to web application configuration. The goals are:

  1. Configurability. Force the configuration to be specified at deploy time. Make sure the configuration is kept separate from the code or deployable artifact.
  2. Security. Keep secrets from leaking out of the deployment environment.
  3. Simplicity. Make sure the solution is simple and natural to the concept of an OS process.
  4. Flexibility. Make no assumptions about where the configuration is stored.

According to the twelve-factor app methodology, web application configuration is best provided in environment variables. That is simple and flexible, but there seem to be some security concerns with it.

Another approach could be to pass all the configuration as command line arguments. This again is simple, flexible and natural to the OS, but the whole configuration then becomes visible in the host's process list. This might or might not be an issue (I'm no OS expert), but the solution is cumbersome at the least.

A hybrid approach is taken by the popular framework Dropwizard, where a command line argument specifies the config file location and the config is read from there. The trouble is that this breaks the flexibility constraint by making assumptions about where my configuration lives (a local file). It also makes my application implement file access which, while easily achieved in most languages/frameworks/libraries, is not inherently simple.

I was thinking of another approach: pass all the configuration on the application's stdin. One could then do cat my-config-file.yml | ./my-web-app for a locally stored file, or even wget -qO- https://secure-config-provider/my-config-file.yml | ./my-web-app (wget needs -O- to write to stdout rather than to a file). Piping seems simple and native to OS processes. It also seems extremely flexible, as it delegates the question of how the config is provided to the host OS.

The question is whether this conforms to the security constraint. Is it safe to assume that once piped content has been consumed, it is permanently gone?

I wasn't able to google anyone trying this hence the question.

asked May 05 '16 by maciekszajna

1 Answer

Writing secrets into the stdin of a process is more secure than environment variables - if done correctly.

In fact, it is the most secure way I know of to pass secrets from one process to another - again if done correctly.

Of course, this applies to any file-like input that has no file system presence and cannot otherwise be opened by other processes; stdin is just the one instance of that which is available by default and easy to write to.
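As a toy illustration of that property, here is a secret carried over an anonymous pipe between two shell commands (the value is made up). Once read drains the pipe, the data exists only in the reader's memory — never in argv, in the environment, or on disk:

```shell
# Carry a secret over an anonymous pipe; once the reader consumes
# it, the bytes exist only in the reader's memory.
printf 'secret-value\n' | {
  read -r config
  # The pipe is drained here: nothing landed in argv, in the
  # environment, or on the file system.
  echo "got ${#config} bytes of config"
}
```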

Anyway, the key thing with environment variables, as the post you linked describes, is that once you put something into the environment variables it leaks into all child processes, unless you take care to clean it up.

But also, it's possible for other processes running as your user, or as any privileged/administrative user, to inspect the environment of your running process.

For example, on Linux, take a look at the files /proc/*/environ. That file exists for each running process, and you can inspect its contents for any process that is running as your user. If you are root, you can look at the environ of any process of any user.
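You can see this for yourself on any Linux box; the following inspects the environment of the current shell, and any PID you own works the same way:

```shell
# /proc/<pid>/environ holds a process's startup environment as
# NUL-delimited KEY=value pairs; translate NULs to newlines to read it.
tr '\0' '\n' < "/proc/$$/environ" | head -n 5
```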

This means that any local code execution exploit, even some unprivileged ones, could get access to your secrets in your environment variables, and it is very simple to do so. Still better than having them in a file, but not by much.

But when you pipe things into the stdin for a process, outside processes can only intercept that if they are able to use the debugging system calls to "attach" to the process, and monitor system calls or scan its memory. This is a much more complex process, it's less obvious where to look, and most importantly, it can be secured more.

For example, Linux can be configured to prevent unprivileged processes from invoking the debugger system calls on other same-user processes that they didn't start, and some distros are starting to turn this on by default.
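One such mechanism is the Yama security module's ptrace scope. On kernels that ship it you can check the current setting like this (a sketch, assuming a Linux system with /proc mounted):

```shell
# Yama ptrace scope: 0 = any same-UID process may attach,
# 1 = only direct ancestors may attach (a common default),
# 2 = CAP_SYS_PTRACE only, 3 = attaching disabled entirely.
cat /proc/sys/kernel/yama/ptrace_scope 2>/dev/null \
  || echo "Yama LSM not enabled on this kernel"
```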

This means that properly writing data to stdin is, in almost all cases, at least as secure as using an environment variable, and usually more so.


Note, however, that you have to "do it correctly". For example, these two won't give you the same security benefits:

  • my-command </some/path/my-secret-config
  • cat /some/path/my-secret-config | my-command

Because the secret still exists on disk, you get more flexibility but not more security. (If, however, the cat is actually a sudo cat, or otherwise has more privileged access to the file than my-command, then it could be a security benefit.)

Now let's look at a more interesting case:

  • echo "$my_secret_data" | my-command

Is this more or less secure than an environment variable? It depends:

If you are calling this inside a typical shell, then echo is probably a "builtin", which means the shell never needs to execute an external echo program, and the value stays within the shell's memory before being written to stdin.

But if you invoke something like this from outside a shell, then it is actually a big security leak, because it puts the secret into the command line of the executed external echo program, which on many systems can be seen by any other running process, even those of other unprivileged users!

So as long as you understand that, and use the right functionality to make sure you are writing directly from whatever has the credentials to your process, stdin is probably the most secure option you have.


TL;DR: stdin can give you a much smaller "surface area" for the data to leak, which means that it can help you get more security, but whether or not you do depends on exactly how you're using it and how the rest of your system is set up.

Personally, I use stdin for passing in secret data whenever I can.

answered Oct 16 '22 by mtraceur