What is the best way to deploy Google service account credentials inside a custom-built CentOS Docker container for running on either Google Container Engine or a 'container-vm'? This behavior happens automatically in the google/cloud-sdk container, which runs Debian and includes components I'm not using, such as App Engine/Java/PHP. Ideally I want to access non-public resources inside my project, e.g., Google Cloud Storage bucket objects, without logging in and authorizing every single time a large number of these containers are launched.
For example, on a base CentOS container running on GCE with custom code and gcloud/gsutil installed, when you run:
docker run --rm -ti custom-container gsutil ls
You are prompted to run "gsutil config" to gain authorization, which I expect.
However, pulling down the google/cloud-sdk container onto the same GCE instance and executing the same command, it seems to have cleverly configured inheritance of credentials (perhaps from the host container-vm's credentials?). This bypasses running "gsutil config" when the container runs on GCE and accesses private resources.
I am looking to replicate that behavior in a minimal-build CentOS container for mass deployment.
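For context on what the google/cloud-sdk container is likely doing: containers on a GCE VM share the VM's network, so they can reach the GCE metadata server and fetch the VM service account's OAuth 2.0 token directly; gcloud/gsutil use this mechanism when running on GCE. A minimal sketch you can run inside any container on the VM to confirm the credentials are reachable:

```shell
# Query the GCE metadata server for the default service account's
# access token. This works from inside a container because the
# metadata server (metadata.google.internal) is reachable over the
# VM's network -- no "gsutil config" login step is required.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
# The response is JSON with access_token, expires_in, and token_type.
```

If that request succeeds from your CentOS container, gsutil installed in the same container should be able to pick up GCE service account credentials the same way, provided the VM was created with the right scopes.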
Update: as of 15 Dec 2016, the ability to update the scopes of an existing VM is now in beta; see this SO answer for more details.
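As a hedged sketch of that beta capability (the exact command name here is assumed from current gcloud documentation, not from the linked answer; `my-vm` and the zone are hypothetical):

```shell
# Scopes can only be changed while the instance is stopped.
gcloud compute instances stop my-vm --zone us-central1-a

# Reassign scopes on the existing VM (beta at the time of this update).
gcloud beta compute instances set-service-account my-vm \
  --zone us-central1-a \
  --scopes storage-rw

gcloud compute instances start my-vm --zone us-central1-a
```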
Old answer: One approach is to create the VM with appropriate scopes (e.g., Google Cloud Storage read-only or read-write) and then all processes on the VM, including containers, will have access to credentials that they can use via OAuth 2.0; see docs for Google Cloud Storage and Google Compute Engine.
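A sketch of creating such a VM with gcloud (the instance name and zone are hypothetical; `storage-ro`/`storage-rw` are the documented scope aliases for Cloud Storage read-only and read-write):

```shell
# Create the VM with a Cloud Storage scope at creation time; every
# process on the VM, including Docker containers, can then obtain
# OAuth 2.0 credentials from the metadata server under this scope.
gcloud compute instances create my-container-vm \
  --zone us-central1-a \
  --scopes storage-ro    # use storage-rw for read-write access
```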
Note that once a VM is created with some set of scopes, they cannot be changed later (neither added nor removed), so be sure to set the right scopes at VM instance creation time.