We use Docker containers for most of our work, including development on our own machines. These are ephemeral (started each time we run a test, for example).
For AWS, the auth is easy - we have our keys in our environment, and those are passed through to the container.
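For example (assuming the standard AWS variable names), the pass-through looks roughly like this:
# pass the AWS credentials from the host environment into the container;
# "-e VAR" with no value copies the host's value of VAR into the container
docker run -ti \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  my-image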
We're starting to use Google Cloud services, and the auth path seems harder than with AWS. When doing local development, gcloud auth login works well. But when working in an ephemeral container, the login process would be needed each time, and I haven't found a way of persisting user credentials using either a) environment variables or b) mapped volumes - which are the two ways of passing data to containers.
From what I can read, the only path is to use service accounts. But then, I think, everyone needs their own service account and has to keep that account's permissions aligned with their own.
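For reference, the service-account route I'm describing would be something like the following sketch (the key file path and image name are placeholders):
# mount a service-account key file and point GOOGLE_APPLICATION_CREDENTIALS
# at it so Google client libraries in the container can find it
docker run -ti \
  -v "$PWD/service-account-key.json:/secrets/key.json:ro" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/key.json \
  my-image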
Is there a better way?
When gsutil is installed/used via the Cloud SDK ("gcloud"), credentials are stored by the Cloud SDK in a non-user-editable file located under ~/.config/gcloud (any manipulation of credentials should be done via the gcloud auth command).
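So, on the host, you can see where those credentials live and manage them through the CLI rather than by editing files:
# the Cloud SDK keeps credentials and config under this directory on the host
ls ~/.config/gcloud
# list the credentialed accounts known to gcloud
gcloud auth list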
Ephemeral storage types (for example, emptyDir) do not persist after the Pod ceases to exist; they are useful as scratch space for applications, and you can manage local ephemeral storage resources the same way you manage CPU and memory resources. Other volume types, such as persistent volumes, are backed by durable storage.
The easiest way to make a local container see the gcloud credentials is probably to map the file-system location of the application default credentials into the container.
First, do
gcloud auth application-default login
Then, run your container as
docker run -ti -v=$HOME/.config/gcloud:/root/.config/gcloud test
This should work. I tried it with a Dockerfile like
FROM node:4
RUN npm install --save @google-cloud/storage
ADD test.js .
CMD node ./test.js
and the test.js file like
// the module export is callable in the library version used here;
// with application default credentials mounted, no explicit key is needed
var storage = require('@google-cloud/storage');

var gcs = storage({
  projectId: 'my-project-515',
});

var bucket = gcs.bucket('my-bucket');

// list the objects in the bucket to verify that authentication works
bucket.getFiles(function(err, files) {
  if (err) {
    console.log("failed to get files: ", err);
  } else {
    for (var i in files) {
      console.log("file: ", files[i].name);
    }
  }
});
and it worked as expected.
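A quick way to check that the mounted credentials are visible inside a container, without building anything, is a throwaway run of the google/cloud-sdk image (assuming you can pull that image):
# mount the host's gcloud config and print an access token from inside the
# container; if this succeeds, the application default credentials are visible
docker run --rm -ti \
  -v $HOME/.config/gcloud:/root/.config/gcloud \
  google/cloud-sdk \
  gcloud auth application-default print-access-token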
I had the same issue, but I was using docker-compose. This was solved by adding the following to docker-compose.yml:
volumes:
- $HOME/.config/gcloud:/root/.config/gcloud
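In context, that fragment sits under a service definition; a minimal sketch (the service and image names are placeholders):
version: '3'
services:
  app:
    image: my-image   # placeholder image name
    volumes:
      # mount the host's gcloud config so application default credentials
      # are visible inside the container
      - $HOME/.config/gcloud:/root/.config/gcloud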