I've followed the steps at https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine to set up MySQL user accounts and service accounts. I've downloaded the JSON file containing my credentials.
My issue is that in the code I copied from the site:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  securityContext:
    runAsUser: 2  # non-root user
    allowPrivilegeEscalation: false
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
the path /secrets/cloudsql/credentials.json is referenced, and I have no idea where it comes from.
I think I'm supposed to create the credentials as a secret via
kubectl create secret generic cloudsql-instance-credentials --from-file=k8s\secrets\my-credentials.json
But after that I have no idea what to do. How does this secret become the path /secrets/cloudsql/credentials.json?
You have to add a volumes entry under the spec, like so:
volumes:
  - name: cloudsql-instance-credentials
    secret:
      defaultMode: 420
      secretName: cloudsql-instance-credentials
Note: This belongs to the deployment's pod spec, not the container spec.
Edit: Further information can be found here: https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-pod-that-has-access-to-the-secret-data-through-a-volume; thanks to shalvah for pointing that out.
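To make the placement concrete, here is a minimal sketch of how the pieces fit together in a Deployment. The deployment name and labels are placeholders, and the proxy container is abbreviated to the lines from the question; the point is that volumes sits at the pod-spec level (spec.template.spec), as a sibling of containers:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # placeholder name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: cloudsql-proxy        # the sidecar from the question
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql   # the secret's keys appear as files here
              readOnly: true
      volumes:                          # pod-level, sibling of containers
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials

The volume name is what ties the volumeMounts entry in the container to the volumes entry in the pod spec.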
You can mount ConfigMaps or Secrets as files inside a pod's containers and read them at runtime. But to do that, you need to set up two things:
- .spec.volumes in the pod (if you deploy the pod using a Deployment, add the volume under .spec.template.spec.volumes)
- .spec.containers[].volumeMounts
Ref: the official Kubernetes docs
Here is a sample for your use case:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  securityContext:
    runAsUser: 2  # non-root user
    allowPrivilegeEscalation: false
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
volumes:   # pod-level field (spec.template.spec.volumes in a Deployment)
  - name: cloudsql-instance-credentials
    secret:
      defaultMode: 511
      secretName: cloudsql-instance-credentials
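One detail worth checking, since the question creates the secret from a file named my-credentials.json: kubectl uses the file's basename as the key, and each key in the secret becomes a file under the mount path. With the command from the question, the file would therefore show up as /secrets/cloudsql/my-credentials.json, not credentials.json. You can set the key explicitly when creating the secret (the path below is the one from the question, and the pod name in the second command is a placeholder):

kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=k8s\secrets\my-credentials.json

# verify what actually got mounted into the sidecar
kubectl exec <pod-name> -c cloudsql-proxy -- ls /secrets/cloudsql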
The current answers are good, but I wanted to provide a more complete example. This came verbatim from some of the old Google docs from two years ago (which no longer exist). Replace @@PROJECT@@ and @@DBINST@@ with your own values.
The volumes section loads a secret, then volumeMounts makes it visible to the postgres-proxy container at /secrets/cloudsql:
spec:
  volumes:
    - name: cloudsql-oauth-credentials
      secret:
        secretName: cloudsql-oauth-credentials
    - name: cloudsql
      emptyDir: {}
  containers:
    - name: postgres-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.09
      imagePullPolicy: Always
      command: ["/cloud_sql_proxy",
                "--dir=/cloudsql",
                "-instances=@@PROJECT@@:us-central1:@@DBINST@@=tcp:5432",
                "-credential_file=/secrets/cloudsql/credentials.json"]
      volumeMounts:
        - name: cloudsql-oauth-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: cloudsql
          mountPath: /cloudsql
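Since the proxy listens on tcp:5432, your application container in the same pod reaches the database over localhost. A minimal sketch of such a sibling container under the same containers list (the image and environment variable names here are placeholders, not from the original docs):

    - name: app
      image: gcr.io/@@PROJECT@@/my-app:latest   # placeholder application image
      env:
        - name: DB_HOST
          value: "127.0.0.1"   # containers in a pod share the network namespace
        - name: DB_PORT
          value: "5432"        # the port the proxy exposes above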