Is there a way to mount a Kerberos authenticated NFS server inside a Kubernetes pod as the user who created the pod?
We use FreeIPA for user management, and we have a Kubernetes cluster set up for training our deep learning models. Our data lives on an NFS share that is authenticated with Kerberos, and what we are trying to achieve is to mount that share inside each pod as the user who created the pod.
We are using GKE for Kubernetes, and our NFS server is in the same VPC.
This is how I do it.
The idea is to run a small Kerberos sidecar next to the workload container: Vault injects the user's keytab, the sidecar uses it to obtain a TGT and keeps renewing it, and the credential cache is shared with the main container through a KCM socket, while the NFS volume itself is mounted with the sec=krb5 option.
With all this in mind, you first write down your Dockerfile for the krb5-sidecar:
FROM centos:centos7
# install the kerberos client tools
RUN yum install -y krb5-workstation && \
mkdir /krb5 && chmod 755 /krb5
# add resources, the kinit script and the default krb5 configuration
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Little trick here that will allow my container to remove
# the vault secrets without root
RUN chmod u+s /usr/bin/rm
ENTRYPOINT ["/entrypoint.sh"]
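If you build the image yourself, something like this should do (the tag matches the one the Deployment below expects; the registry prefix is only an example, push wherever your cluster pulls images from):
docker build -t krb5-sidecar:0.1.0 .
# optional: push to a registry reachable from the nodes, e.g. GCR / Artifact Registry
docker tag krb5-sidecar:0.1.0 gcr.io/YOUR_PROJECT/krb5-sidecar:0.1.0
docker push gcr.io/YOUR_PROJECT/krb5-sidecar:0.1.0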
And this is the entrypoint script; it decodes the keytab that Vault drops under /vault/secrets, obtains the TGT, and then keeps renewing it:
#!/bin/bash

# Default interval for renewing the TGT ticket
KERBEROS_RENEWAL_TIME=86400 # One day
# Decode the base64-encoded keytab injected by Vault into a real keytab file
echo "Generating keytab file"
cat /vault/secrets/${USERNAME}.keytab | cut -d' ' -f2 | base64 -d > /etc/${USERNAME}.keytab
# Get the TGT
echo "Loading keytab"
kinit -kt /etc/${USERNAME}.keytab ${USERNAME}@${REALM}
# Remove secrets for security reasons
rm -rf /vault/secrets/*
rm -rf /etc/${USERNAME}.keytab
echo "Secrets removed from tmpfs"
while :;
do
kinit -R
sleep ${KERBEROS_RENEWAL_TIME}
done
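For reference, this is roughly how the keytab could be stored in Vault so the agent can inject it; the KV path just has to match the agent-inject-secret annotation used in the Deployment below, and the value is base64-encoded so that the cut | base64 -d pipeline above can decode it (the path and key name here are assumptions, not something from my setup):
# hypothetical example: put the base64-encoded keytab under the path the
# vault.hashicorp.com/agent-inject-secret-userKeytab annotation points to
vault kv put user/keytabs/user keytab="$(base64 -w0 user.keytab)"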
Of course you need to create PersistentVolumes and PersistentVolumeClaims for the deployment.
PersistentVolume (note that the PV and PVC must agree on the storage class so they bind to each other):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol
spec:
  capacity:
    storage: 3Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - sec=krb5
  nfs:
    path: /exports
    server: nfs.server.test
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-vol
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
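Applying both and checking that the claim actually binds is a quick sanity check before moving on to the Deployment (the file names are just examples):
kubectl apply -f nfs-pv.yaml -f nfs-pvc.yaml
kubectl get pv,pvc   # the PVC should end up with STATUS Bound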
And finally the Deployment. The mount-nfs-container has no real work of its own here, so it just needs a long-running command to keep the pod alive, e.g. command: ["/bin/sh"] with args: ["-c", "/usr/bin/sleep 3600000"]:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-user
spec:
  selector:
    matchLabels:
      test: test
  template:
    metadata:
      labels:
        test: test
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-inject-secret-userKeytab: 'user/keytabs/user'
        vault.hashicorp.com/role: 'nfs'
        vault.hashicorp.com/ca-cert: 'certs/ca.crt'
        vault.hashicorp.com/tls-secret: 'tls-ca'
        vault.hashicorp.com/agent-pre-populate-only: "true"
    spec:
      securityContext:
        # The UID/GID the pod runs as; this user must exist on the NFS server
        runAsUser: 2500
        runAsGroup: 2500
      # This may or may not be needed, depending on your DNS setup
      hostAliases:
      - ip: "192.168.111.130"
        hostnames:
        - "ipa"
        - "ipa.server"
      - ip: "192.168.111.131"
        hostnames:
        - "nfs"
        - "nfs.server"
      restartPolicy: Always
      volumes:
      - name: nfs-user
        persistentVolumeClaim:
          claimName: nfs-vol
      - name: krb5
        configMap:
          name: keos-kerberos-config
      - name: kcmsocket
        hostPath:
          path: /var/run/.heim_org.h5l.kcm-socket
          type: File
      containers:
      - name: krb5-sidecar
        image: krb5-sidecar:0.1.0
        env:
        - name: KRB5CCNAME
          value: "KCM:"
        - name: USERNAME
          value: user
        - name: REALM
          value: server
        volumeMounts:
        - name: krb5
          mountPath: "/etc/krb5.conf"
          subPath: "krb5.conf"
        - name: kcmsocket
          mountPath: "/var/run/.heim_org.h5l.kcm-socket"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/bin/kdestroy"]
      - name: mount-nfs-container
        image: nfs-centos:0.2.0
        env:
        - name: KRB5CCNAME
          value: "KCM:"
        volumeMounts:
        - name: nfs-user
          mountPath: "/nfs"
        - name: krb5
          mountPath: "/etc/krb5.conf"
          subPath: "krb5.conf"
        - name: kcmsocket
          mountPath: "/var/run/.heim_org.h5l.kcm-socket"