I have a Kubernetes cluster running on Google Container Engine that defines a Pod running an NFS server, which I want to access from other Pods via various PersistentVolumes.
What is the best way to configure the NFS Service if it is in the same cluster?
According to the various documentation I've found, it is not possible to rely on kube-dns for this, because the node starting the Kubernetes pod is not configured to use it as its DNS.
So this is out of the question (and it really does not work; I've tested it with various different hostnames/FQDNs):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: xxx-persistent-storage
  labels:
    app: xxx
spec:
  capacity:
    storage: 10Gi
  nfs:
    path: "/exports/xxx"
    server: nfs-service.default.svc.cluster.local # <-- does not work
I can start the NFS server, check its ClusterIP via kubectl describe svc nfs-service, and then hardcode that endpoint IP in the PV (this works):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: xxx-persistent-storage
  labels:
    app: xxx
spec:
  capacity:
    storage: 10Gi
  nfs:
    path: "/exports/xxx"
    server: 10.2.1.7 # <-- does work
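If you only need the IP itself rather than the full describe output, a jsonpath query is one way to script the lookup (assuming the Service is named nfs-service and lives in the default namespace, as above):

# Print just the ClusterIP of the NFS Service
kubectl get svc nfs-service -o jsonpath='{.spec.clusterIP}'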
But this feels wrong: as soon as I need to recreate the NFS Service, I'll get a new IP and will have to reconfigure all the PVs based on it.
What is the best practice here? I'm surprised I did not find any example for this, because I assumed it's quite a normal thing to do, isn't it?
Is it possible to set some kind of static IP for a Service, so that I can rely on the NFS Service always having the same IP?
You are on the right track. To make sure that your Service uses a static IP, just add clusterIP: 1.2.3.3 under the spec: section of the Service.
From the canonical example:
In the future, we'll be able to tie these together using the service names, but for now, you have to hardcode the IP.
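As a rough sketch, the Service in front of the NFS server could pin its ClusterIP like this (the selector, the ports, and the IP itself are assumptions for illustration; the pinned IP must lie inside the cluster's service IP range and not already be in use):

apiVersion: v1
kind: Service
metadata:
  name: nfs-service
spec:
  clusterIP: 10.3.240.20   # pinned IP; must be within the cluster's service CIDR
  selector:
    role: nfs-server       # assumed label on the NFS server Pod
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111

The PV's server: field can then reference that fixed IP, and it will keep working even if the Service is deleted and recreated.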