I am trying to set up MongoDB as a standalone instance on minikube, with a persistent volume and basic auth. After applying the config, I can see the MongoDB service and pod up and running. I can also log in to the mongo shell with the username/password that was set up in the secret, and successfully insert a sample document there.
But when I stop the pod (or delete and re-apply mongodb.yaml) and start it again, the database where I created the sample document is no longer listed, so the sample document is gone as well.
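For concreteness, this is roughly the sequence that loses the data (a sketch: `mongo-0` is the pod name the StatefulSet below produces, `admin`/`admin` are the decoded secret values, and `mongosh` assumes a reasonably recent `mongo` image):

```shell
# insert a sample document
kubectl exec -it mongo-0 -- mongosh -u admin -p admin
#   use testdb
#   db.sample.insertOne({ hello: "world" })

# force a restart; the StatefulSet recreates the pod
kubectl delete pod mongo-0

# the database and its document are no longer listed
kubectl exec -it mongo-0 -- mongosh -u admin -p admin --eval "db.adminCommand({ listDatabases: 1 })"
```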
Here is my configuration:

volume.yaml

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mongo_data"
```
mongodb.yaml

```yaml
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: YWRtaW4=
  MONGO_INITDB_ROOT_PASSWORD: YWRtaW4=
kind: Secret
metadata:
  name: mongodb-secrets
type: Opaque
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: mongo-claim0
  name: mongo-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: mongo
  name: mongo
spec:
  serviceName: mongo
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secrets
                  key: MONGO_INITDB_ROOT_USERNAME
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secrets
                  key: MONGO_INITDB_ROOT_PASSWORD
          image: mongo
          imagePullPolicy: ""
          name: mongo
          ports:
            - containerPort: 27017
          resources: {}
          volumeMounts:
            - mountPath: /data/db
              name: mongo-claim0
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: mongo-claim0
          persistentVolumeClaim:
            claimName: mongo-claim0
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
```
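For reference, the secret values are plain base64 and decode to `admin`/`admin`:

```shell
echo 'YWRtaW4=' | base64 -d    # -> admin
# or read it back from the cluster:
kubectl get secret mongodb-secrets -o jsonpath='{.data.MONGO_INITDB_ROOT_USERNAME}' | base64 -d
```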
I don't see the mongo-claim0 PersistentVolumeClaim matching your mongo-pv ❓: without a storageClassName, the claim won't bind to your manual PV. In any case, add this ➕ to your PVC, and change the PV reclaim policy from Delete to Retain:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: mongo-claim0
  name: mongo-claim0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual # 👈
  resources:
    requests:
      storage: 1Gi
```
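To check whether the claim actually bound to mongo-pv (rather than to a dynamically provisioned volume), something like:

```shell
kubectl get pvc mongo-claim0   # STATUS should be Bound, VOLUME should be mongo-pv
kubectl get pv mongo-pv        # CLAIM should be <namespace>/mongo-claim0
```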
and create a matching StorageClass with the Retain reclaim policy:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain # 👈
volumeBindingMode: Immediate
```
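If the PV already exists, you can also flip its reclaim policy in place; this is a sketch of the documented `kubectl patch` approach:

```shell
kubectl patch pv mongo-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv mongo-pv -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'   # -> Retain
```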
Note: Local volumes are not supported officially.
These work for local access:

```shell
kubectl port-forward service/mongo 27017:27017
```

`minikube service mongo --url` also works, but it will give you a random local port.
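With the port-forward running, a local client can connect; a sketch, assuming `mongosh` is installed locally and the `admin`/`admin` credentials from the secret:

```shell
mongosh "mongodb://admin:admin@localhost:27017/admin"
```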
Since you have one replica, inside the cluster you can use:

```
mongodb://username:password@mongo:27017/dbname_?
```

or

```
mongodb://username:password@mongo.<K8s-namespace>.svc.cluster.local:27017/dbname_?
```

✌️
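You can also sanity-check those in-cluster DNS names from a throwaway client pod (`mongo-client` is an arbitrary name):

```shell
kubectl run -it --rm mongo-client --image=mongo --restart=Never -- \
  mongosh "mongodb://admin:admin@mongo:27017/admin"
```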
You are using hostPath volumes, and a hostPath volume refers to a directory on the node (VM/machine) where your pod happens to be scheduled, so you'd need to create that directory at least on that node. A possible reason you can't find your data is that the pod got rescheduled onto a different node. To make sure the pod is consistently scheduled on one specific node, set spec.nodeSelector in the PodTemplate, as sketched below.
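A minimal sketch of that nodeSelector, assuming the default single-node minikube whose node carries the label `kubernetes.io/hostname=minikube` (verify with `kubectl get nodes --show-labels`):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: minikube
```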
Currently your service is headless, so it can be reached from inside the cluster only. Expose it with:

```shell
kubectl expose sts mongo --type=NodePort --port=xxx
```

and then run:

```shell
minikube service mongo --url
```

Use the output to connect to your MongoDB from Compass: it gives you an IP and port, which takes the place of the in-cluster address

```
"mongodb://mongo-0.mongo:27017/dbname_?"
```