How to deploy MongoDB with persistent volume in Kubernetes?

I am trying to set up MongoDB as a standalone instance on minikube with a PersistentVolume, using basic auth. After applying the config, I can see the MongoDB service and pods up and running. I can also log in to the mongo shell with the username/password that was set up in Secrets, and successfully insert a sample document there.

But when I stop the pod (or delete and re-apply mongodb.yaml) and start it again, the database where I created the sample document is no longer listed, and so the sample document is gone as well.

  1. Can someone check whether I am setting up the volumes correctly to persist the Mongo data beyond the life of the pods?
  2. If the username/password is admin/admin, how can I connect to Mongo from MongoDB Compass on the Mac where minikube is running?
  3. If the username/password is admin/admin, how can I connect to Mongo from a Node.js application running in the same cluster?

Here is my configuration:

volume.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/mongo_data"

mongodb.yaml

apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: YWRtaW4=
  MONGO_INITDB_ROOT_PASSWORD: YWRtaW4=
kind: Secret
metadata:
  name: mongodb-secrets
type: Opaque
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: mongo-claim0
  name: mongo-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: mongo
  name: mongo
spec:
  serviceName: mongo
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secrets
              key: MONGO_INITDB_ROOT_USERNAME
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secrets
              key: MONGO_INITDB_ROOT_PASSWORD
        image: mongo
        imagePullPolicy: ""
        name: mongo
        ports:
        - containerPort: 27017
        resources: {}
        volumeMounts:
        - mountPath: /data/db
          name: mongo-claim0
      restartPolicy: Always
      serviceAccountName: ""  
      volumes:
      - name: mongo-claim0
        persistentVolumeClaim:
          claimName: mongo-claim0
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
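For reference, the values in the Secret above are just base64-encoded strings; both decode to admin. A quick sketch of encoding and decoding them with coreutils:

```shell
# Encode a plaintext value the way Kubernetes Secret data expects it
echo -n admin | base64                 # YWRtaW4=

# Decode the value stored in the Secret
echo -n YWRtaW4= | base64 --decode     # admin
```

Note the `-n` flag: without it, `echo` appends a newline that ends up inside the encoded credential.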
asked Jan 25 '23 by maopuppets


2 Answers

  1. Your mongo-claim0 PersistentVolumeClaim does not appear to match your mongo-pv PersistentVolume: the PVC has no storageClassName, so it will not bind to the manual PV. In any case, add storageClassName to your PVC and use a StorageClass that changes the PV reclaim policy from Delete to Retain:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        app: mongo-claim0
      name: mongo-claim0
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: manual # 👈
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: manual
    provisioner: k8s.io/minikube-hostpath
    reclaimPolicy: Retain # 👈
    volumeBindingMode: Immediate
    

    Note: local (hostPath) volumes like this are not officially supported for production use; they are fine for single-node setups such as minikube.

  2. Either of these works:

    kubectl port-forward service/mongo 27017:27017

    minikube service mongo --url also works, but it will give you a random local port.

  3. Since you have a single replica, you can use:

    mongodb://username:password@mongo:27017/dbname_?
    

    or

    mongodb://username:password@mongo.<K8s-namespace>.svc.cluster.local:27017/dbname_?
    
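A minimal Node.js sketch of building such a connection string. The helper name is illustrative, not from the answer, and the commented-out part assumes the official `mongodb` npm driver; `authSource=admin` is added because root users created via MONGO_INITDB_ROOT_USERNAME live in the admin database:

```javascript
// Build a MongoDB connection URI for a service reachable inside the cluster.
// encodeURIComponent guards against special characters in the credentials.
function buildMongoUri(user, pass, host, port, db) {
  return `mongodb://${encodeURIComponent(user)}:${encodeURIComponent(pass)}` +
         `@${host}:${port}/${db}?authSource=admin`;
}

const uri = buildMongoUri('admin', 'admin', 'mongo', 27017, 'testdb');
console.log(uri); // mongodb://admin:admin@mongo:27017/testdb?authSource=admin

// With the official driver (npm install mongodb), connecting would look like:
// const { MongoClient } = require('mongodb');
// const client = new MongoClient(uri);
// await client.connect();
```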

✌️

answered Jan 27 '23 by Rico


  1. You are using hostPath volumes, and hostPath volumes refer to directories on the node (VM/machine) where your Pod happens to be scheduled, so you would need to create that directory on the node first. The likely reason you cannot find your data is that the Pod was rescheduled onto a different node. To make sure the Pod is consistently scheduled on a specific node, set spec.nodeSelector in the Pod template.
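A minimal sketch of that change, assuming a hypothetical node label disktype=local-storage (apply it to the target node first, e.g. kubectl label nodes minikube disktype=local-storage, and check existing labels with kubectl get nodes --show-labels):

```yaml
# Fragment of the StatefulSet spec; only nodeSelector is new.
spec:
  template:
    spec:
      nodeSelector:
        disktype: local-storage   # hypothetical label; must match a label on the target node
```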

  2. Currently your Service is headless, so it can be reached from inside the cluster only. Expose the StatefulSet with

kubectl expose sts mongo --type=NodePort --port=xxx

and then run

minikube service mongo --url

Use the output to connect to your MongoDB from Compass, as it gives you an IP and a port to connect to.

  3. To use it from a Node.js application in the same cluster, use the connection string

"mongodb://mongo-0.mongo:27017/dbname_?"

answered Jan 27 '23 by Tarun Khosla