
Deploying PostgreSQL database with Kubernetes

I am confused about deploying the PostgreSQL database for my Django application with Kubernetes. Here is how I have constructed my deployment-definition.yml file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
  template:
    metadata:
      labels:
        app: postgres-container
        tier: backend
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6.6
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user

            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password

            - name: POSTGRES_DB
              value: agent_technologies_db
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-volume-mount
              mountPath: /var/lib/postgresql/data

      volumes:
        - name: postgres-volume-mount
          persistentVolumeClaim:
            claimName: postgres-pvc
        - name: postgres-credentials
          secret:
            secretName: postgres-credentials
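For reference, the postgres-credentials Secret referenced above can be created with a manifest along these lines (the values shown are placeholders, not real credentials):

apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
stringData:
  user: example_user          # placeholder
  password: example_password  # placeholder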

What I don't understand is this: if I specify (like I did) an existing PostgreSQL image inside the spec of a Kubernetes Deployment object, how do I actually run my application? What do I need to specify as HOST inside my settings.py file?

Here is what my settings.py file looks like for now:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'agent_technologies_db',
        'USER': 'stefan_radonjic',
        'PASSWORD': 'cepajecar995',
        'HOST': 'localhost', 
        'PORT': '',
        }
}

It is constructed this way because I am still designing the application and I do not want to deploy it to the Kubernetes cluster just yet. But when I do, what am I supposed to specify for HOST and PORT? And also, is this the right way to deploy PostgreSQL to a Kubernetes cluster?

Thank you in advance!

**** QUESTION UPDATE ****

As suggested, I have created service.yml:

apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres-container
    tier: backend
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  type: ClusterIP

And I have updated my settings.py file:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'agent_technologies_db',
        'USER': 'stefan_radonjic',
        'PASSWORD': 'cepajecar995',
        'HOST': 'postgres-service', 
        'PORT': 5432,
        }
}

But I am getting the following error:

[error screenshot from the original post, not reproduced here]

Asked by Stefan Radonjic


2 Answers

In order to allow communication to your PostgreSQL deployment in Kubernetes, you need to set up a Service object. If your Django app will live in the same cluster as your PostgreSQL deployment, then you will want a ClusterIP type service; otherwise, if your Django app lives outside of your cluster, you will want a LoadBalancer or NodePort type service.

There are two ways to create a service:

YAML

The first is through a yaml file, which in your case would look like this:

kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  selector:
    app: postgres-container
    tier: backend
  ports:
  - name: postgres
    protocol: TCP
    port: 5432
    targetPort: 5432

The .spec.selector field defines the target of the Service. This Service will target pods with the labels app=postgres-container and tier=backend, and it exposes port 5432 of the container. In your Django configuration, you would put the name of the Service as the HOST: in this case, the name is simply postgres (from within the same namespace; from another namespace you would use the fully qualified name, postgres.<namespace>.svc.cluster.local). Kubernetes resolves the Service name to the matching pod IP and routes traffic to the pod. The PORT will be the port of the Service: 5432.

kubectl expose

The other way of creating a service is through the kubectl expose command:

kubectl expose deployment/postgres

This command defaults to a ClusterIP type Service and exposes the ports defined in the .spec.template.spec.containers[].ports fields of the Deployment yaml.
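Under those defaults, the Service generated for your Deployment should look roughly like this (a sketch; defaulted fields may differ slightly):

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  selector:
    app: postgres-container
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP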

More info:

https://kubernetes.io/docs/concepts/services-networking/service/

And also, is this the right way to deploy PostgreSQL to a Kubernetes cluster?

This depends on a few variables. Do you plan on deploying a Postgres cluster? If so, you may want to look into using a StatefulSet (see the sketch after the excerpt below):

StatefulSets are valuable for applications that require one or more of the following.

  • Stable, unique network identifiers.
  • Stable, persistent storage.
  • Ordered, graceful deployment and scaling.
  • Ordered, graceful deletion and termination.
  • Ordered, automated rolling updates.

https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#using-statefulsets
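As a minimal sketch (not a production-ready manifest), a single-replica Postgres StatefulSet with a volumeClaimTemplate might look like this; the headless Service named postgres and the 1Gi storage size are assumptions:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres          # headless Service, assumed to exist
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
  template:
    metadata:
      labels:
        app: postgres-container
        tier: backend
    spec:
      containers:
      - name: postgres-container
        image: postgres:9.6.6
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PVC is created per replica automatically
  - metadata:
      name: postgres-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi           # assumed size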

Do you have someone knowledgeable about Postgres who is going to configure and maintain it? If not, I would also recommend that you look into deploying a managed Postgres server outside of the cluster (e.g. RDS). You can still deploy your Django app within the cluster and connect to your DB via an ExternalName service.
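For example, an ExternalName Service lets pods in the cluster reach an external database through a stable in-cluster DNS name (the RDS hostname below is a placeholder):

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ExternalName
  externalName: mydb.abc123xyz.us-east-1.rds.amazonaws.com  # placeholder endpoint

Your Django app would then use postgres as HOST, exactly as with an in-cluster Service.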

The reason I recommend this is that managing stateful applications in a Kubernetes cluster can be challenging. I'm not familiar with Postgres, but here's a cautionary tale of running Postgres on Kubernetes: https://gravitational.com/blog/running-postgresql-on-kubernetes/

In addition to that, here are a few experiences I've run into that have influenced my decision to remove stateful workloads from my cluster:

Stuck volumes

If you're using AWS EBS volumes, a volume can get "stuck" on a node and fail to detach and reattach if your DB pod gets rescheduled to a different node.

Migrating to a new cluster

If you ever need to move your workloads to a new cluster, you will have to deal with the added challenge of moving your state to the new cluster as well, without suffering any data loss. If you move your stateful apps outside of the cluster, you can treat the whole cluster as cattle; tearing it down and migrating to a new one becomes a whole lot easier.

More info:

K8s blog post on deploying Postgres with StatefulSets: https://kubernetes.io/blog/2017/02/postgresql-clusters-kubernetes-statefulsets/

Answered by erstaples


You have two cases.

1) Your application runs inside the Kubernetes cluster.

You need to reference your postgres pod through a service.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: postgres-container
    tier: backend
  name: postgres
spec:
  ports:
  - port: 5432
    protocol: TCP
  selector:
    app: postgres-container
    tier: backend
  sessionAffinity: None
  type: ClusterIP

Then use postgres (the Service name) wherever you need to specify your Postgres host.

2) Your application runs outside the Kubernetes cluster.

In this case you have to provide a way to get into the cluster from outside, either through a LoadBalancer or through an Ingress. In this case, too, you have to create a Service (see point 1).
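For example, a LoadBalancer Service would look like this (a sketch; it requires an environment that can actually provision load balancers, such as a cloud provider):

apiVersion: v1
kind: Service
metadata:
  name: postgres-lb
spec:
  type: LoadBalancer
  selector:
    app: postgres-container
    tier: backend
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP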


Here is an example with an Ingress.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tutorial
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my_kube.info
    http:
      paths:
      - path: /
        backend:
          serviceName: postgres   # must match the Service name from point 1
          servicePort: 5432

my_kube.info (or whatever name you choose) must be resolvable (via DNS, or by adding a line to /etc/hosts).
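For example, on a local setup you could add a line like this to /etc/hosts (the IP is a placeholder for your ingress controller's address):

192.168.99.100  my_kube.info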

If you need an HA Postgres manager, you may take a look at http://stolon.io/

Answered by Nicola Ben