
Can't connect to MariaDB by hostname within a Kubernetes cluster

If I set up MariaDB from the official image within a Docker Compose configuration, I can access it by its host name, for example from a bash shell within the MariaDB container:

# host db
db has address 172.21.0.2


# curl telnet://db:3306
Warning: Binary output can mess up your terminal. Use "--output -" to tell 
Warning: curl to output it to your terminal anyway, or consider "--output 
Warning: <FILE>" to save to a file.
  • no connection refused issue here
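
For reference, a minimal Docker Compose sketch of the kind of setup described above could look roughly like this (the credentials and volume name are illustrative and simply mirror the Kubernetes manifests further down, not copied from the actual Compose file):

version: "3.7"
services:
  db:
    image: mariadb:10.4
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: template
      MYSQL_PASSWORD: template
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata: {}

Any other service in the same Compose file can then reach the database at db:3306, because Compose puts all services on a shared network where the service name doubles as the host name.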

But if I have MariaDB deployed from the official image within a Kubernetes cluster (I've tried both MicroK8s and GKE), I can connect to it via localhost but not by its host name:

# host db
db.my-namespace.svc.cluster.local has address 10.152.183.124

# curl telnet://db:3306
curl: (7) Failed to connect to db port 3306: Connection refused

# curl telnet://localhost:3306
Warning: Binary output can mess up your terminal. Use "--output -" to tell 
Warning: curl to output it to your terminal anyway, or consider "--output 
Warning: <FILE>" to save to a file.
  • connection is refused for the service host name, but localhost responds
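
Testing against the ClusterIP directly (the address returned by the host lookup above) can also help separate name resolution from connectivity:

# curl telnet://10.152.183.124:3306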

I've tried to replace the included my.cnf with a simplified version like:

[mysqld]
skip-grant-tables
skip-networking=0
#### Unix socket settings (making localhost work)
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock

#### TCP Socket settings (making all remote logins work)
port         = 3306
bind-address = *
  • with no luck
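
In case it is useful, one way to inject such a custom my.cnf into the pod is a ConfigMap mounted into the image's include directory; a rough sketch, assuming the official image reads extra config from /etc/mysql/conf.d (the ConfigMap name and mount path are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  my.cnf: |
    [mysqld]
    port         = 3306
    bind-address = *

The Deployment's container would then mount it via a volumeMounts entry (subPath: my.cnf, mountPath: /etc/mysql/conf.d/my.cnf) backed by a configMap volume referencing db-config.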

The MariaDB Kubernetes deployment is like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: db
  template:
    metadata:
      labels:
        name: db
    spec:
      containers:
      - env:
        - name: MYSQL_PASSWORD
          value: template
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_USER
          value: template
        image: mariadb:10.4
        name: db
        ports:
        - containerPort: 3306
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: dbdata
      restartPolicy: Always
      volumes:
      - name: dbdata
        persistentVolumeClaim:
          claimName: dbdata
status: {}

and the corresponding Persistent Volume Claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    io.kompose.service: dbdata
  name: dbdata
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}

It baffles me that the same configuration works with Docker Compose but not within a Kubernetes cluster.

Any ideas what may be going on?

Update 2020-03-18: I forgot to include the service declaration for the database, so I'm adding it here:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  ports:
  - name: "3306"
    port: 3306
    targetPort: 3306
  selector:
    app: db
    name: db
  type: ClusterIP
status:
  loadBalancer: {}

...I'm including both app and name in spec.selector. I'm used to having only name, but @Al-waleed Shihadeh's example includes app, so I'm including that as well, just in case - but without success.
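
One way to verify whether that combined selector actually matches the pod labels is a label query against the pods; if it returns nothing, the Service will have no endpoints:

$ sudo microk8s.kubectl get pods -l app=db,name=db -n my-namespace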

Here are outputs from a couple of kubectl listing commands:

$ sudo microk8s.kubectl get svc db -n my-namespace
NAME   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
db     ClusterIP   10.152.183.246   <none>        3306/TCP   35m
$ sudo microk8s.kubectl get pods -owide -n my-namespace
NAME                           READY   STATUS             RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
db-77cbcf87b6-l44lm            1/1     Running            0          34m   10.1.48.118   microk8s-vm   <none>           <none>
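
A related check is whether the Service has picked up any endpoints at all; an empty ENDPOINTS column would mean the selector does not match the running pod:

$ sudo microk8s.kubectl get endpoints db -n my-namespace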

Solution: Comparing with the service declaration posted by KoopaKiller, which proved to work, I finally noticed that my ports declaration was missing the protocol attribute set to "TCP", i.e. this part:

spec:
  ports:
  - protocol: TCP
...
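
For completeness, the full Service with that attribute added would look roughly like this (selector reduced to the name label, which is the only label the Deployment's pod template carries, matching KoopaKiller's working example):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  ports:
  - name: "3306"
    protocol: TCP
    port: 3306
    targetPort: 3306
  selector:
    name: db
  type: ClusterIP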

asked Mar 17 '20 by Bjorn Thor Jonsson


2 Answers

Since you are using a Kubernetes Deployment, the names of your pods are generated dynamically from the name you gave in the spec file; in your example, the pods will be created with names like db-xxxxxxxxxx-xxxxx.

In order to get a 'fixed' hostname, you need to create a Service to reach your pods, for example:

apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    name: db
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  type: ClusterIP

And to check whether it was successfully deployed:

$ kubectl get svc db
NAME   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
db     ClusterIP   10.96.218.18   <none>        3306/TCP   89s

The full name of your service will be <name>.<namespace>.svc.cluster.local; in this case, using the default namespace, that is db.default.svc.cluster.local, pointing to the IP 10.96.218.18 as shown in the example above.

To reach your service you need to configure your /etc/hosts with this information:

echo -e "10.96.218.18\tdb.default.svc.cluster.local db db.default" >> /etc/hosts

After that you will be able to reach your service by DNS:

$ dig +short db
10.96.218.18

$ mysql -h db -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.5.5-10.4.12-MariaDB-1:10.4.12+maria~bionic mariadb.org binary distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

Just so you know, you could also use a Helm chart to set up MariaDB with replication. See this article.
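
For example, with the Bitnami chart something along these lines stands up MariaDB with a primary and replicas; treat it as a sketch, since chart and value names can vary between chart versions:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-mariadb bitnami/mariadb --set architecture=replication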

References:

https://kubernetes.io/docs/concepts/services-networking/service/

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

answered Sep 19 '22 by Mr.KoopaKiller


To be able to access the service from the host node, you need to define a Service object in Kubernetes.

So the complete k8s objects should look like the snippets below.

PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: db-data
  name: db-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}

Service

apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql
  type: ClusterIP

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: dummy
        - name: MYSQL_DATABASE
          value: community_db
        resources: {}
        volumeMounts:
          - mountPath: /var/lib/mysql
            name: db-data
        image: mysql:5.7
        ports:
        - containerPort: 3306
      volumes:
      - name: db-data
        persistentVolumeClaim:
          claimName: db-data
      restartPolicy: Always
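
To verify the setup from inside the cluster, a throwaway client pod can be used; the image, password, and database below simply mirror the manifests above:

$ kubectl run mysql-client --rm -it --image=mysql:5.7 --restart=Never -- mysql -h mysql -uroot -pdummy community_db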

answered Sep 19 '22 by Al-waleed Shihadeh