Cannot connect to a SQL Server database hosted on localhost from Kubernetes, how can I debug this?

I am trying to deploy an ASP.NET Core 2.2 application in Kubernetes. The application is a simple web page that needs access to a SQL Server database to display some information. The database is hosted on my local development computer (localhost), and the web application is deployed in a minikube cluster to simulate a production environment where the application could run in the cloud and access a remote database.

I managed to reach my web application by exposing port 80. However, I can't figure out how to make it connect, from inside the cluster, to the SQL Server database hosted on my local computer.

I assume my connection string is correct, since the web application can connect to the SQL Server database when deployed on a local IIS server, in a Docker container (docker run), or as a Docker service (docker service create), but not when it is deployed in a Kubernetes cluster. I understand that the cluster is on a different network, so I tried to create a service without a selector as described in this question, but no luck. I even tried changing the IP address in the connection string to match the one of the created service, but that failed too.

My firewall is set up to accept inbound connections on port 1433.

My SQL Server database is configured to allow remote access.

Here is the connection string I use:

"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"

And here is the file I use to deploy my web application:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: <private_repo_url>/webapp:db
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 1433
      imagePullSecrets:
      - name: gitlab-auth
      volumes:
      - name: secrets
        secret:
          secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp  
  ports:
  - name: port-80
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: port-443
    port: 443
    targetPort: 443
    nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: sql-server
  labels:
    app: webapp
spec:
  ports:
    - name: port-1433
      port: 1433
      targetPort: 1433
---
apiVersion: v1  
kind: Endpoints  
metadata: 
  name: sql-server
  labels:
    app: webapp
subsets: 
  - addresses: 
    - ip: 172.24.144.1  # IP of my local computer where SQL Server is running
    ports: 
      - port: 1433

So I get a deployment named 'webapp' with a single pod, two services named 'webapp' and 'sql-server', and two Endpoints objects with the same names. Here are their details:

> kubectl describe svc webapp
Name:                     webapp
Namespace:                default
Labels:                   app=webapp
Annotations:              <none>
Selector:                 app=webapp
Type:                     NodePort
IP:                       10.108.225.112
Port:                     port-80  80/TCP
TargetPort:               80/TCP
NodePort:                 port-80  30080/TCP
Endpoints:                172.17.0.4:80
Port:                     port-443  443/TCP
TargetPort:               443/TCP
NodePort:                 port-443  30443/TCP
Endpoints:                172.17.0.4:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

> kubectl describe svc sql-server
Name:              sql-server
Namespace:         default
Labels:            app=webapp
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.107.142.32
Port:              port-1433  1433/TCP
TargetPort:        1433/TCP
Endpoints:
Session Affinity:  None
Events:            <none>

> kubectl describe endpoints webapp
Name:         webapp
Namespace:    default
Labels:       app=webapp
Annotations:  <none>
Subsets:
  Addresses:          172.17.0.4
  NotReadyAddresses:  <none>
  Ports:
    Name      Port  Protocol
    ----      ----  --------
    port-443  443   TCP
    port-80   80    TCP

Events:  <none>

> kubectl describe endpoints sql-server
Name:         sql-server
Namespace:    default
Labels:       app=webapp
Annotations:  <none>
Subsets:
  Addresses:          172.24.144.1
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  1433  TCP

Events:  <none>

I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:

SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)

I am new to Kubernetes and not very comfortable with networking, so any help is welcome. The most useful help would be advice or tools to debug this, since I don't even know where or when the connection attempt is blocked...

Thank you!

Asked Dec 27 '18 by Bliamoh



1 Answer

What you consider the IP address of your host is a private IP on an internal network. That address may well be the one your machine uses on the "real" network, but the Kubernetes virtual network is on a different subnet, so the IP you use internally is not reachable from inside the cluster.

subsets: 
  - addresses: 
    - ip: 172.24.144.1  # IP of my local computer where SQL Server is running
    ports: 
      - port: 1433
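One way to confirm this diagnosis is to test whether that IP is reachable at all from inside the cluster, for example from a throwaway pod (a sketch; the busybox image and the pod name `debug` are only illustrative, and any image shipping basic networking tools would do):

```shell
# Start a disposable pod and open a shell in it
kubectl run -it --rm debug --image=busybox --restart=Never -- sh

# From inside that pod: check that the selector-less service resolves
nslookup sql-server

# Test whether TCP port 1433 answers (busybox ships a minimal telnet);
# a connection that hangs or is refused means the host is not reachable
telnet sql-server 1433
telnet 172.24.144.1 1433
```

If the service name resolves but the connection times out, the problem is the routing to the host IP, not the Kubernetes service wiring.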

You can connect via the DNS entry host.docker.internal. Read more here and here for Windows.
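If that name resolves from inside the pod, the fix is a one-line change to the connection string. A sketch based on the string from the question (when an explicit TCP port is given, the instance name can usually be dropped):

```
"Server=host.docker.internal,1433;Database=TestWebapp;User Id=user_name;Password=********;"
```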

I am not certain whether that works in minikube; there used to be different DNS names for the host in the Linux and Windows implementations.

If you want to use the IP directly (bear in mind it can change over time), you can track it down and make sure it is the one actually visible from within the virtual subnet.
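A possible way to track that address down with minikube (a sketch; the exact output depends on the VM driver, and newer minikube versions can also resolve the name host.minikube.internal from inside pods):

```shell
# Open a shell inside the minikube node
minikube ssh

# The default gateway of the node is usually the host as seen
# from the cluster's side of the virtual network
ip route | grep default

# Check whether SQL Server answers on that address
# (replace <gateway-ip> with the address printed above)
nc -vz <gateway-ip> 1433
```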

PS: I am using the Kubernetes that ships with Docker Desktop now; it seems easier to work with.

Answered Sep 30 '22 by Stefan Georgiev