EventStore on Kubernetes: Connection refused

I'm developing an open-source cloud event gateway in .NET 5.0, backed by EventStore, and am having trouble connecting the ProjectionsManager service.

I deployed an EventStore service in its own namespace, and can successfully connect to it, and subscribe to streams. However, when I try to connect the ProjectionsManager, I get the following exception:

Connection refused (eventstore.eventstore.svc.cluster.local:2113)

The fully qualified name of the service, 'eventstore.eventstore.svc.cluster.local', is correct and is used successfully by the IEventStoreConnection. The port, 2113, is also correct: I can reach the Admin UI by port-forwarding to the pod on that port with kubectl.

What's going on? In all my local and docker-compose-based tests, everything works as expected; only in Kubernetes do I face this problem.

Here's the content of my EventStore yaml file:

apiVersion: v1
kind: Namespace
metadata:
  name: eventstore
  labels:
    name: eventstore

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eventstore
  namespace: eventstore
  labels:
    app: eventstore
spec:
  serviceName: eventstore
  replicas: 1
  selector:
    matchLabels:
      app: eventstore
  template:
    metadata:
      labels:
        app: eventstore
    spec:
      containers:
        - name: eventstore
          image: eventstore/eventstore:release-5.0.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 1112
              name: tcp-int
            - containerPort: 1113
              name: tcp-ext
            - containerPort: 2112
              name: http-int  
            - containerPort: 2113
              name: http-ext  
          volumeMounts:
            - name: data
              mountPath: /var/lib/eventstore
          env:
            - name: EVENTSTORE_EXT_HTTP_PORT
              value: "2113"
            - name: EVENTSTORE_EXT_TCP_PORT
              value: "1113"
            - name: EVENTSTORE_INT_HTTP_PREFIXES
              value: http://*:2112/
            - name: EVENTSTORE_EXT_HTTP_PREFIXES
              value: http://*:2113/
            - name: EVENTSTORE_RUN_PROJECTIONS
              value: All
            - name: EVENTSTORE_START_STANDARD_PROJECTIONS
              value: "true"
            - name: EVENTSTORE_EXT_IP
              value: "0.0.0.0"
            - name: EVENTSTORE_ENABLE_ATOM_PUB_OVER_HTTP
              value: "true"
            - name: EVENTSTORE_ENABLE_EXTERNAL_TCP
              value: "true"
      volumes:
        - name: data
          emptyDir: {}

---

apiVersion: v1
kind: Service
metadata:
  name: eventstore
  namespace: eventstore
  labels:
    app: eventstore
spec:
  ports:
    - port: 1112
      name: tcp-int
    - port: 1113
      name: tcp-ext
    - port: 2112
      name: http-int  
    - port: 2113
      name: http-ext  
  selector:
    app: eventstore

Here is the C# snippet used to instantiate the ProjectionsManager:

new ProjectionsManager(
    new ConsoleLogger(),
    new DnsEndPoint("eventstore.eventstore.svc.cluster.local", 2113),
    TimeSpan.FromMilliseconds(3000),
    httpSchema: "http");

By the way, the service that is trying to connect via the ProjectionsManager has an Istio sidecar injected, if that matters at all.

Thanks in advance for your precious help ;)

EDIT

It seems that Istio sidecar injection is the cause of the issue: disabling it makes everything work as expected. Any idea why this is happening, and how to solve it with injection enabled?
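(For reference, I disabled the sidecar for the client with the standard per-Pod Istio annotation on the Deployment's Pod template; this is a sketch, the rest of the Deployment is omitted:)

```yaml
template:
  metadata:
    annotations:
      sidecar.istio.io/inject: "false"
```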

Charles d'Avernas asked Nov 25 '20

1 Answer

We encountered the same issue running EventStoreDB on a Kubernetes cluster with Istio sidecar injection enabled.

According to Istio's documentation on protocol selection, Istio looks at the name of each port defined on your Service to decide which protocol to intercept on that port. If the port name does not follow the expected protocol-prefix format, Istio falls back to automatic protocol detection (which works for HTTP, HTTPS, and gRPC).

In your case, the ports' names start with http- (http-int and http-ext). Istio therefore does not attempt to detect the protocol; it assumes the traffic is http (HTTP/1.1).

However, EventStoreDB's API is a gRPC endpoint. Therefore, you have two options:

  • Rename the port so it starts with grpc-. Any Istio proxy will then know that this port is exposing gRPC.
  • Name the port something without a protocol prefix (api or eventstoredb, for example), to let Istio detect the protocol automatically.
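As a sketch of the second option, the Service from the question's manifest could rename its HTTP ports so Istio falls back to protocol detection (the new names here are illustrative assumptions):

```yaml
spec:
  ports:
    - port: 1112
      name: tcp-int
    - port: 1113
      name: tcp-ext
    - port: 2112
      name: eventstore-int   # no protocol prefix: Istio auto-detects the protocol
    - port: 2113
      name: eventstore-ext   # no protocol prefix: Istio auto-detects the protocol
```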

Note that EventStoreDB exposes an admin web interface on the same port, which is plain HTTP. If you access it through port-forwarding, no Istio sidecar sits in the way, so the port's name does not influence the traffic. But if you expose the admin interface through an Istio Ingress Gateway (which I wouldn't recommend, since you would be exposing your database to the Internet), you might have trouble reaching it. In that case, the second solution, letting Istio detect the traffic, is probably the more flexible one.

A last option would be to expose two ports on the Service, one named for HTTP and the other for gRPC, and have both redirect to the same port on the Pod. Kubernetes does allow this: multiple Service ports may share the same targetPort, as long as each entry has a distinct name and port number.
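A sketch of that dual-port layout (the port names and the extra 2114 port number are illustrative assumptions; both entries target the same containerPort, 2113):

```yaml
spec:
  ports:
    - port: 2113
      name: grpc-api       # assumed name; Istio treats this port as gRPC
      targetPort: 2113
    - port: 2114
      name: http-admin     # assumed name; Istio treats this port as HTTP
      targetPort: 2113
```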

Thibault Henry answered Sep 22 '22