I am running a KOPS Kubernetes cluster on AWS, trying to make kubernetes-kafka example work with an Elastic Load Balancer. Here is the external services portion for two of the brokers:
kind: Service
apiVersion: v1
metadata:
  name: outside-0
  namespace: kafka
spec:
  selector:
    app: kafka
    kafka-broker-id: "0"
  ports:
  - protocol: TCP
    targetPort: 9094
    port: 32400
    nodePort: 32400
  type: NodePort
---
kind: Service
apiVersion: v1
metadata:
  name: outside-1
  namespace: kafka
spec:
  selector:
    app: kafka
    kafka-broker-id: "1"
  ports:
  - protocol: TCP
    targetPort: 9094
    port: 32401
    nodePort: 32401
  type: NodePort
Here is my attempt to expose those brokers via an ELB (actual FQDN replaced with my.company.com).
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  type: LoadBalancer
  ports:
  - port: 32400
    name: outside
    targetPort: 32400
  selector:
    app: outside-0
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
  annotations:
    dns.alpha.kubernetes.io/external: kafka-1.kafka.my.company.com
spec:
  type: LoadBalancer
  ports:
  - port: 32401
    name: outside
    targetPort: 32401
  selector:
    app: outside-1
Looking at the AWS ELB console shows 0 of 3 instances available for each of the Kafka ELBs, and producing to kafka-1.kafka.my.company.com:9094 with the Kafka command line client times out. How can the outside-0 NodePort service be exposed via the kafka-0 LoadBalancer service? Or are there other approaches to consider?
Kafka is very particular about clients needing direct access to the broker that is the leader for a topic partition. To achieve this, I did the following:
1) Set up the ConfigMap to dynamically override advertised.listeners based on the pod's ordinal value (a sketch of such a ConfigMap follows the startup command below):
POD_ID=${HOSTNAME##*-}
kafka-server-start.sh server.properties \
--override advertised.listeners=INSIDE://`hostname -f`:9092,OUTSIDE://kafka-${POD_ID}.kafka.my.company.com:9094 \
--override broker.id=${POD_ID} \
--override listeners=INSIDE://:9092,OUTSIDE://:9094
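For illustration, here is a minimal sketch of what such a ConfigMap could look like. The ConfigMap name broker-config, the key init.sh, and the server.properties path are assumptions, not taken from the kubernetes-kafka example, so adjust them to however your StatefulSet actually mounts and invokes the startup script:
# Hypothetical ConfigMap wrapping the startup override above; the name, key,
# and properties path are placeholders for illustration only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: broker-config
  namespace: kafka
data:
  init.sh: |
    #!/bin/bash
    # Derive the broker id from the StatefulSet pod ordinal (kafka-0 -> 0)
    POD_ID=${HOSTNAME##*-}
    exec kafka-server-start.sh /etc/kafka/server.properties \
      --override broker.id=${POD_ID} \
      --override listeners=INSIDE://:9092,OUTSIDE://:9094 \
      --override advertised.listeners=INSIDE://`hostname -f`:9092,OUTSIDE://kafka-${POD_ID}.kafka.my.company.com:9094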
2) Create a LoadBalancer service for each Kafka pod. Change the selector to match your kafka-pod-id.
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 9094
    name: outside
    targetPort: 9094
  selector:
    app: kafka
    kafka-pod-id: "0"
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
  annotations:
    dns.alpha.kubernetes.io/external: kafka-1.kafka.my.company.com
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 9094
    name: outside
    targetPort: 9094
  selector:
    app: kafka
    kafka-pod-id: "1"
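Once the DNS records from the dns.alpha.kubernetes.io/external annotations resolve to the ELBs, external access can be verified with the standard Kafka console clients; the topic name test below is just an example:
# Produce to and consume from the externally advertised listener
kafka-console-producer.sh --broker-list kafka-0.kafka.my.company.com:9094 --topic test
kafka-console-consumer.sh --bootstrap-server kafka-0.kafka.my.company.com:9094 --topic test --from-beginning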
These Kubernetes configurations look correct. However, if the AWS console says "0 of 3 instances available" that usually means that you're failing the ELB healthcheck. The ELB will drop any traffic if there's not a healthy backend instance available to send the data to, which would explain the calls to Kafka timing out.
An easy ELB health check for NodePort services is just the SSH port, to see whether the instance itself is alive, since kube-proxy on that instance is what actually forwards the traffic on to the right backend. If you're only running one listener on the ELB, then you could check that port directly in your health check. (I often run a bunch of NodePort listeners per ELB, instead of one ELB per NodePort service, to save money.)
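If you want to confirm that the health check is what is failing, you can inspect the registered instances directly; the load balancer name below is a placeholder, since the cloud provider generates the actual ELB names:
# List the classic ELBs the cloud provider created, then check backend instance health
aws elb describe-load-balancers --query 'LoadBalancerDescriptions[].LoadBalancerName'
aws elb describe-instance-health --load-balancer-name <elb-name-for-kafka-0>
# With externalTrafficPolicy: Local, the service also gets a dedicated health-check node port
kubectl -n kafka get svc kafka-0 -o jsonpath='{.spec.healthCheckNodePort}'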
According to the documentation (Kubernetes Service Types):
LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
You should not be defining separate services for the NodePort and LoadBalancer types, but only a LoadBalancer service with nodePort specified (please test and try adding/removing some options, since I don't have an environment I could test this on):
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
spec:
  type: LoadBalancer
  ports:
  - port: 32400
    name: kafka
    nodePort: 32400
  selector:
    app: kafka
    kafka-broker-id: "0"
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
spec:
  type: LoadBalancer
  ports:
  - port: 32401
    name: kafka
    nodePort: 32401
  selector:
    app: kafka
    kafka-broker-id: "1"
kubelet, kube-apiserver, and kube-controller-manager/cloud-controller-manager have cloud configuration options.
The target port is not set correctly: you have to use the container port, and a NodePort will be automatically assigned. The configuration looks like this:
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  type: LoadBalancer
  ports:
  - port: 9094
    name: outside
    targetPort: 9094
  selector:
    app: outside-0
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
  annotations:
    dns.alpha.kubernetes.io/external: kafka-1.kafka.my.company.com
spec:
  type: LoadBalancer
  ports:
  - port: 9094
    name: outside
    targetPort: 9094
  selector:
    app: outside-1
The external port can be anything you want; for example, you can use 9094, the same as the container's port, and it can be the same port for all services, because each service gets its own ELB. Just make sure the selectors are set correctly and this should work fine.
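As a quick sanity check after applying the manifests, you can confirm that a NodePort was auto-assigned and that the ELB hostname was provisioned (assuming the services are created in the kafka namespace, as in the question):
kubectl -n kafka get svc kafka-0 -o wide
kubectl -n kafka get svc kafka-0 -o jsonpath='{.spec.ports[0].nodePort} {.status.loadBalancer.ingress[0].hostname}'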