
Kafka-connect, Bootstrap broker disconnected


I'm trying to set up Kafka Connect with the intent of running an ElasticsearchSinkConnector.
The Kafka setup consists of 3 brokers secured using Kerberos, SSL and ACLs.

So far I've been experimenting with running the Connect framework and the Elasticsearch server locally using Docker/docker-compose (Confluent Docker image 5.4 with Kafka 2.4), connecting to the remote Kafka installation (Kafka 2.0.1 - actually our production environment).

      KAFKA_OPTS: -Djava.security.krb5.conf=/etc/kafka-connect/secrets/krb5.conf
      CONNECT_BOOTSTRAP_SERVERS: srv-kafka-1.XXX.com:9093,srv-kafka-2.XXX.com:9093,srv-kafka-3.XXX.com:9093
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: user-grp
      CONNECT_CONFIG_STORAGE_TOPIC: test.internal.connect.configs
      CONNECT_OFFSET_STORAGE_TOPIC: test.internal.connect.offsets
      CONNECT_STATUS_STORAGE_TOPIC: test.internal.connect.status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_ZOOKEEPER_CONNECT: srv-kafka-1.XXX.com:2181,srv-kafka-2.XXX.com:2181,srv-kafka-3.XXX.com:2181
      CONNECT_SECURITY_PROTOCOL: SASL_SSL
      CONNECT_SASL_KERBEROS_SERVICE_NAME: "kafka"
      CONNECT_SASL_JAAS_CONFIG: com.sun.security.auth.module.Krb5LoginModule required \
                                useKeyTab=true \
                                storeKey=true \
                                keyTab="/etc/kafka-connect/secrets/kafka-connect.keytab" \
                                principal="<principal>";
      CONNECT_SASL_MECHANISM: GSSAPI
      CONNECT_SSL_TRUSTSTORE_LOCATION: <path_to_truststore.jks>
      CONNECT_SSL_TRUSTSTORE_PASSWORD: <PWD>


When starting the Connect framework everything seems to work fine; I can see logs claiming that the Kerberos authentication is successful, etc.

The problem comes when I try to start a connect job using curl.

curl -X POST -H "Content-Type: application/json" --data '{
    "name": "kafka-connect",
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "tasks.max": 1,
      "topics": "test.output.outage",
      "key.ignore": true,
      "connection.url": "http://elasticsearch1:9200",
      "type.name": "kafka-connect"
    }
  }' http://localhost:8083/connectors
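
Once the job is created, the same REST API can be queried for its state (using the connector name from the POST above):

curl http://localhost:8083/connectors/kafka-connect/status

This returns the connector state plus the state of each task, which is also where the stack trace of a failed task shows up.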

The job seems to start up without issues, but as soon as it is about to start consuming from the Kafka topic I get:

kafka-connect     | [2020-04-06 10:35:33,482] WARN [Consumer clientId=connector-consumer-user-grp-2-0, groupId=connect-user-2] Bootstrap broker srv-kafka-1.XXX.com:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)

repeated in the Connect log for all brokers.

What is the nature of this issue? Communication with the brokers seems to work well - the connect job is written back to Kafka as intended, and when the Connect framework is restarted the job seems to resume as intended (even though still faulty).

Does anyone have an idea what might be causing this, or how I should go about debugging it?

Since it is our production environment, I have only limited scope to change the server configuration. But from what I can tell, nothing in the server logs indicates that something is wrong.

Thanks in advance

Jiinxy asked Apr 06 '20


People also ask

What happens if Kafka broker is down?

During a broker outage, all partition replicas on the broker become unavailable, so the affected partitions' availability is determined by the existence and status of their other replicas. If a partition has no additional replicas, the partition becomes unavailable.
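
For example, replica placement and in-sync status can be inspected with the stock topic tool. A minimal sketch, assuming a Kafka 2.3+ CLI (older versions take --zookeeper instead of --bootstrap-server) and a hypothetical client.properties holding the SASL/SSL settings needed for a secured cluster like the one above:

kafka-topics.sh --bootstrap-server srv-kafka-1.XXX.com:9093 \
  --command-config client.properties \
  --describe --topic test.output.outage

Each partition line lists the leader, the replica set and the ISR (in-sync replicas); a broker that is down drops out of the ISR, and a partition whose only replica lived on that broker has no leader until it returns.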

Does Kafka client connect to all brokers?

The client will likely need to maintain a connection to multiple brokers, as data is partitioned and the clients will need to talk to the server that has their data. However it should not generally be necessary to maintain multiple connections to a single broker from a single client instance (i.e. connection pooling).
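
As a minimal illustration (hypothetical consumer properties), listing a subset of the brokers is enough, because the bootstrap list is only used for the initial metadata fetch:

# client.properties - the bootstrap list need not contain every broker
bootstrap.servers=srv-kafka-1.XXX.com:9093,srv-kafka-2.XXX.com:9093
group.id=example-group

After bootstrapping, the client connects directly to whichever brokers currently lead the partitions it reads or writes, so it may well end up talking to all three anyway.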

How do I know if Kafka broker is running?

One of the quickest ways to find out if there are active brokers is by using Zookeeper's dump command. The dump command is one of the 4LW commands available to administer a Zookeeper server. On executing the command, we see a list of ephemeral broker ids registered with the Zookeeper server.
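
A sketch of that check (assuming ZooKeeper listens on the default port 2181; on ZooKeeper 3.5+ the dump command must first be enabled via 4lw.commands.whitelist):

echo dump | nc srv-kafka-1.XXX.com 2181

The output lists the ephemeral nodes under /brokers/ids, one per live broker, so an empty list means no broker is currently registered.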



1 Answer

Per the docs, you also need to configure security on the consumer/producer used by the connector(s) that Kafka Connect runs. You do this by adding a consumer./producer. prefix to the worker configuration. Since you're using Docker, and the error suggests that you were creating a sink connector (i.e. one requiring a consumer), add to your config:

  CONNECT_CONSUMER_SECURITY_PROTOCOL: SASL_SSL
  CONNECT_CONSUMER_SASL_KERBEROS_SERVICE_NAME: "kafka"
  CONNECT_CONSUMER_SASL_JAAS_CONFIG: com.sun.security.auth.module.Krb5LoginModule required \
                            useKeyTab=true \
                            storeKey=true \
                            keyTab="/etc/kafka-connect/secrets/kafka-connect.keytab" \
principal="<principal>";
  CONNECT_CONSUMER_SASL_MECHANISM: GSSAPI
  CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: <path_to_truststore.jks>
  CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: <PWD>

If you're also creating a source connector, you'll need to replicate the above but with the PRODUCER_ prefix, as sketched below.
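
In the same Docker environment-variable style, that would be a mirror of the consumer block above (same placeholder keytab, principal and truststore):

  CONNECT_PRODUCER_SECURITY_PROTOCOL: SASL_SSL
  CONNECT_PRODUCER_SASL_KERBEROS_SERVICE_NAME: "kafka"
  CONNECT_PRODUCER_SASL_JAAS_CONFIG: com.sun.security.auth.module.Krb5LoginModule required \
                            useKeyTab=true \
                            storeKey=true \
                            keyTab="/etc/kafka-connect/secrets/kafka-connect.keytab" \
                            principal="<principal>";
  CONNECT_PRODUCER_SASL_MECHANISM: GSSAPI
  CONNECT_PRODUCER_SSL_TRUSTSTORE_LOCATION: <path_to_truststore.jks>
  CONNECT_PRODUCER_SSL_TRUSTSTORE_PASSWORD: <PWD>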

Robin Moffatt answered Oct 24 '22