
Why do Kafka consumers output INVALID_FETCH_SESSION_EPOCH after updating to 1.1?

We recently updated our Kafka brokers and clients to 1.1.1. Since the upgrade we periodically see INFO log entries such as:

INFO Jun 08 08:30:20.335 61161458 [KafkaRecordConsumer-0] org.apache.kafka.clients.FetchSessionHandler [Consumer clientId=consumer-1, groupId=group_60_10] Node 3 was unable to process the fetch request with (sessionId=819759315, epoch=145991): INVALID_FETCH_SESSION_EPOCH.

I see that this message comes from the changes introduced in KIP-227: Introduce Incremental FetchRequests To Increase Partition Stability. However, I cannot find any detailed information about why this message would appear or what parameters might have to be tuned after its introduction.

So far it doesn't seem to have an impact on consumer behaviour with respect to receiving records (apart from introducing some additional latency), but I would like to understand:

  1. Why is the message being logged?
  2. What can be done to stop it being logged?
Mark asked Jun 08 '18


People also ask

Why is my Kafka consumer slow?

If there are far too many producers writing data to the same topic while there are only a limited number of consumers, then the reading processes will always be slow.

What happens when Kafka consumer goes down?

If the consumer crashes or is shut down, its partitions will be re-assigned to another member, which will begin consumption from the last committed offset of each partition. If the consumer crashes before any offset has been committed, then the consumer which takes over its partitions will use the reset policy.
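For illustration, here is a minimal Java consumer-configuration sketch showing where that reset policy is set; the bootstrap address, group id, and topic name are placeholders:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // placeholder group id
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// Applied only when the group has no committed offset for a partition,
// e.g. after a crash that happened before the first commit.
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // or "latest"
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(List.of("my-topic")); // placeholder topic

With "earliest" the takeover consumer re-reads the partition from the beginning; with "latest" it skips to the end and only sees new records.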

How can we increase Kafka consumer throughput?

Increasing the number of partitions and the number of brokers in a cluster will lead to increased parallelism of message consumption, which in turn improves the throughput of a Kafka cluster; however, the time required to replicate data across replica sets will also increase.
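As a sketch of the partition side of this, assuming the Java AdminClient with a placeholder topic name and partition count, partitions can be added to an existing topic like so (partition counts can only grow, and adding partitions changes the key-to-partition mapping for keyed messages):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class AddPartitionsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        try (AdminClient admin = AdminClient.create(props)) {
            // Grow "my-topic" to 12 partitions in total and wait for the call to complete.
            admin.createPartitions(Map.of("my-topic", NewPartitions.increaseTo(12))).all().get();
        }
    }
}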

How do you reduce consumer lag in Kafka?

Increasing consuming concurrency can improve performance. If you store offsets in ZooKeeper, it can become a bottleneck: reduce the frequency of offset commits and use a dedicated ZooKeeper ensemble if possible. The best solution is to store offsets on the brokers.
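As a rough sketch of the "reduce commits" advice (assuming a 2.x+ Java client, reusing a consumer and props like those in the earlier sketch, and with process() as a hypothetical handler), you can disable auto-commit and commit once per batch instead of continuously:

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit manually instead

long sinceLastCommit = 0;
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, String> record : records) {
        process(record); // hypothetical processing step
        sinceLastCommit++;
    }
    // Commit once per batch threshold rather than per record to cut commit traffic.
    if (sinceLastCommit >= 1_000) {
        consumer.commitAsync();
        sinceLastCommit = 0;
    }
}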


1 Answer

This was caused by a race condition in Kafka, tracked as KAFKA-8052.

It is fixed in the 2.3.0 release.
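Until you can upgrade, if the log noise itself is the main concern, one possible workaround (assuming your client application configures logging through a log4j.properties file) is to raise the log level of the class that emits the message, which is visible in the log line in the question:

log4j.logger.org.apache.kafka.clients.FetchSessionHandler=WARN

This only hides the INFO entry; the underlying fetch-session retry behaviour is unchanged.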

Mark answered Sep 19 '22