
Kafka consumer offsets out of range with no configured reset policy for partitions

I'm receiving an exception when starting my Kafka consumer.

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions{test-0=29898318}

I'm using Kafka version 0.9.0.0 with Java 7.

asked May 19 '16 by basit raza

People also ask

Does Kafka reset offset?

You can choose either to reset the position to the “earliest” offset or the “latest” offset (the default).

Are Kafka offsets per partition?

Kafka maintains a numerical offset for each record in a partition. This offset acts as a unique identifier of a record within that partition, and also denotes the position of the consumer in the partition.
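As a minimal sketch (the broker address and group id are assumptions; the topic name test is taken from the question), each consumed record exposes its partition and its per-partition offset:

  import java.util.Arrays;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;

  public class OffsetInspector {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
          props.put("group.id", "offset-inspector");        // hypothetical group id
          props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
          props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

          KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
          consumer.subscribe(Arrays.asList("test")); // topic from the question
          ConsumerRecords<String, String> records = consumer.poll(1000);
          for (ConsumerRecord<String, String> record : records) {
              // Each record carries the partition it came from and its offset
              // within that partition; the offset is unique per partition only.
              System.out.println("partition=" + record.partition() + " offset=" + record.offset());
          }
          consumer.close();
      }
  }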

How does a consumer commit offsets in Kafka?

The Kafka consumer commits the offset periodically when polling batches, as described above. This strategy works well if the message processing is synchronous and failures are handled gracefully. Be aware that starting with Quarkus 1.9, auto commit is disabled by default, so you need to enable it explicitly.
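A hedged sketch of the manual alternative (broker address and group id are assumptions): disable auto commit and call commitSync() only once a polled batch has been fully processed:

  import java.util.Arrays;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;

  public class ManualCommitConsumer {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
          props.put("group.id", "manual-commit-demo");      // hypothetical group id
          props.put("enable.auto.commit", "false");         // take over commit responsibility
          props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
          props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

          KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
          consumer.subscribe(Arrays.asList("test"));
          while (true) {
              ConsumerRecords<String, String> records = consumer.poll(1000);
              for (ConsumerRecord<String, String> record : records) {
                  System.out.println("processing offset " + record.offset());
              }
              // Commit only after the whole batch was processed successfully;
              // on a crash before this line, the batch is re-delivered.
              consumer.commitSync();
          }
      }
  }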

What happens when Kafka consumer throws exception?

When an exception is thrown while consuming message number 2, messages 3 to 9 are skipped, and the next message to be processed is 10 (the first message in the next poll loop).
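Continuing the manual-commit sketch above, a hedged fragment showing why the rest of the batch is skipped with a plain consumer (the process() handler is hypothetical):

  ConsumerRecords<String, String> records = consumer.poll(1000); // returns messages 2..9
  try {
      for (ConsumerRecord<String, String> record : records) {
          process(record); // hypothetical handler; suppose it throws on message 2
      }
  } catch (RuntimeException e) {
      // poll() has already advanced the consumer's position past the whole
      // batch, so unless you seek() back to the failed offset, messages 3 to 9
      // are never re-read and the next poll() starts at message 10.
  }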


2 Answers

You are trying to access offset 29898318 in partition 0 of the topic test, which is not available right now.

There could be two reasons for this:

  1. Partition 0 of your topic may not contain that many messages
  2. Your message at offset 29898318 might have already been deleted by the retention policy

To avoid this you can do one of the following:

  1. Set the auto.offset.reset config to either earliest or latest. You can find more info on this in the Kafka consumer configuration documentation (a config sketch follows the command below)
  2. You can get the smallest offset available for a topic partition by running the following Kafka command-line tool

command:

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
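For option 1, a minimal fragment of the relevant consumer properties (broker address and group id are assumptions):

  Properties props = new Properties();
  props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
  props.put("group.id", "my-consumer-group");       // hypothetical group id
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  // When the requested offset no longer exists, jump to the oldest available
  // offset ("earliest") or only consume new messages ("latest") instead of failing.
  props.put("auto.offset.reset", "earliest");
  KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);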

Hope this helps!

answered Sep 19 '22 by avr

I hit this SO question while running a Kafka Streams application whose state store had a specific changelog topic config:

  • cleanup.policy=compact,delete
  • retention of 4 days

If Kafka Streams still has a snapshot file pointing to an offset that no longer exists, the restore consumer is configured to fail; it does not fall back to the earliest offset. This scenario can happen when very little data comes in, or when the application is down. In both cases, when there's no commit within the changelog retention period, the snapshot file won't be updated. (This happens on a per-partition basis.)

The easiest way to resolve this issue is to stop your Kafka Streams application, remove its local state directory, and restart the application.
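If you prefer not to delete the directory by hand, a hedged sketch using KafkaStreams#cleanUp(), which wipes this instance's local state directory so the stores are rebuilt from their changelog topics on the next start (the application id, broker address, and topology are assumptions):

  import java.util.Properties;
  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.streams.KafkaStreams;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.StreamsConfig;

  public class ResetLocalState {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // assumption
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
          props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
          props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

          StreamsBuilder builder = new StreamsBuilder();
          builder.stream("input-topic").to("output-topic"); // hypothetical topology

          KafkaStreams streams = new KafkaStreams(builder.build(), props);
          // cleanUp() deletes this instance's local state directory; it may only
          // be called while the instance is not running. The stores are then
          // restored from their changelog topics on start().
          streams.cleanUp();
          streams.start();
      }
  }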

answered Sep 19 '22 by Tim Van Laer