I'm receiving an exception when starting a Kafka consumer.
org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions{test-0=29898318}
I'm using Kafka version 9.0.0 with Java 7.
You can choose either to reset the position to the “earliest” offset or the “latest” offset (the default).
Kafka maintains a numerical offset for each record in a partition. This offset acts as a unique identifier of a record within that partition, and also denotes the position of the consumer in the partition.
The Kafka consumer commits the offset periodically when polling batches, as described above. This strategy works well if message processing is synchronous and failures are handled gracefully. Be aware that, starting with Quarkus 1.9, auto commit is disabled by default, so you need to enable it explicitly.
If an exception is thrown while consuming message number 2, messages 3 to 9 are skipped, and the next message to be processed is number 10 (the first message of the next poll loop).
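The skipping behaviour described above can be sketched without a real broker. The offsets and the failing message below are made up to mirror the scenario in the paragraph above:

```java
import java.util.Arrays;
import java.util.List;

public class BatchCommitSkipDemo {
    public static void main(String[] args) {
        // One simulated poll() batch containing offsets 1..9.
        List<Integer> batch = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9);
        try {
            for (int offset : batch) {
                if (offset == 2) {
                    // Processing fails on message number 2.
                    throw new RuntimeException("processing failed at offset " + offset);
                }
                System.out.println("processed offset " + offset);
            }
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
        // With auto-commit, the whole batch is acknowledged despite the
        // failure, so the next poll starts at offset 10 and 3..9 are skipped.
        System.out.println("next offset to process: 10");
    }
}
```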
So you are trying to access offset 29898318 in topic test, partition 0, which is not available right now.
There could be two cases for this:

- Partition 0 may not have that many messages, or
- the message at offset 29898318 might already have been deleted by the retention period.

To avoid this, set the auto.offset.reset config to either earliest or latest.
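A minimal sketch of setting this in the consumer configuration, using plain Java Properties; the broker address, group id, and deserializer classes below are placeholders:

```java
import java.util.Properties;

public class ConsumerResetConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group id
        // Fall back to the earliest available offset when the requested
        // offset is out of range (use "latest" to jump to the end instead).
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        System.out.println("auto.offset.reset=" + props.getProperty("auto.offset.reset"));
        // new KafkaConsumer<>(props) would then reset the position instead of
        // throwing OffsetOutOfRangeException.
    }
}
```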
You can check the smallest offset available for a topic partition by running the following Kafka command-line tool:
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
Hope this helps!
I hit this SO question when running a Kafka Streams state store with a specific changelog topic config:
cleanup.policy=compact,delete
If Kafka Streams still has a snapshot file pointing to an offset that no longer exists, the restore consumer is configured to fail; it doesn't fall back to the earliest offset. This scenario can happen when very little data comes in, or when the application is down: in both cases, if there is no commit within the changelog retention period, the snapshot file won't be updated. (This happens on a per-partition basis.)
The easiest way to resolve this issue is to stop your Kafka Streams application, remove its local state directory, and restart the application.
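As a sketch, the local state directory can be removed with a few lines of Java; the application id "my-streams-app" below is hypothetical, and the path assumes the default state.dir under the JVM temp directory. (With the instance stopped, KafkaStreams#cleanUp() achieves the same thing programmatically.)

```java
import java.io.File;

public class ClearStreamsState {
    // Recursively delete a directory tree; no-op if it doesn't exist.
    static void deleteRecursively(File f) {
        File[] children = f.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        f.delete();
    }

    public static void main(String[] args) {
        // Default state.dir is <java.io.tmpdir>/kafka-streams; the
        // application id "my-streams-app" is a placeholder.
        File stateDir = new File(System.getProperty("java.io.tmpdir"),
                                 "kafka-streams/my-streams-app");
        deleteRecursively(stateDir);
        System.out.println("cleared " + stateDir.getName());
    }
}
```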