My Spring Boot application (a consumer) processes messages from Apache Kafka. Periodically, a message can't be processed and the consumer throws an exception, but the offset is committed anyway.
Can I distinguish successful messages from failed messages in Kafka? I think I can't. Is that true? If so, here is my main question:
How can I retry failed messages? I know a few approaches, but I'm not sure they are correct.
1) Seek the offset back to an earlier position. But this way, successful messages will be retried too.
2) When I catch an exception, send the message to another topic (an error-topic, for example). But this looks complicated.
3) Something else (your suggestion).
By throwing a retryable exception rather than retrying inside the Kafka client library, the message is not marked as consumed and is re-delivered on the next poll. Provided the poll does not time out, the message is not duplicated.
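A minimal sketch of this behaviour with Spring for Apache Kafka, assuming Spring Boot's auto-configured listener container factory picks up the error-handler bean (Spring Kafka 2.8+); the topic, group id, and back-off values are illustrative:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.stereotype.Component;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
class RetryConfig {

    // Re-deliver a failed record 3 times, 1 second apart, before giving up.
    // Spring Boot wires this handler into the auto-configured listener container factory.
    @Bean
    DefaultErrorHandler errorHandler() {
        return new DefaultErrorHandler(new FixedBackOff(1000L, 3));
    }
}

@Component
class OrderListener {

    @KafkaListener(topics = "orders", groupId = "order-consumers")
    public void onMessage(ConsumerRecord<String, String> record) {
        // Throwing here prevents the offset from being committed; the error handler
        // seeks back and the record is re-delivered on the next poll.
        process(record.value());
    }

    private void process(String payload) {
        // business logic; may throw a RuntimeException on failure
    }
}
```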
The source code for a Dead Letter Queue implementation contains a try-catch block to handle expected and unexpected exceptions. If no error occurs, the message is processed normally. If any exception occurs, the message is sent to a dedicated DLQ Kafka topic, and the failure cause is added to the headers of the Kafka message.
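A minimal sketch of such a try-catch, assuming a KafkaTemplate and an illustrative DLQ topic name (orders.DLQ) and header name (x-failure-cause); Spring Kafka's DeadLetterPublishingRecoverer provides a ready-made version of the same idea:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
class DlqPublishingHandler {

    private final KafkaTemplate<String, String> kafkaTemplate;

    DlqPublishingHandler(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    void handle(ConsumerRecord<String, String> record) {
        try {
            process(record.value());                          // normal processing path
        } catch (Exception ex) {
            // Copy the failed record to the DLQ topic and record why it failed.
            ProducerRecord<String, String> dlqRecord =
                    new ProducerRecord<>("orders.DLQ", record.key(), record.value());
            dlqRecord.headers().add("x-failure-cause",
                    String.valueOf(ex.getMessage()).getBytes(StandardCharsets.UTF_8));
            kafkaTemplate.send(dlqRecord);
        }
    }

    private void process(String payload) {
        // business logic; may throw on a poison message
    }
}
```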
You can deal with failed transient sends in several ways:
1) Drop the failed messages.
2) Exert backpressure further up the application and retry the sends.
3) Send all messages to alternative local storage, from which they will be ingested into Kafka asynchronously (sketched below).
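A rough sketch of the third option, assuming a plain Java producer and a hypothetical local file (failed-sends.log) as the fallback store:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

class BufferingSender {

    private final Producer<String, String> producer;
    private final Path fallbackLog = Path.of("failed-sends.log"); // local buffer, illustrative

    BufferingSender(Producer<String, String> producer) {
        this.producer = producer;
    }

    void send(String topic, String key, String value) {
        producer.send(new ProducerRecord<>(topic, key, value), (metadata, exception) -> {
            if (exception == null) {
                return;                                   // send succeeded
            }
            try {
                // Park the failed message locally; a separate job can
                // re-ingest the file into Kafka later.
                Files.writeString(fallbackLog, key + "\t" + value + System.lineSeparator(),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            } catch (Exception io) {
                // Last resort: drop the message, or block the caller (backpressure).
            }
        });
    }
}
```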
The retries setting determines how many times the producer will attempt to send a message before marking it as failed. The default is 0 for Kafka <= 2.0 and MAX_INT (2147483647) for Kafka >= 2.1.
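For illustration, here is how the setting might be raised explicitly on a plain Java producer; the broker address, serializers, and values are assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

class ProducerFactoryExample {

    static Producer<String, String> createProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // How many times a failed send is re-attempted before it is reported as failed.
        props.put(ProducerConfig.RETRIES_CONFIG, 5);
        // In Kafka >= 2.1 the overall retry window is bounded by delivery.timeout.ms.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000);

        return new KafkaProducer<>(props);
    }
}
```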
If you want an at-least-once guarantee, a general pattern is as follows: disable auto-commit (set enable.auto.commit to false), and for each message, repeat processing until it succeeds, then commit the offset.
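A minimal sketch of that pattern with the plain Java consumer (topic, group id, and broker address are assumptions); note that retrying forever inside the poll loop can exceed max.poll.interval.ms, so a real application would bound the retries or escalate to a DLQ:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AtLeastOnceConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-consumers");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit only after success

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    boolean done = false;
                    while (!done) {                     // repeat until the message is processed
                        try {
                            process(record.value());
                            done = true;
                        } catch (Exception ex) {
                            // log and retry; a real application would back off
                            // or give up after N attempts
                        }
                    }
                }
                consumer.commitSync();                  // mark the whole batch as consumed
            }
        }
    }

    private static void process(String payload) {
        // business logic; may throw on failure
    }
}
```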