I often get Timeout exceptions due to various reasons in my Kafka producer. I am using all the default values for producer config currently.
I have seen the following Timeout exceptions:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for topic-1-0: 30001 ms has passed since last append
I have the following questions:
What are the general causes of these Timeout exceptions?
What are the general guidelines for handling Timeout exceptions?
Are Timeout exceptions retriable exceptions and is it safe to retry them?
I am using Kafka v2.1.0 and Java 11.
Thanks in advance.
Resolution: the default timeout is 1 minute. To change it, open the Kafka Client Configuration > Producer tab > Advanced Properties, add max.block.ms, and set it to the desired value (in milliseconds).
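The same setting can also be applied programmatically through producer properties. A minimal sketch (the class name and broker address are placeholders, not from the original post):

```java
import java.util.Properties;

public class ProducerTimeoutConfig {
    // Builds producer properties with a lowered max.block.ms.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address
        // max.block.ms caps how long send() and partitionsFor() may block
        // while waiting for metadata or buffer space (default: 60000 ms).
        props.put("max.block.ms", "10000");
        return props;
    }
}
```

These properties would then be passed to `new KafkaProducer<>(props)`.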
You can deal with failed transient sends in several ways: drop failed messages; exert backpressure further up the application and retry sends; or send all messages to alternative local storage, from which they will be ingested into Kafka asynchronously.
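The "retry sends" option can be sketched as a small retry-with-backoff helper. This is an illustrative, hypothetical utility (not part of the Kafka client API); in real code you would retry only on retriable exceptions such as TimeoutException:

```java
import java.util.concurrent.Callable;

public class RetryingSender {
    // Retries a transiently failing operation a bounded number of times,
    // doubling the wait between attempts, and rethrows the last failure.
    public static <T> T withRetries(Callable<T> send, int maxAttempts, long initialBackoffMs)
            throws Exception {
        long backoff = initialBackoffMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return send.call();
            } catch (Exception e) { // real code: catch retriable exceptions only
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoff);
                    backoff *= 2;
                }
            }
        }
        throw last;
    }
}
```

Bounding the attempts is what turns retries into backpressure: once the budget is exhausted, the failure propagates to the caller instead of queueing indefinitely.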
A Kafka Producer has a pool of buffers that holds to-be-sent records. The producer has background I/O threads for turning records into request bytes and transmitting requests to the Kafka cluster. The producer must be closed so that it does not leak resources (connections, thread pools, buffers).
"What are the general causes of these Timeout exceptions?"
The most common cause I have seen is stale metadata information: one broker went down, and the topic partitions on that broker failed over to other brokers. However, the topic metadata was not updated properly, and the client still tries to talk to the failed broker, either to get metadata or to publish the message. That causes the timeout exception.
Network connectivity issues. This can be easily diagnosed with telnet broker_host broker_port
The broker is overloaded. This can happen if the broker is saturated with high workload, or hosts too many topic partitions.
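The telnet/nc reachability check above can also be done from Java, which is handy inside the producer application itself. A minimal sketch (class name is hypothetical):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerReachability {
    // Equivalent of `telnet broker_host broker_port` or `nc -z`:
    // attempt a TCP connection to the broker with a short timeout.
    public static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false; // refused, unreachable, or timed out
        }
    }
}
```

Note this only verifies TCP reachability; it does not prove the broker is healthy or that the advertised listeners resolve correctly from this host.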
To handle the timeout exceptions, the general practice is:
Rule out broker-side issues: make sure that the topic partitions are fully replicated and the brokers are not overloaded
Fix host name resolution or network connectivity issues if there are any
Tune parameters such as request.timeout.ms, delivery.timeout.ms, etc. My past experience is that the default values work fine in most cases.
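If you do tune these parameters, the values must stay consistent with each other: Kafka requires delivery.timeout.ms to be at least linger.ms + request.timeout.ms. A sketch with the documented defaults (class name and broker address are placeholders):

```java
import java.util.Properties;

public class TimeoutTuning {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address
        // Per-request timeout for broker responses (default: 30000 ms).
        props.put("request.timeout.ms", "30000");
        // How long to wait for batching before sending (default: 0 ms).
        props.put("linger.ms", "0");
        // Upper bound on total time between send() returning and the delivery
        // report (default: 120000 ms). Must be >= linger.ms + request.timeout.ms.
        props.put("delivery.timeout.ms", "120000");
        return props;
    }
}
```

Raising request.timeout.ms without also checking delivery.timeout.ms is a common way to trip this constraint at producer startup.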
The default Kafka config values, both for producers and brokers, are conservative enough that, under general circumstances, you shouldn't run into any timeouts. Those problems typically point to a flaky/lossy network between the producer and the brokers.
The exception you're getting, Failed to update metadata, usually means one of the brokers is not reachable by the producer, and the effect is that it cannot get the metadata.
For your second question, Kafka will automatically retry sending messages that were not fully ack'ed by the brokers. It's up to you whether to catch and retry when you get a timeout on the application side, but if you're hitting 1+ minute timeouts, retrying is probably not going to make much of a difference. You're going to have to figure out the underlying network/reachability problems with the brokers anyway.
In my experience, the network problems can usually be diagnosed by checking basic reachability to the brokers (e.g., nc -z broker-ip 9092 from the server running the producer).