 

How to slow down or set given speed on the Kafka stream consumer?

I am trying to control the number of messages consumed by the KStream, and I am not having much success.

I am using max.poll.interval.ms=100 and max.poll.records=20, expecting to get roughly 200 messages per second.

But that does not seem to work: my statistics show around 500 messages per second.

What else should I set on the stream consumer side?
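For reference, a minimal sketch of how these two settings might be applied to a Kafka Streams configuration (the application id, bootstrap servers, and class name are placeholders, not values from the question):

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class ThrottleAttemptConfig {
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-kstream-app");     // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        // The two settings from the question, forwarded to the internal consumer.
        // As the answers below explain, they do NOT cap the message rate.
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG), 100);
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 20);
        return props;
    }
}
```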

Asked Jun 29 '17 by Seweryn Habdank-Wojewódzki

2 Answers

I am using max.poll.interval.ms=100 and max.poll.records=20, expecting to get roughly 200 messages per second.

The max.poll.interval.ms and max.poll.records properties do not work that way.

max.poll.interval.ms is the maximum time, in milliseconds, that may elapse between two consecutive consumer polls of the topic.

max.poll.records is the maximum number of records returned to the consumer in a single poll of the topic.

The interval between polls is therefore not controlled by these two properties but by the time your consumer takes to process and acknowledge the fetched records.

For example, say a topic X has 1000 records in it, and the consumer takes 20 ms to acknowledge the fetched records. With max.poll.interval.ms = 100 and max.poll.records = 20, the consumer will poll the Kafka topic every 20 ms, and each poll will return at most 20 records. If the time taken to acknowledge the fetched records exceeds max.poll.interval.ms, the poll is considered failed and that particular batch will be re-polled from the Kafka topic.
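A minimal sketch of a plain consumer poll loop with these two settings, to make the behaviour concrete (bootstrap servers, group id, and the topic name are placeholders):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 20);       // at most 20 records per poll()
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 100);  // poll() must be called again within 100 ms

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("X")); // topic "X" from the example above
            while (true) {
                // Each poll() returns at most max.poll.records records; if processing
                // them takes longer than max.poll.interval.ms, the consumer is
                // considered failed and the group rebalances.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }
}
```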

Answered Sep 22 '22 by Daniccan


A KafkaConsumer (including the one used internally by KafkaStreams) reads records as fast as possible.

The parameters you mention can have an impact on performance, but you cannot control the actual data rate with them. Also note that max.poll.records only configures how many records poll() returns; it has no impact on client-broker communication. A KafkaConsumer can fetch more records when talking to the broker and then serve buffered messages from subsequent poll() calls as long as records remain in the buffer (i.e., in this case poll() is a client-side operation that mainly ensures you don't time out via max.poll.interval.ms). Thus, you might be more interested in fetch.max.bytes, which determines how many bytes are fetched from the broker per request. If you reduce this parameter, the consumer becomes less efficient and throughput should decrease (it's not recommended, though).
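A hedged sketch of shrinking the fetch size for the Streams-internal consumer (the byte values are purely illustrative, not recommendations):

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class SmallFetchConfig {
    public static Properties buildConfig() {
        Properties props = new Properties();
        // fetch.max.bytes caps the data returned per broker fetch request
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.FETCH_MAX_BYTES_CONFIG), 1024 * 1024);
        // max.partition.fetch.bytes caps the data returned per partition per fetch
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG), 256 * 1024);
        return props;
    }
}
```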

Another way to limit throughput is quotas (https://kafka.apache.org/documentation/#design_quotas). This is a broker-side configuration that allows you to limit the amount of data a client can read and/or write.
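As a sketch, assuming a broker and client version that supports the Admin API's alterClientQuotas (Kafka 2.6+; the client id and byte rate below are placeholders), a consumer byte-rate quota could be set programmatically like this; the kafka-configs.sh command-line tool can achieve the same thing:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class SetConsumerQuota {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            // Apply the quota to a specific client.id (placeholder name)
            ClientQuotaEntity entity = new ClientQuotaEntity(
                    Collections.singletonMap(ClientQuotaEntity.CLIENT_ID, "my-kstream-app"));
            // Limit this client to ~1 MB/s of consumed data
            ClientQuotaAlteration alteration = new ClientQuotaAlteration(
                    entity,
                    Collections.singletonList(
                            new ClientQuotaAlteration.Op("consumer_byte_rate", 1024.0 * 1024.0)));
            admin.alterClientQuotas(Collections.singletonList(alteration)).all().get();
        }
    }
}
```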

The best thing to do in Kafka Streams (and also when using a plain KafkaConsumer) is to throttle calls to poll() manually. For Kafka Streams, you can add a Thread.sleep() into any UDF. If you don't want to piggyback this onto an existing operator, you can just add a foreach() (or a peek()) with ephemeral state (i.e., a class member variable) that tracks the throughput and computes how much you need to sleep to throttle it accordingly, as sketched below.
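A minimal sketch of this idea, assuming a simple String-keyed pipeline (application id, bootstrap servers, topic names, and the 200 records/sec target are placeholders). The peek() stage, the non-terminal sibling of foreach(), holds the ephemeral counter and sleeps whenever the per-second budget is exceeded; sleeping blocks the stream thread and thereby slows down its internal poll() calls:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class ThrottledStream {

    // Ephemeral in-memory state (a plain field, as suggested above) used to pace consumption.
    static class RateLimiter {
        private final long maxPerSecond;
        private long windowStart = System.currentTimeMillis();
        private long seenInWindow = 0;

        RateLimiter(long maxPerSecond) { this.maxPerSecond = maxPerSecond; }

        synchronized void acquire() {
            long now = System.currentTimeMillis();
            if (now - windowStart >= 1000) {   // start a new one-second window
                windowStart = now;
                seenInWindow = 0;
            }
            seenInWindow++;
            if (seenInWindow > maxPerSecond) { // over budget: sleep until the window ends
                try {
                    Thread.sleep(1000 - (now - windowStart));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "throttled-app");      // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        RateLimiter limiter = new RateLimiter(200); // target ~200 records/sec (illustrative)

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");        // placeholder
        input.peek((key, value) -> limiter.acquire())  // sleeping here blocks the stream thread,
                                                       // which in turn slows down poll()
             .to("output-topic");                      // placeholder

        new KafkaStreams(builder.build(), props).start();
    }
}
```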

Answered Sep 22 '22 by Matthias J. Sax