Is it possible in Spring Kafka to configure the number of partitions for a specific topic, so that org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory.setConcurrency(Integer)
can be used effectively to run consumers on that topic in parallel and speed up message consumption and processing? If so, could you please show an example of how it can be done?
If you want to change the number of partitions or replicas of your Kafka topic, you can use a streaming transformation to automatically stream all of the messages from the original topic into a new Kafka topic that has the desired number of partitions or replicas.
For most implementations, a common rule of thumb is 10 partitions per topic and 10,000 partitions per Kafka cluster. Going beyond those numbers can require additional monitoring and optimization.
A rough formula for picking the number of partitions is based on throughput. Measure the throughput you can achieve on a single partition for production (call it p) and for consumption (call it c). If your target throughput is t, then you need at least max(t/p, t/c) partitions.
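To make the formula concrete, here is a small sketch that computes the minimum partition count from the formula above. The throughput numbers (p = 20 MB/s, c = 25 MB/s, t = 100 MB/s) are made-up values for illustration:

```java
public class PartitionEstimate {

    // Minimum partitions per the rule of thumb: at least max(t/p, t/c),
    // rounded up to a whole partition.
    static int minPartitions(double t, double p, double c) {
        return (int) Math.ceil(Math.max(t / p, t / c));
    }

    public static void main(String[] args) {
        // Hypothetical numbers: p = 20 MB/s, c = 25 MB/s, target t = 100 MB/s.
        // max(100/20, 100/25) = max(5, 4) = 5 partitions.
        System.out.println(minPartitions(100, 20, 25)); // prints 5
    }
}
```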
No. If you want to use fewer partitions, delete the corresponding topic, create another one, and specify the desired number of partitions.
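The delete-and-recreate step can be scripted with Kafka's AdminClient (org.apache.kafka.clients.admin). This is a sketch only; the broker address, topic name, and partition/replica counts are assumptions, and deleting the topic discards all of its messages:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class RecreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Delete the old topic; all of its data is lost.
            admin.deleteTopics(List.of("foo")).all().get();
            // Recreate it with the desired number of partitions.
            admin.createTopics(List.of(new NewTopic("foo", 5, (short) 2)))
                 .all().get();
        }
    }
}
```

Note that the broker must have delete.topic.enable=true for the deletion to take effect.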
See Configuring Topics.
@Bean
public NewTopic topic1() {
return new NewTopic("foo", 10, (short) 2);
}
will create a topic foo with 10 partitions and a replication factor of 2 (if there is a KafkaAdmin bean in the application context). Spring Boot auto-configures a KafkaAdmin @Bean.
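To consume that topic in parallel, the container factory's concurrency can be set up to the partition count; each consumer thread is then assigned one or more partitions. The following is a sketch rather than part of the original answer; the consumer factory wiring, group id, and listener method are assumptions:

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // With 10 partitions, up to 10 consumers are useful;
    // any consumers beyond the partition count sit idle.
    factory.setConcurrency(10);
    return factory;
}

@KafkaListener(topics = "foo", groupId = "foo-group")
public void listen(String message) {
    // Invoked concurrently, one thread per assigned partition set.
    System.out.println(message);
}
```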