For a topic with multiple partitions:
1) Does a single Spring Boot instance use multiple threads (in the method annotated with @StreamListener) to process messages from each partition?
2) Is it possible to configure more than one thread per partition, or is that something I would have to hand off manually from my listener thread to a worker pool?
Spring Cloud Stream is a framework for building highly scalable event-driven microservices connected with shared messaging systems.
The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic. The consumer group maps directly to the same Apache Kafka concept, and partitioning maps directly to Apache Kafka partitions.
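For concreteness, here is a minimal sketch of such a binding using the annotation-based programming model the question refers to; the binding name `input` (from `Sink.INPUT`), the topic `orders`, and the group `order-service` are illustrative assumptions, not values taken from the question.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// Assumed application.properties (binding name "input" comes from Sink.INPUT):
//   spring.cloud.stream.bindings.input.destination=orders      <- maps to the Kafka topic
//   spring.cloud.stream.bindings.input.group=order-service     <- maps to the Kafka consumer group
@SpringBootApplication
@EnableBinding(Sink.class)
public class OrderListenerApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderListenerApplication.class, args);
    }

    // Invoked by the binder's listener container for each record polled
    // from the partitions assigned to this instance.
    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        System.out.println("Received: " + payload);
    }
}
```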
The `...consumer.concurrency` binding property controls the number of threads (default 1).
The partitions are distributed across the threads; if you have 20 partitions and 4 threads, each thread gets 5 partitions.
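One way to observe that distribution is to log the partition and thread name for each record. The sketch below assumes a listener class living in the same application as the earlier example; the binding name, topic, group, and concurrency value are assumptions.

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;

// Assumed properties (binding name "input" is an assumption):
//   spring.cloud.stream.bindings.input.destination=orders
//   spring.cloud.stream.bindings.input.group=order-service
//   spring.cloud.stream.bindings.input.consumer.concurrency=4
@EnableBinding(Sink.class)
public class PartitionThreadLogger {

    // Printing the partition and the consuming thread shows how the binder
    // spreads the assigned partitions across the configured listener threads.
    @StreamListener(Sink.INPUT)
    public void handle(@Payload String payload,
                       @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
        System.out.printf("partition=%d thread=%s payload=%s%n",
                partition, Thread.currentThread().getName(), payload);
    }
}
```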
You need to have at least as many partitions as the aggregate concurrency across all instances. (If you have 2 app instances and 5 threads each, you need at least 10 partitions).
You should not distribute messages from a single partition across multiple threads; the offset is committed as soon as you hand off to the new thread, which can cause message loss.
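To illustrate the pitfall behind question 2, here is a hedged sketch of handing records off to a worker pool (the pool size and class names are assumptions). With default acknowledgment behavior, the container treats a record as processed once the listener method returns, regardless of what the worker thread later does.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@EnableBinding(Sink.class)
public class HandOffListener {

    // Illustrative worker pool; the size is an arbitrary assumption.
    private final ExecutorService workers = Executors.newFixedThreadPool(8);

    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        // The listener container considers the record processed as soon as
        // this method returns, and its offset becomes eligible for commit,
        // even though the worker thread may not have finished (or may fail).
        // A crash at that point means the record is never reprocessed,
        // i.e. it is effectively lost.
        workers.submit(() -> process(payload));
    }

    private void process(String payload) {
        // ... actual work happens on a worker thread ...
    }
}
```

The safer route, as described above, is to add partitions and raise `concurrency` rather than fan a single partition's records out across threads.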
You should always err on the side of having more partitions than the concurrency you need.