I'm currently working on a project using Java Spring Boot with Apache Kafka. We have multiple microservices communicating through Kafka, and the system is designed to process around 100,000 events (logs) per second (EPS), with each message around 3 KB. However, I'm facing a significant performance issue. I've tried several Kafka and Spring Boot configuration optimizations, but the problem persists.
One critical observation:
When I include the Kafka producer logic in my service, the overall performance drops drastically. But when I comment out the Kafka producer code, the processing becomes very fast. This clearly points to the Kafka producer being a bottleneck, but I'm not sure what specifically is causing the issue — whether it's improper configuration, blocking I/O, synchronous sending, or something else.
Has anyone experienced a similar issue? Any help or insights would be greatly appreciated.
This is my Kafka producer configuration:
KafkaProducerConfig.java
@Bean(name = "packetData")
public KafkaProducer<String, PacketData> packetDataKafkaProducer() {
    Map<String, Object> config = new HashMap<>();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaURLConfiguration.getKafkaURL());
    config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    config.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 600000);
    config.put(ProducerConfig.BATCH_SIZE_CONFIG, 200000);
    config.put(ProducerConfig.LINGER_MS_CONFIG, 10);
    config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, COMPRESSION_TYPE);
    // The serializer instances passed here take precedence over the serializer classes set above.
    return new KafkaProducer<>(config, new StringSerializer(), new JsonSerializer<>());
}
ProducerService.java
ProducerRecord<String, PacketData> record = new ProducerRecord<>(driver.getCollectionTopic(), packetData);
kafkaProducer.send(record);
Two main factors and another related one, as I see it:
config.put(ProducerConfig.BATCH_SIZE_CONFIG, 200_000);
That is a very high batch size (200 KB per partition batch). Unless your messages are extremely large, batches will rarely fill, and an oversized batch setting wastes producer buffer memory and can add latency while records accumulate.
Try lowering it to a more typical value like 32 KB:
config.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);
or leave it at the default, which is 16 KB (16384 bytes).
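As a back-of-envelope check (assuming the ~3 KB message size from the question), here is roughly how many records each batch setting can hold:

```java
public class BatchSizeCheck {
    public static void main(String[] args) {
        // Assumption from the question: each serialized message is about 3 KB.
        int messageBytes = 3 * 1024;
        int currentBatch = 200_000;   // the questioner's batch.size
        int suggestedBatch = 32 * 1024;

        // Integer division gives messages per full batch.
        System.out.println(currentBatch / messageBytes);   // prints 65
        System.out.println(suggestedBatch / messageBytes); // prints 10
    }
}
```

So at 3 KB per record, even the smaller 32 KB batch still amortizes the per-request overhead across ~10 records, while the 200 KB setting reserves far more buffer memory per partition than it typically uses.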
config.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 600000); // 10 minutes
This is far too high for a high-throughput, low-latency system; the default is already 2 minutes (120000 ms). Set it closer to 2–5 seconds (2000–5000 ms), for example. Note that delivery.timeout.ms must be at least linger.ms + request.timeout.ms (request.timeout.ms defaults to 30 seconds), so lower request.timeout.ms accordingly if you go this low.
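Putting the two suggestions together, a sketch of the adjusted config (reusing the names from the question; the exact numbers are tuning starting points, not guaranteed optima):

```java
// Sketch only: suggested values for ~100k EPS with ~3 KB messages.
Map<String, Object> config = new HashMap<>();
config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaURLConfiguration.getKafkaURL());
config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
config.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);          // 32 KB instead of 200 KB
config.put(ProducerConfig.LINGER_MS_CONFIG, 10);                  // unchanged
config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, COMPRESSION_TYPE); // questioner's constant
// delivery.timeout.ms must be >= linger.ms + request.timeout.ms,
// so shrink request.timeout.ms along with the delivery timeout.
config.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 3000);
config.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 5000);
```

Benchmark before and after the change; batch.size and linger.ms interact, so tune them together against your actual message sizes.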
Manually creating a KafkaProducer in Spring Boot bypasses the conveniences and optimizations that Spring provides through KafkaTemplate. It means you're responsible for the producer's lifecycle, error handling, and thread safety yourself, and it's easy to introduce inefficiencies or threading issues, especially under heavy load.
From the official Spring documentation:
"KafkaTemplate provides high-level operations to send data to Kafka topics. It is the preferred way to interact with Kafka from Spring Boot applications, managing producers and handling exceptions internally."
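A minimal sketch of the KafkaTemplate-based equivalent, assuming spring-kafka is on the classpath (PacketData is from the question; the bootstrap address is a placeholder for your kafkaURLConfiguration.getKafkaURL()):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

@Configuration
public class PacketDataKafkaConfig {

    @Bean
    public ProducerFactory<String, PacketData> packetDataProducerFactory() {
        Map<String, Object> config = new HashMap<>();
        // Placeholder: wire in kafkaURLConfiguration.getKafkaURL() here.
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        config.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);
        config.put(ProducerConfig.LINGER_MS_CONFIG, 10);
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, PacketData> packetDataKafkaTemplate() {
        // The factory caches and reuses the underlying producer across sends.
        return new KafkaTemplate<>(packetDataProducerFactory());
    }
}
```

Sending is then a one-liner: kafkaTemplate.send(driver.getCollectionTopic(), packetData). In spring-kafka 3.x send() returns a CompletableFuture, so attach whenComplete(...) for error logging instead of calling get(), which would turn every send into a blocking call and recreate the bottleneck.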
https://docs.spring.io/spring-kafka/docs/3.0.0/reference/html/
https://medium.com/@AlexanderObregon/integrating-spring-boot-with-kafka-for-messaging-8bd07e76b038