
How can I send large messages with Kafka (over 15MB)?


You need to adjust three (or four) properties:

  • Consumer side: fetch.message.max.bytes - this will determine the largest size of a message that can be fetched by the consumer.
  • Broker side: replica.fetch.max.bytes - this allows the broker replicas to fetch messages from each other within the cluster and makes sure the messages are replicated correctly. If this is too small, the message will never be fully replicated, and therefore never committed, so the consumer will never see it.
  • Broker side: message.max.bytes - this is the largest size of the message that can be received by the broker from a producer.
  • Broker side (per topic): max.message.bytes - this is the largest size of the message the broker will allow to be appended to the topic. This size is validated pre-compression. (Defaults to broker's message.max.bytes.)

I found out the hard way about number 2 - you don't get ANY exceptions, messages, or warnings from Kafka, so be sure to consider this when you are sending large messages.
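The per-topic limit (number 4) can also be changed at runtime without a broker restart. Here is a minimal sketch using the Java AdminClient, which requires much newer clients and brokers (0.11+) than the rest of this answer assumes; the bootstrap address, topic name, and the ~15 MB value are illustrative:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

Properties adminProps = new Properties();
adminProps.put("bootstrap.servers", "localhost:9092"); // placeholder address

AdminClient admin = AdminClient.create(adminProps);
ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic"); // hypothetical topic
Config newConfig = new Config(Collections.singletonList(
    new ConfigEntry("max.message.bytes", "15728640"))); // ~15 MB, example value
admin.alterConfigs(Collections.singletonMap(topic, newConfig))
    .all().get(); // blocks until applied; throws InterruptedException/ExecutionException
admin.close();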


Minor changes required for Kafka 0.10 and the new consumer compared to laughing_man's answer:

  • Broker: No changes needed; you still have to increase the properties message.max.bytes and replica.fetch.max.bytes. message.max.bytes has to be equal to or smaller than(*) replica.fetch.max.bytes.
  • Producer: Increase max.request.size to send the larger message.
  • Consumer: Increase max.partition.fetch.bytes to receive larger messages.

(*) Read the comments to learn more about message.max.bytes<=replica.fetch.max.bytes
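A minimal sketch of those two client-side settings with the 0.10-style Java clients (the bootstrap address, group id, and the ~15 MB value are illustrative):

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;

// Producer: raise the maximum request size to allow ~15 MB messages.
Properties producerProps = new Properties();
producerProps.put("bootstrap.servers", "localhost:9092");
producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
producerProps.put("max.request.size", "15728640");
KafkaProducer<String, byte[]> producer = new KafkaProducer<>(producerProps);

// Consumer: raise the per-partition fetch ceiling to match.
Properties consumerProps = new Properties();
consumerProps.put("bootstrap.servers", "localhost:9092");
consumerProps.put("group.id", "big-messages");
consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
consumerProps.put("max.partition.fetch.bytes", "15728640");
KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(consumerProps);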


The answer from @laughing_man is quite accurate. But still, I want to add a recommendation I learned from Kafka expert Stephane Maarek. We actively applied this solution in our live systems.

Kafka isn’t meant to handle large messages.

Your API should use cloud storage (for example, AWS S3) and simply push a reference to the S3 object to Kafka or any other message broker. You'll need to find a place to save your data, whether that's a network drive or something else entirely, but it shouldn't be a message broker.
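As a minimal sketch of that pattern, assuming the AWS SDK v2 for Java and a KafkaProducer<String, String> configured elsewhere (the bucket name, topic name, and variable names are made up):

import java.util.UUID;
import org.apache.kafka.clients.producer.ProducerRecord;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

// Upload the large payload to S3, then publish only the reference through Kafka.
S3Client s3 = S3Client.create();
String key = UUID.randomUUID().toString();
s3.putObject(
    PutObjectRequest.builder().bucket("my-large-payloads").key(key).build(),
    RequestBody.fromBytes(largePayload)); // largePayload: the big message as a byte[]
producer.send(new ProducerRecord<>("events", key)); // consumers fetch the object by this key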

If you don't want to proceed with the recommended and reliable solution above:

The maximum message size in Apache Kafka is 1 MB (the setting in your brokers is called message.max.bytes). If you really need it badly, you could increase that size, and make sure to increase the network buffers for your producers and consumers as well.

And if you really do want to split your message, make sure each split part has the exact same key so that it gets pushed to the same partition, and include a "part id" in the message content so that your consumer can fully reconstruct the message (see the sketch below).
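A rough sketch of such a chunking producer (record headers require Kafka 0.11+; the chunk size, topic name, and header names are arbitrary choices for illustration; "payload" is the large byte[] and "producer" a KafkaProducer<String, byte[]> configured elsewhere):

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.UUID;
import org.apache.kafka.clients.producer.ProducerRecord;

// Every chunk shares the same key, so all parts land on the same partition in order;
// part-id/total-parts headers let the consumer reassemble the original message.
int chunkSize = 512 * 1024; // 512 KB per chunk, arbitrary
String messageKey = UUID.randomUUID().toString();
int totalParts = (payload.length + chunkSize - 1) / chunkSize;
for (int part = 0; part < totalParts; part++) {
    byte[] chunk = Arrays.copyOfRange(payload, part * chunkSize,
        Math.min((part + 1) * chunkSize, payload.length));
    ProducerRecord<String, byte[]> record =
        new ProducerRecord<>("chunked-topic", messageKey, chunk);
    record.headers().add("part-id", Integer.toString(part).getBytes(StandardCharsets.UTF_8));
    record.headers().add("total-parts", Integer.toString(totalParts).getBytes(StandardCharsets.UTF_8));
    producer.send(record);
}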

If the message is text-based, try to compress the data; that may reduce the size, but not magically.
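Compression is a single producer setting; for example, on a producer Properties object like the one sketched earlier (gzip is an arbitrary choice; snappy and lz4 are also supported):

// Enable producer-side compression; brokers and consumers handle it transparently.
producerProps.put("compression.type", "gzip");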

Again, you have to use an external system to store that data and just push an external reference to Kafka. That is a very common and widely accepted architecture, and the one you should go with.

Keep in mind that Kafka works best when messages are huge in number, not in size.

Source: https://www.quora.com/How-do-I-send-Large-messages-80-MB-in-Kafka


The idea is that the same message size has to be supported all the way from the Kafka Producer to the Kafka Broker and on to the Kafka Consumer, i.e.

Kafka producer --> Kafka Broker --> Kafka Consumer

Suppose the requirement is to send a 15 MB message; then the Producer, the Broker, and the Consumer, all three, need to be in sync.

Kafka Producer sends 15 MB --> Kafka Broker Allows/Stores 15 MB --> Kafka Consumer receives 15 MB

The setting therefore should be:

a) on Broker:

message.max.bytes=15728640 
replica.fetch.max.bytes=15728640

b) on Consumer:

fetch.message.max.bytes=15728640
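For the producer leg of that diagram, the Java producer's own request cap (max.request.size, which defaults to 1 MB) also has to be raised, for example ("producerProps" here stands for the producer's configuration Properties object):

// c) on Producer: the request cap must match the 15 MB goal as well.
producerProps.put("max.request.size", "15728640");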

You need to override the following properties:

Broker Configs($KAFKA_HOME/config/server.properties)

  • replica.fetch.max.bytes
  • message.max.bytes

Consumer Configs ($KAFKA_HOME/config/consumer.properties)
This step didn't work for me; I added it to the consumer app instead and it worked fine (see the sketch after this list).

  • fetch.message.max.bytes
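A minimal sketch of doing that in the consumer application (0.8-era high-level consumer, matching the documentation linked below; connection details are placeholders):

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

// Set the fetch limit directly on the consumer's own configuration.
Properties props = new Properties();
props.put("zookeeper.connect", "localhost:2181"); // placeholder
props.put("group.id", "my-group");                // placeholder
props.put("fetch.message.max.bytes", "15728640");
ConsumerConnector consumer =
    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));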

Restart the server.

Look at this documentation for more info: http://kafka.apache.org/08/configuration.html