 

How to decide Kafka Cluster size

I am trying to decide how many nodes should be present in a Kafka cluster, but I am not sure which parameters to take into consideration. I am sure it has to be >= 3 (with a replication factor of 2 and a failure tolerance of 1 node).

Can someone tell me which parameters should be kept in mind while deciding the cluster size, and how they affect the size?

I know of the following factors, but I don't know how they quantitatively affect the cluster size (I only understand how they affect it qualitatively). Is there any other parameter that affects cluster size?

  1. Replication factor (cluster size >= replication factor)
  2. Node failure tolerance (cluster size >= node failures + 1)

What should the cluster size be for the following scenario, taking all the parameters into consideration?

  1. There are 3 topics.
  2. Each topic has messages of different sizes. The message size ranges from 10 to 500 KB, with an average message size of 50 KB.
  3. Each topic has a different number of partitions: 10, 100 and 500.
  4. The retention period is 7 days.
  5. 100 million messages get posted every day for each topic.
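For a rough back-of-the-envelope storage estimate based on those numbers (my own illustrative sketch; the replication factor of 2 and the usable disk per broker are assumptions, not something the question fixes):

    # Rough storage sizing sketch (assumed: replication factor 2, average
    # message size 50 KB, no compression, index/overhead ignored).
    avg_message_kb   = 50
    messages_per_day = 100_000_000
    topics           = 3
    retention_days   = 7
    replication      = 2          # assumed, matching "failure tolerance of 1 node"

    daily_gb_per_topic = avg_message_kb * messages_per_day / 1024 / 1024   # ~4,768 GB/day
    total_gb = daily_gb_per_topic * topics * retention_days * replication  # ~200,000 GB (~196 TB)

    usable_disk_per_broker_tb = 10   # assumed usable log disk per broker
    brokers_for_storage = total_gb / 1024 / usable_disk_per_broker_tb      # ~19.6 -> ~20 brokers
    print(round(daily_gb_per_topic), round(total_gb), round(brokers_for_storage, 1))

With numbers like these, raw storage (plus the network needed to re-replicate it when a broker fails) often ends up dictating a far larger cluster than the replication-factor / failure-tolerance lower bound of 3.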

Can someone please point me to relevant documentation, or to any blog post that discusses this? I have searched Google for it, but to no avail.

asked Jan 13 '15 by puneet
1 Answer

As I understand it, getting good throughput from Kafka doesn't depend only on the cluster size; there are other configurations which need to be considered as well. I will try to share as much as I can.

Kafka's throughput is supposed to scale linearly with the number of disks you have. The multiple data directories feature introduced in Kafka 0.8 allows a topic's partitions to be spread across different disks and machines. However, as the partition count grows very large, the leader election process gets slower, and consumer rebalancing is also affected. This is something to consider, and it could become a bottleneck.
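To put your scenario into that perspective, here is a quick count of partition replicas per broker (again just an illustrative sketch; the broker count and replication factor are assumptions of mine):

    # Partition replicas per broker for the question's scenario
    # (assumed: replication factor 2 and a hypothetical 6-broker cluster).
    partitions_per_topic = [10, 100, 500]
    replication = 2
    brokers = 6   # assumed cluster size, purely for illustration

    total_partitions    = sum(partitions_per_topic)       # 610
    total_replicas      = total_partitions * replication  # 1220
    replicas_per_broker = total_replicas / brokers        # ~203
    leaders_per_broker  = total_partitions / brokers      # ~102

    print(total_partitions, total_replicas, replicas_per_broker, leaders_per_broker)

Every one of those leaders has to be re-elected when its broker goes down, and every replica means open file handles and fetch traffic, which is why very high partition counts can slow down failover and consumer rebalancing.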

Another key thing is the disk flush rate. Kafka always writes all data to the filesystem immediately, so the more often data is flushed to disk, the more "seek-bound" Kafka becomes, and the lower the throughput. Conversely, a very low flush rate can cause different problems, because the amount of data to be flushed in one go becomes large. So providing an exact figure is not very practical, and I think that is the reason you couldn't find such a direct answer in the Kafka documentation.
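If you do want to experiment with flush behaviour, the topic-level configs flush.messages and flush.ms are the knobs involved. A minimal sketch using kafka-python's admin client (the topic name, broker address and values are made up for illustration; by default Kafka leaves flushing to the OS, which is usually what you want):

    from kafka.admin import KafkaAdminClient, NewTopic

    # Illustrative only: force an fsync every 50,000 messages or every 1 s,
    # whichever comes first, on a hypothetical topic.
    admin = KafkaAdminClient(bootstrap_servers="broker1:9092")
    topic = NewTopic(
        name="events",                  # hypothetical topic name
        num_partitions=100,
        replication_factor=2,
        topic_configs={
            "flush.messages": "50000",  # messages per partition between fsyncs
            "flush.ms": "1000",         # maximum time between fsyncs
        },
    )
    admin.create_topics([topic])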

There are other factors too, for example the consumer's fetch size, compression, the batch size for asynchronous producers, socket buffer sizes, etc.
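As a concrete, purely illustrative example of those client-side knobs (shown here with the kafka-python client; the values are starting points to benchmark against, not recommendations):

    from kafka import KafkaProducer, KafkaConsumer

    # Producer: batching and compression trade a little latency for throughput.
    producer = KafkaProducer(
        bootstrap_servers="broker1:9092",
        compression_type="gzip",      # or "snappy" / "lz4"
        batch_size=64 * 1024,         # bytes per partition batch
        linger_ms=50,                 # wait up to 50 ms to fill a batch
        acks=1,                       # leader-only ack; use "all" for durability
    )

    # Consumer: fetch sizes and socket buffers must cover your largest messages.
    consumer = KafkaConsumer(
        "events",                     # hypothetical topic name
        bootstrap_servers="broker1:9092",
        max_partition_fetch_bytes=1024 * 1024,  # must exceed max message size (~500 KB here)
        receive_buffer_bytes=256 * 1024,
        send_buffer_bytes=128 * 1024,
    )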

Hardware and OS also play a key role here: using Kafka in a Linux-based environment is advisable because of its page cache mechanism for writing data to disk. Read more on this here.

You might also want to take a look at how the OS flush behaviour plays a key role before you actually tune it to fit your needs. I believe it is key to understand the design philosophy, which is what makes Kafka so effective in terms of throughput and fault tolerance.

Some more resources I find useful to dig into:

  • https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines
  • http://blog.liveramp.com/2013/04/08/kafka-0-8-producer-performance-2/
  • https://grey-boundary.io/load-testing-apache-kafka-on-aws/
  • https://cwiki.apache.org/confluence/display/KAFKA/Performance+testing
answered by user2720864