As far as I understand after reading the Kafka Streams documentation, it is not possible to use it to stream data from only one partition of a given topic; you always have to read the whole topic.
Is that correct?
If so, are there any plans to provide such an option to the API in the future?
The default partitioner for the Java client uses a hash of the record's key to choose the partition, or a round-robin strategy if the record has no key.
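A minimal producer sketch illustrating that behaviour. The broker address and the topic/key names ("events", "user-42") are assumptions made up for the example, not anything Kafka defines:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PartitionerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keyed record: the default partitioner hashes "user-42", so every
            // record with this key lands on the same partition.
            producer.send(new ProducerRecord<>("events", "user-42", "clicked"));

            // Keyless record: the producer picks the partition itself
            // (round-robin in older clients, sticky batching in newer ones).
            producer.send(new ProducerRecord<>("events", null, "heartbeat"));

            // You can also pin a partition explicitly, bypassing the partitioner.
            producer.send(new ProducerRecord<>("events", 0, "user-42", "clicked"));
        }
    }
}
```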
From "Ordering Guarantee with Apache Kafka": "Apache Kafka preserves the order of messages within a partition. This means that if messages were sent from the producer in a specific order, the broker will write them to a partition in that order and all consumers will read them in that order."
Using the right partitioning strategies allows your application to handle terabytes of data at scale with minimal latency. A Kafka producer can write to different partitions in parallel, which generally means that it can achieve higher levels of throughput.
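As a sketch of what "the right partitioning strategy" can look like, here is a hypothetical custom `Partitioner` that reserves one partition for a single hot key and spreads all other keyed records by hash. The key `"big-customer"` and the routing rule are assumptions for illustration only:

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class HotKeyPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0; // keyless records: a real implementation would spread these out
        }
        // "big-customer" is an assumed example key; reserve the last partition for it.
        if ("big-customer".equals(key)) {
            return numPartitions - 1;
        }
        // Same murmur2-based hashing the default partitioner uses for keyed records.
        return Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}
```

You would register it on the producer with `props.put("partitioner.class", HotKeyPartitioner.class.getName());`.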
No, you can't do that, because the internal consumer subscribes to the topic as part of a consumer group, which is specified through the application-id, so the partitions are assigned automatically. By the way, why do you want to do that? Without re-balancing you lose the scalability feature provided by Kafka Streams: just by adding or removing instances of your streaming application you can scale the entire process, thanks to the re-balancing of partitions.
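A minimal Kafka Streams sketch showing how the application-id ties into this. The application id "my-streams-app", the broker address, and the topic "events" are assumed names for the example:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class SingleTopicApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The application.id doubles as the consumer group id: every instance
        // started with the same id joins the same group, and the topic's
        // partitions are divided among the instances automatically.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");      // assumed name
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // The topology subscribes to the whole topic; which partitions this
        // instance actually processes is decided by the group's rebalancing.
        builder.stream("events").foreach((k, v) -> System.out.println(k + " -> " + v));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Starting a second copy of this program with the same application.id does not duplicate the work; the group rebalances and each instance processes a subset of the partitions.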