Kafka Streams state store changelog topics are "compact" by default. Is it possible to set "compact,delete" with a retention policy for a state store?
To reset local state, you can use either of these methods: call the API method KafkaStreams#cleanUp() in your application code, or manually delete the corresponding local state directory (default location: /var/lib/kafka-streams/<application.id>).
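As a minimal sketch of the first method (the application id, bootstrap address, and topic names below are placeholders, not anything prescribed by the source):

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class LocalStateReset {
    public static void main(String[] args) {
        Properties props = new Properties();
        // "my-app" and the bootstrap address are placeholders for your own config.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        // cleanUp() wipes the local state directory for this application.id.
        // It may only be called while the instance is not running, i.e.
        // before start() or after close().
        streams.cleanUp();
        streams.start();
    }
}
```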
There are two cleanup policies: log.cleanup.policy=delete and log.cleanup.policy=compact. delete is the default for all user topics: with this policy configured for a topic, Kafka deletes events older than the configured retention time, and the default retention period is one week. compact instead keeps only the most recent event per key.
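For illustration, here is a hedged sketch of creating a topic with an explicit cleanup policy and a non-default retention via the AdminClient (the topic name and the 3-day retention are made-up values):

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateTopicWithRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // "events" is a placeholder topic name; 3 partitions, replication factor 1.
            NewTopic topic = new NewTopic("events", 3, (short) 1)
                    .configs(Map.of(
                            // delete is the default policy; shown explicitly here.
                            TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_DELETE,
                            // Retain events for 3 days instead of the default week.
                            TopicConfig.RETENTION_MS_CONFIG, String.valueOf(3L * 24 * 60 * 60 * 1000)));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```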
As in point 1, if you just have a producer producing messages, you don't need Kafka Streams. If you consume messages from one Kafka cluster but publish to topics on a different Kafka cluster, you can still use Kafka Streams, but you have to use a separate producer to publish the messages to the other cluster, because a single Streams application can only connect to one cluster.
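A minimal sketch of that cross-cluster pattern with plain consumer/producer clients (cluster addresses, topic names, and the group id are all hypothetical):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class CrossClusterForwarder {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster-a:9092"); // source cluster
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "forwarder");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster-b:9092"); // target cluster
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("source-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Forward each record to the topic on the other cluster.
                    producer.send(new ProducerRecord<>("target-topic", record.key(), record.value()));
                }
            }
        }
    }
}
```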
Just like relational OLTP (online transaction processing) platforms, Kafka can now act as a permanent database store for transactional data.
Yes, it is possible to configure topics with both retention and compaction ("compact,delete"), and Kafka Streams uses this setting by default for the changelog topics of windowed KTables.
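For example (a sketch; the topic and store names are made up), a windowed count is backed by a changelog topic that Kafka Streams creates with cleanup.policy=compact,delete, its retention derived from the store retention:

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.state.WindowStore;

public class WindowedCountExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        builder.stream("clicks", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
               // The backing changelog topic (named <application.id>-clicks-per-window-changelog)
               // gets cleanup.policy=compact,delete: old windows age out after the
               // store retention, while the latest value per key stays compacted.
               .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("clicks-per-window")
                       .withRetention(Duration.ofDays(2)));

        System.out.println(builder.build().describe());
    }
}
```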
If you really want to set this, you can update the corresponding changelog topic config manually after it is created.
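A hedged sketch of that manual update using the AdminClient (changelog topics follow the <application.id>-<store-name>-changelog naming convention; the names and the 7-day retention below are placeholders):

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.config.TopicConfig;

public class ChangelogConfigUpdate {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Changelog topics are named <application.id>-<store-name>-changelog.
            ConfigResource changelog = new ConfigResource(
                    ConfigResource.Type.TOPIC, "my-app-my-store-changelog");

            List<AlterConfigOp> ops = List.of(
                    new AlterConfigOp(new ConfigEntry(
                            TopicConfig.CLEANUP_POLICY_CONFIG, "compact,delete"),
                            AlterConfigOp.OpType.SET),
                    new AlterConfigOp(new ConfigEntry(
                            TopicConfig.RETENTION_MS_CONFIG,
                            String.valueOf(7L * 24 * 60 * 60 * 1000)),
                            AlterConfigOp.OpType.SET));

            admin.incrementalAlterConfigs(Map.of(changelog, ops)).all().get();
        }
    }
}
```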
However, setting a retention time on a changelog topic deletes the data only from the topic; the data is not deleted from the local state store. State stores don't offer TTL, and RocksDB's TTL feature cannot be enabled (for technical reasons that we hope to resolve eventually).
If you want to delete data cleanly, you should use tombstone messages that will delete the data from the store as well as the changelog topic (instead of using retention time).
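As a sketch (the topic name and key are placeholders), a tombstone is simply a record with a non-null key and a null value written to the table's input topic:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TombstoneExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A null value is a tombstone: a KTable reading this topic removes the
            // key from its state store, and log compaction eventually drops the
            // record from the changelog topic as well.
            producer.send(new ProducerRecord<>("table-input-topic", "user-42", null));
        }
    }
}
```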