We use Kafka as our message queue. Our business requires that message timestamps be in the same order as the offsets, that is: for any two messages m1 and m2 in the same partition, if m1.offset < m2.offset then m1.timestamp <= m2.timestamp. Does Kafka guarantee this?
Defining the Kafka consumer offset
The consumer offset tracks the position a consumer has reached in each partition of a topic. Keeping track of this offset, or position, is important for nearly all Kafka use cases and can be an absolute necessity in some, such as financial services.
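As a minimal sketch of what "keeping track of the offset" looks like in practice, the consumer below disables auto-commit and only commits its position after processing each batch. The topic name "payments", group id "payments-audit", and bootstrap server are placeholders, not anything from the question.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetTrackingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "payments-audit");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Disable auto-commit so the offset only advances after processing succeeds.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("payments"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Each record carries both its offset and its timestamp.
                    System.out.printf("partition=%d offset=%d timestamp=%d value=%s%n",
                            record.partition(), record.offset(), record.timestamp(), record.value());
                }
                // Commit the current position (last processed offset + 1) per assigned partition.
                consumer.commitSync();
            }
        }
    }
}
```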
Timestamps drive the action in Kafka Streams
The default TimestampExtractor implementation is FailOnInvalidTimestamp, which throws an exception whenever a record's timestamp is negative.
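If failing on a bad timestamp is not what you want, Kafka Streams lets you swap the extractor. Here is a minimal configuration sketch using the built-in LogAndSkipOnInvalidTimestamp, which drops such records with a warning instead of throwing; the application id and server address are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestamp;

public class StreamsTimestampConfig {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ordering-check-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The default is FailOnInvalidTimestamp; override it with a more lenient extractor.
        props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                  LogAndSkipOnInvalidTimestamp.class.getName());
        return props;
    }
}
```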
Offset in Kafka
The offset is the position, within a partition, of the next message to be delivered to a consumer: a simple integer that Kafka uses to maintain the consumer's current position.
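Because the offset is just "the position of the next record", a consumer can move it directly. A small sketch, again with placeholder topic and partition values:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SeekExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("payments", 0);
            consumer.assign(List.of(tp));
            consumer.seek(tp, 42L);                     // next record to fetch is offset 42
            System.out.println(consumer.position(tp));  // prints 42
            consumer.poll(Duration.ofMillis(500));      // fetching resumes from offset 42
        }
    }
}
```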
Kafka also provides ordering guarantees, which come mainly from partitioning and from the fact that each partition is an append-only, immutable log. Events are written to a given partition in the order they were sent, and consumers read them back in that same order.
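A short producer sketch of that per-partition guarantee: records sent with the same key hash to the same partition and are appended in send order. The topic "payments" and key "account-42" are placeholders; enabling idempotence is one way to ensure retries cannot reorder records within a partition.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Idempotent producer: retries cannot reorder or duplicate records within a partition.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 3; i++) {
                // Same key => same partition => append order matches send order.
                producer.send(new ProducerRecord<>("payments", "account-42", "event-" + i));
            }
            producer.flush();
        }
    }
}
```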
It depends on the timestamp type in use; there are two:

CreateTime - the timestamp is assigned when the producer record is created, i.e. before sending. Sends can be retried, so there is no guarantee that timestamp order matches offset order.

LogAppendTime - the timestamp is assigned when the record is appended to the log on the broker. In that case, per-partition timestamp ordering is preserved, although multiple messages may be assigned the same timestamp.

By default, CreateTime is used. To change this, set log.message.timestamp.type for the broker or message.timestamp.type for a particular topic.
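As a sketch of the topic-level change, the snippet below sets message.timestamp.type=LogAppendTime on an existing topic with the AdminClient; the topic name "payments" and server address are placeholders.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class UseLogAppendTime {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "payments");
            // Broker-assigned timestamps are non-decreasing within each partition.
            AlterConfigOp setTimestampType = new AlterConfigOp(
                    new ConfigEntry("message.timestamp.type", "LogAppendTime"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(setTimestampType)))
                 .all().get();
        }
    }
}
```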