 

Intermittent WARN ConsumerCoordinator We received an assignment that doesn't match our current subscription

Why am I hitting the below WARN message intermittently when starting my Kafka Streams application?

It's more than a warning because it floods the application logs and the Kafka Streams app doesn't start.

Typically when I do a re-deploy it then works.

[my-app-0 my-app] 2020-03-25 14:00:12.931 WARN 1 --- [-StreamThread-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=my-app-b8f0b2a0-271b-4499-85bd-9e22d4a8b4b1-StreamThread-1-consumer, groupId=my-app] We received an assignment [topic-one-0, topic-two-0] that doesn't match our current subscription Subscribe(topic-two); it is likely that the subscription has changed since we joined the group. Will try re-join the group with current subscription

After I restart the app the above WARN goes away and I get a different WARN, but at least the app works!

[my-app-0 my-app] 2020-03-25 14:05:54.300 WARN 1 --- [-StreamThread-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=my-app-b0f22dc1-479b-4f7c-a862-b20f70eedc35-StreamThread-1-consumer, groupId=my-app] The following subscribed topics are not assigned to any members: [topic-one]

Asked by DarVar on Dec 22 '22
1 Answer

The first message indicates that a consumer was assigned a partition from a topic it did not subscribe to. This can happen if you spin up multiple application instances using the same application.id (and thus group.id) but the applications subscribe to different topics. This is not allowed in Kafka Streams: all instances with the same application.id need to subscribe to the exact same topics and execute the exact same topology.
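As a hypothetical illustration (file names, topic names, and broker address are made up), this is the kind of misconfiguration that can trigger the first warning: two deployments sharing an application.id while their topologies read different topics.

```properties
# instance-a.properties -- first deployment
application.id=my-app            # application.id doubles as the consumer group.id
bootstrap.servers=broker:9092
# this instance's topology consumes topic-one and topic-two

# instance-b.properties -- second deployment
application.id=my-app            # SAME application.id, so SAME consumer group ...
bootstrap.servers=broker:9092
# ... but this instance's topology consumes only topic-two:
# the group's assignment can now include topic-one partitions that
# this instance never subscribed to, producing the WARN above
```

If the two deployments are really different applications, give each its own application.id; if they are meant to be instances of the same application, make sure they run the exact same topology.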

The second message indicates that a consumer within the group did subscribe to a topic, but some of that topic's partitions were not assigned to any consumer in the group. This can happen due to incremental rebalancing (as introduced in Kafka Streams 2.4): before a topic partition is reassigned to a different client, it is first only unassigned from the old client so that the client can clean up its resources. In a consecutive rebalance the topic partition should then be assigned to the new client. Hence, as long as the WARN does not persist (i.e., repeat over multiple rebalances) and a consecutive rebalance happens, this is expected behavior.

Answered by Matthias J. Sax on May 11 '23