Kafka Schema Registry error: Failed to write Noop record to kafka store

Tags: apache-kafka

I am trying to start the Kafka Schema Registry but I get the following error: Failed to write Noop record to kafka store. The stack trace is below. I checked the connections to Zookeeper and the Kafka brokers - everything is fine, and I can post messages to Kafka. I tried deleting the _schemas topic and even reinstalling Kafka, but the issue keeps happening. Yesterday everything was working fine, but today, after restarting my vagrant box, this issue came up. Is there anything I can do about it? Thanks
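For reference, this is roughly how I verified connectivity (the Zookeeper address matches kafkastore.connection.url in the config below; broker.mesos:9092 is just a stand-in for my actual broker address):

# List topics via Zookeeper - confirms Zookeeper is reachable
kafka-topics.sh --list --zookeeper master.mesos:2181

# Produce a test message - confirms the broker accepts writes
echo "hello" | kafka-console-producer.sh --broker-list broker.mesos:9092 --topic test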

[2015-11-19 19:12:25,904] INFO SchemaRegistryConfig values: 
master.eligibility = true
port = 8081
kafkastore.timeout.ms = 500
kafkastore.init.timeout.ms = 60000
debug = false
kafkastore.zk.session.timeout.ms = 30000
request.logger.name = io.confluent.rest-utils.requests
metrics.sample.window.ms = 30000
schema.registry.zk.namespace = schema_registry
kafkastore.topic = _schemas
avro.compatibility.level = none
shutdown.graceful.ms = 1000
response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
metrics.jmx.prefix = kafka.schema.registry
host.name = 12bac2a9529f
metric.reporters = []
kafkastore.commit.interval.ms = -1
kafkastore.connection.url = master.mesos:2181
metrics.num.samples = 2
response.mediatype.default = application/vnd.schemaregistry.v1+json
kafkastore.topic.replication.factor = 3
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)

[2015-11-19 19:12:26,535] INFO Initialized the consumer offset to -1        (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:87)
[2015-11-19 19:12:27,167] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic.   (io.confluent.kafka.schemaregistry.storage.KafkaStore:172)
[2015-11-19 19:12:27,262] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2015-11-19 19:13:27,350] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
at io.confluent.rest.Application.createServer(Application.java:104)
at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:151)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:363)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:220)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:149)
... 5 more
asked Nov 19 '15 by eugened


1 Answer

The error message is misleading. As recommended by other developers in other posts, I would suggest the following:

1) Ensure that Zookeeper is running (check the log files and that the process is active).
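A minimal health check, assuming Zookeeper runs on master.mesos:2181 as in the config above:

# Zookeeper answers "imok" to the ruok four-letter command when healthy
echo ruok | nc master.mesos 2181

# "stat" prints the version, mode (standalone/leader/follower) and client count
echo stat | nc master.mesos 2181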

2) Ensure that the various nodes in your Kafka cluster can communicate with one another (telnet to each host and port).
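For example (replace the broker hostname with your actual addresses; 9092 is only the default Kafka port):

# -vz makes nc report whether the port is reachable without sending data
nc -vz broker1.example.com 9092
nc -vz master.mesos 2181

telnet <host> <port> works equally well if nc is not installed.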

3) If both 1 and 2 are fine, I do not recommend creating another topic (like _schema2, as suggested by some people in other posts) and updating kafkastore.topic in the Schema Registry configuration file to point to the new topic. Instead:
3.1) Stop the processes (Zookeeper, Kafka server).
3.2) Clean up the data in the Zookeeper data directory.
3.3) Restart Zookeeper, the Kafka server, and finally the Schema Registry service (it should work!).
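A sketch of step 3 as shell commands. The start/stop scripts below are the ones shipped with the Confluent distribution, and the data directory is only a common default - check dataDir in your zoo.cfg and adapt to your installation:

# 3.1) Stop the services (schema registry first, then kafka, then zookeeper)
schema-registry-stop
kafka-server-stop
zookeeper-server-stop

# 3.2) Clean the Zookeeper data directory.
# WARNING: this wipes ALL Zookeeper state, including Kafka's cluster metadata.
rm -rf /var/lib/zookeeper/version-2

# 3.3) Restart in order: zookeeper, kafka, then the schema registry
zookeeper-server-start /etc/kafka/zookeeper.properties &
kafka-server-start /etc/kafka/server.properties &
schema-registry-start /etc/schema-registry/schema-registry.properties &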

P.S.: If you do try to create another topic, you might get stuck when trying to consume data from a Kafka topic. (This happened to me, and it took me a few hours to figure out!)

answered Oct 03 '22 by Nithya