I'm using the Kafka Schema Registry for producing and consuming Kafka messages. For example, I have two fields, both of string type; the pseudo schema is as follows:
{"name": "test1", "type": "string"}
{"name": "test2", "type": "string"}
But after producing and consuming for a while, I needed to modify the schema to change the second field to long type, and then it threw the following exception:
Schema being registered is incompatible with an earlier schema; error code: 409
I'm confused: if Schema Registry cannot handle schema upgrades/changes, then why should I use Schema Registry at all, or for that matter, why use Avro?
Schema Registry lives outside of and separately from your Kafka brokers. Your producers and consumers still talk to Kafka to publish and read data (messages) to topics. Concurrently, they can also talk to Schema Registry to send and retrieve schemas that describe the data models for the messages.
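For instance, you can query the registry directly to see what your producers have registered. A minimal sketch, assuming a registry at localhost:8081 and the default TopicNameStrategy, so a topic named Kafka gets the subject Kafka-value:
# List every subject known to the registry
curl http://localhost:8081/subjects
# Fetch the latest schema registered for the topic's value
curl http://localhost:8081/subjects/Kafka-value/versions/latest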
Even if you just want to test your application, it needs access to Confluent Schema Registry to work. This is a test project that shows how to test a Spring Kafka application without the need for Confluent Schema Registry.
Changing a field's type (and, similarly, renaming a field) is not allowed in BACKWARD compatibility mode. As a workaround, you can change the compatibility rules for the Schema Registry.
According to the docs:
The schema registry server can enforce certain compatibility rules when new schemas are registered in a subject. Currently, we support the following compatibility rules.
Backward compatibility (default): A new schema is backward compatible if it can be used to read the data written in all previous schemas. Backward compatibility is useful for loading data into systems like Hadoop since one can always query data of all versions using the latest schema.
Forward compatibility: A new schema is forward compatible if all previous schemas can read data written in this schema. Forward compatibility is useful for consumer applications that can only deal with data in a particular version that may not always be the latest version.
Full compatibility: A new schema is fully compatible if it’s both backward and forward compatible.
No compatibility: A new schema can be any schema as long as it’s a valid Avro.
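Before changing anything, you can check the level that is currently in force. A sketch against a local registry; a freshly started registry reports the default, BACKWARD:
# Check the global compatibility level
curl http://localhost:8081/config
{"compatibilityLevel":"BACKWARD"}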
Setting compatibility to NONE should do the trick.
# Update compatibility requirements globally
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
--data '{"compatibility": "NONE"}' \
http://localhost:8081/config
And the response should be
{"compatibility":"NONE"}
I generally discourage setting compatibility to NONE on a subject unless absolutely necessary.
If you need just the new schema and don't need the previous schemas in the Schema Registry, you can delete the older schemas as shown below:
I've tested this with confluent-kafka and it worked for me:
# Soft-delete all versions registered under the subject
curl -X DELETE http://localhost:8081/subjects/Kafka-value
# Or soft-delete only version 1
curl -X DELETE http://localhost:8081/subjects/Kafka-value/versions/1
# Or only the latest registered version
curl -X DELETE http://localhost:8081/subjects/Kafka-value/versions/latest
Ref: https://docs.confluent.io/platform/current/schema-registry/schema-deletion-guidelines.html
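To verify which versions exist for a subject before (or after) deleting, you can list them:
# List all registered versions for the subject
curl http://localhost:8081/subjects/Kafka-value/versions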
See https://docs.confluent.io/current/avro.html. You might need to add a "default": null.
You can also delete the existing schema and register the updated one.
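Once the old versions are gone (or compatibility has been relaxed), the updated schema can be registered again through the REST API. A minimal sketch, assuming the subject is Kafka-value and a record named test whose test2 field is now a long (the schema is passed as an escaped JSON string):
# Register the updated schema under the subject
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
--data '{"schema": "{\"type\": \"record\", \"name\": \"test\", \"fields\": [{\"name\": \"test1\", \"type\": \"string\"}, {\"name\": \"test2\", \"type\": \"long\"}]}"}' \
http://localhost:8081/subjects/Kafka-value/versions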
You can simply add a default value. Note that a null default requires the field type to be a nullable union, for example:
{"name": "test3", "type": ["null", "string"], "default": null}