Does Avro schema evolution require access to both old and new schemas?

If I serialize an object using a schema version 1, and later update the schema to version 2 (say by adding a field) - am I required to use schema version 1 when later deserializing the object? Ideally I would like to just use schema version 2 and have the deserialized object have the default value for the field that was added to the schema after the object was originally serialized.

Maybe some code will explain better...

schema1:

{"type": "record",
 "name": "User",
 "fields": [
  {"name": "firstName", "type": "string"}
 ]}

schema2:

{"type": "record",
 "name": "User",
 "fields": [
  {"name": "firstName", "type": "string"},
  {"name": "lastName", "type": "string", "default": ""}
 ]}

using the generic non-code-generation approach:

// serialize
ByteArrayOutputStream out = new ByteArrayOutputStream();
Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<GenericRecord>(schema1);
GenericRecord datum = new GenericData.Record(schema1);
datum.put("firstName", "Jack");
writer.write(datum, encoder);
encoder.flush();
out.close();
byte[] bytes = out.toByteArray();

// deserialize
// I would like to not have any reference to schema1 below here
DatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema2);
Decoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
GenericRecord result = reader.read(null, decoder);

This results in an EOFException. Using the jsonEncoder instead results in an AvroTypeException.

I know it will work if I pass both schema1 and schema2 to the GenericDatumReader constructor, but I'd like to not have to keep a repository of all previous schemas and also somehow keep track of which schema was used to serialize each particular object.
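(For reference, the two-schema variant that does work looks roughly like this, with schema1 as the writer's schema and schema2 as the reader's schema:)

// deserialize with both schemas: schema1 = writer's schema, schema2 = reader's schema
DatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema1, schema2);
Decoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
GenericRecord result = reader.read(null, decoder);
// result.get("lastName") is filled in with the default value from schema2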

I also tried the code-gen approach, first serializing to a file using the User class generated from schema1:

User user = new User();
user.setFirstName("Jack");
DatumWriter<User> writer = new SpecificDatumWriter<User>(User.class);
FileOutputStream out = new FileOutputStream("user.avro");
Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
writer.write(user, encoder);
encoder.flush();
out.close();

Then updating the schema to version 2, regenerating the User class, and attempting to read the file:

DatumReader<User> reader = new SpecificDatumReader<User>(User.class);
FileInputStream in = new FileInputStream("user.avro");
Decoder decoder = DecoderFactory.get().binaryDecoder(in, null);
User user = reader.read(null, decoder);

but it also results in an EOFException.
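(Again, it works if I keep the version-1 schema around and hand the reader both schemas — something like the sketch below, where user_v1.avsc is a hypothetical saved copy of schema1:)

// reading with both schemas: the old writer's schema plus the regenerated class's schema
Schema oldSchema = new Schema.Parser().parse(new File("user_v1.avsc")); // hypothetical copy of schema1
DatumReader<User> reader = new SpecificDatumReader<User>(oldSchema, User.SCHEMA$);
FileInputStream in = new FileInputStream("user.avro");
Decoder decoder = DecoderFactory.get().binaryDecoder(in, null);
User user = reader.read(null, decoder);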

Just for comparison's sake, what I'm trying to do seems to work with protobufs...

format:

option java_outer_classname = "UserProto";
message User {
    optional string first_name = 1;
}

serialize:

UserProto.User.Builder user = UserProto.User.newBuilder();
user.setFirstName("Jack");
FileOutputStream out = new FileOutputStream("user.data");
user.build().writeTo(out);

add optional last_name to format, regen UserProto, and deserialize:

FileInputStream in = new FileInputStream("user.data");
UserProto.User user = UserProto.User.parseFrom(in);

as expected, user.getLastName() is the empty string.

Can something like this be done with Avro?

asked Aug 28 '12 by bils



2 Answers

Avro and Protocol Buffers have different approaches to handling versioning, and which approach is better depends on your use case.

In Protocol Buffers you have to explicitly tag every field with a number, and those numbers are stored along with the fields' values in the binary representation. Thus, as long as you never change the meaning of a number in a subsequent schema version, you can still decode a record encoded in a different schema version. If the decoder sees a tag number that it doesn't recognise, it can simply skip it.

Avro takes a different approach: there are no tag numbers, instead the binary layout is completely determined by the program doing the encoding — this is the writer's schema. (A record's fields are simply stored one after another in the binary encoding, without any tagging or separator, and the order is determined by the writer's schema.) This makes the encoding more compact, and saves you from having to manually maintain tags in the schema. But it does mean that for reading, you have to know the exact schema with which the data was written, or you won't be able to make sense of it.

Given that the writer's schema is essential for decoding Avro data, the reader's schema is a layer of niceness on top of it. If you're doing code generation in a program that needs to read Avro data, you can do the codegen off the reader's schema, which saves you from having to regenerate the code every time the writer's schema changes (assuming it changes in a way that can be resolved). But it doesn't save you from having to know the writer's schema.

Pros & Cons

Avro's approach is good in an environment where you have lots of records that are known to have the exact same schema version, because you can just include the schema in the metadata at the beginning of the file, and know that the next million records can all be decoded using that schema. This happens a lot in a MapReduce context, which explains why Avro came out of the Hadoop project.
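A rough sketch of that pattern with Avro's object container files, using the schemas from the question (the writer's schema ends up in the file header, so only the current reader's schema has to be supplied at read time):

// write: schema1 (the writer's schema) is embedded in the file header
DataFileWriter<GenericRecord> fileWriter =
    new DataFileWriter<GenericRecord>(new GenericDatumWriter<GenericRecord>(schema1));
fileWriter.create(schema1, new File("users.avro"));
fileWriter.append(datum);
fileWriter.close();

// read: supply only schema2 (the reader's schema); the writer's schema is taken from the file
DataFileReader<GenericRecord> fileReader = new DataFileReader<GenericRecord>(
    new File("users.avro"), new GenericDatumReader<GenericRecord>(schema2));
GenericRecord record = fileReader.next();   // record.get("lastName") resolves to the default ""
fileReader.close();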

Protocol Buffers' approach is probably better for RPC, where individual objects are being sent over the network (as request parameters or return values). If you use Avro here, you may have different clients and different servers all with different schema versions, so you'd have to tag every binary-encoded blob with the Avro schema version it's using, and maintain a registry of schemas. At that point you might as well have used Protocol Buffers' built-in tagging.
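One possible (purely hypothetical) sketch of that tagging: prefix each encoded blob with a fingerprint of the writer's schema, and look the schema up again on the reading side.

// writer side: prefix the blob with an 8-byte fingerprint of the writer's schema
long fp = SchemaNormalization.parsingFingerprint64(schema1);
ByteArrayOutputStream out = new ByteArrayOutputStream();
new DataOutputStream(out).writeLong(fp);
Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
new GenericDatumWriter<GenericRecord>(schema1).write(datum, encoder);
encoder.flush();
byte[] tagged = out.toByteArray();

// reader side: recover the fingerprint, look the writer's schema up in some registry
// (here just a hypothetical Map<Long, Schema> schemasByFingerprint),
// and resolve against the current reader's schema
DataInputStream in = new DataInputStream(new ByteArrayInputStream(tagged));
Schema writerSchema = schemasByFingerprint.get(in.readLong());
Decoder decoder = DecoderFactory.get().binaryDecoder(in, null);
GenericRecord result =
    new GenericDatumReader<GenericRecord>(writerSchema, schema2).read(null, decoder);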

answered Sep 23 '22 by Martin Kleppmann


To do what you are trying to do, you need to make the lastName field optional by allowing null values: its type should be ["null", "string"] instead of "string".
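In other words, schema2 would look something like this (note that with a union the default has to match the first branch of the union, so it becomes null rather than ""):

{"type": "record",
 "name": "User",
 "fields": [
  {"name": "firstName", "type": "string"},
  {"name": "lastName", "type": ["null", "string"], "default": null}
 ]}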

answered Sep 19 '22 by user2348700