How to mix record with map in Avro?

Tags:

avro

I'm dealing with server logs which are in JSON format, and I want to store my logs on AWS S3 in Parquet format (and Parquet requires an Avro schema). First, all logs have a common set of fields; second, all logs have a lot of optional fields which are not in the common set.

For example, the following are three logs:

{ "ip": "172.18.80.109", "timestamp": "2015-09-17T23:00:18.313Z", "message":"blahblahblah"}
{ "ip": "172.18.80.112", "timestamp": "2015-09-17T23:00:08.297Z", "message":"blahblahblah", "microseconds": 223}
{ "ip": "172.18.80.113", "timestamp": "2015-09-17T23:00:08.299Z", "message":"blahblahblah", "thread":"http-apr-8080-exec-1147"}

All three logs share 3 fields: ip, timestamp and message; some of the logs have additional fields, such as microseconds and thread.

If I use the following schema, then I will lose all the additional fields:

{"namespace": "example.avro",
 "type": "record",
 "name": "Log",
 "fields": [
     {"name": "ip", "type": "string"},
     {"name": "timestamp",  "type": "String"},
     {"name": "message", "type": "string"}
 ]
}

And the following schema works fine:

{"namespace": "example.avro",
 "type": "record",
 "name": "Log",
 "fields": [
     {"name": "ip", "type": "string"},
     {"name": "timestamp",  "type": "String"},
     {"name": "message", "type": "string"},
     {"name": "microseconds", "type": [null,long]},
     {"name": "thread", "type": [null,string]}
 ]
}
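
As a side note, I believe such optional union fields are usually declared with "null" first and an explicit "default": null, so that they simply fall back to null when absent:

     {"name": "microseconds", "type": ["null", "long"], "default": null},
     {"name": "thread", "type": ["null", "string"], "default": null}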

But the problem is that I don't know all the names of the optional fields unless I scan all the logs; besides, there will be new additional fields in the future.

Then I came up with an idea that combines record and map:

{"namespace": "example.avro",
 "type": "record",
 "name": "Log",
 "fields": [
     {"name": "ip", "type": "string"},
     {"name": "timestamp",  "type": "String"},
     {"name": "message", "type": "string"},
     {"type": "map", "values": "string"}  // error
 ]
}

Unfortunately this won't compile:

java -jar avro-tools-1.7.7.jar compile schema example.avro .

It throws an error:

Exception in thread "main" org.apache.avro.SchemaParseException: No field name: {"type":"map","values":"long"}
    at org.apache.avro.Schema.getRequiredText(Schema.java:1305)
    at org.apache.avro.Schema.parse(Schema.java:1192)
    at org.apache.avro.Schema$Parser.parse(Schema.java:965)
    at org.apache.avro.Schema$Parser.parse(Schema.java:932)
    at org.apache.avro.tool.SpecificCompilerTool.run(SpecificCompilerTool.java:73)
    at org.apache.avro.tool.Main.run(Main.java:84)
    at org.apache.avro.tool.Main.main(Main.java:73)

Is there a way to store JSON strings in Avro format that is flexible enough to deal with unknown optional fields?

Basically this is a schema evolution problem; Spark can deal with it by schema merging. I'm seeking a solution that works with Hadoop.

asked Sep 18 '15 by soulmachine


1 Answer

The map type is a "complex" type in Avro terminology. The snippet below works:

{
  "namespace": "example.avro",
  "type": "record",
  "name": "Log",
  "fields": [
    {"name": "ip", "type": "string"},
    {"name": "timestamp",  "type": "string"},
    {"name": "message", "type": "string"},
    {"name": "additional", "type": {"type": "map", "values": "string"}}
  ]
}
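
For example, with the Avro generic API you could put the three fixed fields into named fields and stuff every other JSON key into the additional map as strings. This is just a minimal sketch (the class name LogExample and the literal values are made up for illustration), not code from the question:

import java.util.HashMap;
import java.util.Map;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class LogExample {
    public static void main(String[] args) {
        // Same schema as above: three fixed fields plus a string map
        // named "additional" for everything else.
        String schemaJson =
            "{\"namespace\": \"example.avro\", \"type\": \"record\", \"name\": \"Log\", \"fields\": ["
            + "{\"name\": \"ip\", \"type\": \"string\"},"
            + "{\"name\": \"timestamp\", \"type\": \"string\"},"
            + "{\"name\": \"message\", \"type\": \"string\"},"
            + "{\"name\": \"additional\", \"type\": {\"type\": \"map\", \"values\": \"string\"}}"
            + "]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        // Build one record: the known fields go into named fields,
        // any extra key (e.g. "microseconds") goes into the map as a string.
        GenericRecord log = new GenericData.Record(schema);
        log.put("ip", "172.18.80.112");
        log.put("timestamp", "2015-09-17T23:00:08.297Z");
        log.put("message", "blahblahblah");

        Map<String, String> additional = new HashMap<>();
        additional.put("microseconds", "223");
        log.put("additional", additional);

        System.out.println(log);
    }
}

Note that all values in an Avro map must share a single type, so with "values": "string" any non-string extras (like microseconds) have to be stored as strings and converted back on read.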
answered Nov 26 '22 by oakad