I have some JSON data that looks like this:
{
    "id": 1998983092,
    "name": "Test Name 1",
    "type": "search string",
    "creationDate": "2017-06-06T13:49:15.091+0000",
    "lastModificationDate": "2017-06-28T14:53:19.698+0000",
    "lastModifiedUsername": "[email protected]",
    "lockedQuery": false,
    "lockedByUsername": null
}
I am able to add the lockedQuery null value to a GenericRecord object without issue.
GenericRecord record = new GenericData.Record(schema);
if (json.isNull("lockedQuery")) {
    record.put("lockedQuery", null);
}
However, later when I attempt to write that GenericRecord object to an Avro file, I get a NullPointerException.
File file = new File("~/test.avro");
DatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<>(schema);
DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<>(datumWriter);
dataFileWriter.create(schema, file);
for (GenericRecord record : masterList) {
    dataFileWriter.append(record); // NULL POINTER HERE
}
When I run that code I get the following exception. Any tips on how to write a null value into an Avro file would be much appreciated. Thanks in advance.
Exception in thread "main" java.lang.RuntimeException: org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.NullPointerException: null of boolean in field lockedQuery of com.mydomain.test1.domain.MyAvroRecord
    at com.mydomain.avro.App.main(App.java:198)
Caused by: org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.NullPointerException: null of boolean in field lockedQuery of com.mydomain.test1.domain.MyAvroRecord
    at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
EDIT: here is the MyAvroRecord class:
public class MyAvroRecord {
    long id;
    String name;
    String type;
    Date timestamp;
    Date lastModificationDate;
    String lastModifiedUsername;
    Boolean lockedQuery;
}
To be able to set an Avro field to null, you must allow it in the Avro schema by adding null as one of the possible types of the field. Take a look at this example from the Avro documentation:
{
    "type": "record",
    "name": "MyRecord",
    "fields": [
        {"name": "userId", "type": "long"},              // mandatory field
        {"name": "userName", "type": ["null", "string"]} // optional field
    ]
}
Here userName is declared as a union type which can be either null or string. This kind of definition allows the userName field to be set to null. By contrast, userId can only contain long values, so attempting to set userId to null will result in a NullPointerException.
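Applied to the record in the question, here is a minimal, self-contained sketch (assuming the standard Avro Java API; the schema is trimmed to two fields for brevity, and the class name is illustrative) showing lockedQuery declared as a ["null", "boolean"] union and a record with a null value written successfully:

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DatumWriter;

import java.io.File;
import java.io.IOException;

public class NullableFieldExample {
    public static void main(String[] args) throws IOException {
        // Trimmed-down schema for illustration: lockedQuery is a union of
        // null and boolean, so a null value for that field is legal.
        String schemaJson = "{"
                + "\"type\": \"record\", \"name\": \"MyAvroRecord\", \"fields\": ["
                + "  {\"name\": \"id\", \"type\": \"long\"},"
                + "  {\"name\": \"lockedQuery\", \"type\": [\"null\", \"boolean\"], \"default\": null}"
                + "]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        GenericRecord record = new GenericData.Record(schema);
        record.put("id", 1998983092L);
        record.put("lockedQuery", null); // allowed because of the ["null", "boolean"] union

        DatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<>(schema);
        try (DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<>(datumWriter)) {
            dataFileWriter.create(schema, new File("test.avro"));
            dataFileWriter.append(record); // no NullPointerException with the union type
        }
    }
}

With a plain "boolean" declaration, as implied by the original schema, the same append call fails with "null of boolean in field lockedQuery", because a non-union boolean field cannot hold a null.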