To solve my type mismatch problem discussed in this thread, I created custom Deserializers and added them to the ObjectMapper. However, performance deteriorates significantly with this: with the default deserializer I see 1-2 garbage collection calls in logcat, while with the custom deserializer there are at least 7-8 GC calls, and processing time also increases significantly.
My Deserializer:
import java.io.IOException;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class Deserializer<T> {

    public JsonDeserializer<T> getDeserializer(final Class<T> cls) {
        return new JsonDeserializer<T>() {
            @Override
            public T deserialize(JsonParser jp, DeserializationContext ctxt) throws IOException, JsonProcessingException {
                // Reads the whole value into an intermediate tree, then binds it
                // with a freshly created ObjectMapper on every call
                JsonNode node = jp.readValueAsTree();
                if (node.isObject()) {
                    return new ObjectMapper().convertValue(node, cls);
                }
                return null;
            }
        };
    }
}
And I am using this class to attach the deserializer to an ObjectMapper (a usage sketch follows the class):
import com.fasterxml.jackson.core.Version;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;

public class DeserializerAttachedMapper<T> {

    public ObjectMapper getMapperAttachedWith(final Class<T> cls, JsonDeserializer<T> deserializer) {
        ObjectMapper mapper = new ObjectMapper();
        // Module name is derived from the deserializer instance's toString()
        SimpleModule module = new SimpleModule(deserializer.toString(), new Version(1, 0, 0, null, null, null));
        module.addDeserializer(cls, deserializer);
        mapper.registerModule(module);
        return mapper;
    }
}
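For reference, wiring the two helpers together would presumably look something like this (a sketch of the intended usage, not code from the original post):
// Hypothetical usage of the two helper classes above
JsonDeserializer<User> userDeserializer = new Deserializer<User>().getDeserializer(User.class);
ObjectMapper mapper = new DeserializerAttachedMapper<User>().getMapperAttachedWith(User.class, userDeserializer);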
EDIT: Added extra data
My JSON is of considerable size but not huge: I have pasted it here.
Now, for parsing the same JSON, if I use this code:
String response = ConnectionManager.doGet(mAuthType, url, authToken);
FLog.d("location object response" + response);
// SimpleModule module = new SimpleModule("UserModule", new Version(1, 0, 0, null, null, null));
// JsonDeserializer<User> userDeserializer = new Deserializer<User>().getDeserializer(User.class);
// module.addDeserializer(User.class, userDeserializer);
ObjectMapper mapper = new ObjectMapper();
// mapper.registerModule(module);
JsonNode tree = mapper.readTree(response);
Integer code = Integer.parseInt(tree.get("code").asText().trim());
if (Constants.API_RESPONSE_SUCCESS_CODE == code) {
    ExploreLocationObject locationObject = mapper.convertValue(tree.path("response").get("locationObject"), ExploreLocationObject.class);
    FLog.d("locationObject" + locationObject);
    FLog.d("locationObject events" + locationObject.getEvents().size());
    return locationObject;
}
return null;
Then my logcat looks like this.
But if I use this code for the same JSON:
String response = ConnectionManager.doGet(mAuthType, url, authToken);
FLog.d("location object response" + response);
SimpleModule module = new SimpleModule("UserModule", new Version(1, 0, 0, null, null, null));
JsonDeserializer<User> userDeserializer = new Deserializer<User>().getDeserializer(User.class);
module.addDeserializer(User.class, userDeserializer);
ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(module);
JsonNode tree = mapper.readTree(response);
Integer code = Integer.parseInt(tree.get("code").asText().trim());
if (Constants.API_RESPONSE_SUCCESS_CODE == code) {
    ExploreLocationObject locationObject = mapper.convertValue(tree.path("response").get("locationObject"), ExploreLocationObject.class);
    FLog.d("locationObject" + locationObject);
    FLog.d("locationObject events" + locationObject.getEvents().size());
    return locationObject;
}
return null;
Then my logcat looks like this.
How big is the object? Your code first builds a tree model (a sort of DOM tree), and that will take something like 3x-5x as much memory as the original document, so I assume your input is a huge JSON document.
You can definitely write a more efficient version using the Streaming API. Something like:
JsonParser jp = mapper.getJsonFactory().createJsonParser(input);
JsonToken t = jp.nextToken();
if (t == JsonToken.START_OBJECT) {
    // Bind directly from the parser; no intermediate tree is built
    return mapper.readValue(jp, classToBindTo);
}
return null;
It is also possible to implement this with data binding (as a JsonDeserializer), but it gets a bit complicated, simply because you want to delegate to the "default" deserializer. To do this, you would need to implement BeanDeserializerModifier and replace the standard deserializer when modifyDeserializer is called: your own code can retain a reference to the original deserializer and delegate to it, instead of going through an intermediate tree model.
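For illustration, a minimal, untested sketch of that delegation pattern could look like the following, assuming Jackson 2.x and the User class from the question; the class names DelegatingUserModule and DelegatingUserDeserializer are made up here, not part of the original answer:
import java.io.IOException;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.Version;
import com.fasterxml.jackson.databind.BeanDescription;
import com.fasterxml.jackson.databind.DeserializationConfig;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.deser.BeanDeserializerModifier;
import com.fasterxml.jackson.databind.deser.ResolvableDeserializer;
import com.fasterxml.jackson.databind.deser.std.StdDeserializer;
import com.fasterxml.jackson.databind.module.SimpleModule;

// Hypothetical module: wraps the default User deserializer instead of
// building an intermediate tree (class names are illustrative only).
public class DelegatingUserModule extends SimpleModule {

    public DelegatingUserModule() {
        super("DelegatingUserModule", new Version(1, 0, 0, null, null, null));
    }

    @Override
    public void setupModule(SetupContext context) {
        super.setupModule(context);
        context.addBeanDeserializerModifier(new BeanDeserializerModifier() {
            @Override
            public JsonDeserializer<?> modifyDeserializer(DeserializationConfig config,
                    BeanDescription beanDesc, JsonDeserializer<?> deserializer) {
                // Only wrap the bean type we care about; everything else keeps
                // its standard deserializer
                if (beanDesc.getBeanClass() == User.class) {
                    return new DelegatingUserDeserializer(deserializer);
                }
                return deserializer;
            }
        });
    }

    static class DelegatingUserDeserializer extends StdDeserializer<User>
            implements ResolvableDeserializer {

        private final JsonDeserializer<?> defaultDeserializer;

        DelegatingUserDeserializer(JsonDeserializer<?> defaultDeserializer) {
            super(User.class);
            this.defaultDeserializer = defaultDeserializer;
        }

        @Override
        public User deserialize(JsonParser jp, DeserializationContext ctxt) throws IOException {
            // Any custom handling would go here; the actual binding is delegated
            // to the default deserializer, so no JsonNode tree is materialized
            return (User) defaultDeserializer.deserialize(jp, ctxt);
        }

        @Override
        public void resolve(DeserializationContext ctxt) throws JsonMappingException {
            // The wrapped (default) deserializer may need to resolve its own
            // property deserializers before it can be used
            if (defaultDeserializer instanceof ResolvableDeserializer) {
                ((ResolvableDeserializer) defaultDeserializer).resolve(ctxt);
            }
        }
    }
}
Registering it would then be a single mapper.registerModule(new DelegatingUserModule()) call, and User values would be bound by the default bean deserializer without a tree being built first.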