I have a C# application that needs to deserialize many thousands of protobuf messages per second. In the interest of avoiding unnecessary garbage collections, I'm wondering if there is a way to use pre-allocated memory so that each deserialization operation wouldn't need to allocate new memory.
What I envision is that I would allocate a pool of message objects prior to execution, and then instruct the protobuf code to use the next available message from this pool for each deserialization.
Does this functionality exist, or is there some other way to optimize memory use in this scenario?
Thanks!
Yes, there is! Internally, protobuf-net already uses a micro-pool to avoid allocating too many working buffers, but if you are pushing enough objects through that GC is an issue, you can use your own allocation scheme and supply a custom object factory; this cannot currently be specified via the attributes, but it can be applied via the type-model:
```csharp
RuntimeTypeModel.Default.Add(typeof(Foo), true).SetFactory(factory);
```
where `factory` is either:

- the name of a `static` method on `Foo` (i.e. `"CreateFoo"`) that returns a `Foo`, or
- a `MethodInfo` of any `static` method (it does not need to be on `Foo`) that returns a `Foo`

(both forms are sketched just below)
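For illustration only, registering a factory each of these two ways might look something like the following, assuming the `Foo` type used throughout this answer; the `SerializerSetup` and `MessagePool` names are placeholders invented for this sketch:

```csharp
using ProtoBuf.Meta;

static class SerializerSetup
{
    public static void Configure()
    {
        // register Foo against the default model, applying the normal attribute behaviour
        MetaType meta = RuntimeTypeModel.Default.Add(typeof(Foo), true);

        // option 1: the name of a static method declared on Foo itself
        meta.SetFactory("CreateFoo");

        // option 2 (an alternative to option 1): a MethodInfo of any static
        // method that returns a Foo - here on a hypothetical MessagePool class
        // meta.SetFactory(typeof(MessagePool).GetMethod("CreateFoo"));
    }
}
```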
In either case, the method can use the same signatures as callbacks, so it can be parameterless or can accept context information. For example:
```csharp
public static Foo CreateFoo() {
    return GetFromYourOwnMicroPool();
}
```
Note that in this usage, the factory is expected to reset the object to a vanilla state; protobuf-net will not attempt to do this. Note also that protobuf-net does not currently expose its micro-pool as a reusable component, but you could re-use the source easily enough.
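As a minimal sketch of what such a factory plus reset might look like: the `MessagePool`, `Reset`, and `Return` names and the `Id`/`Name` members are invented for this example, and the pool here is just a `ConcurrentBag` rather than anything protobuf-net provides.

```csharp
using System.Collections.Concurrent;
using ProtoBuf;

[ProtoContract]
public class Foo
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }

    // clear everything back to defaults so a recycled instance starts vanilla
    public void Reset()
    {
        Id = 0;
        Name = null;
    }

    // the factory registered via SetFactory; mirrors the example above
    public static Foo CreateFoo()
    {
        return MessagePool.GetFromYourOwnMicroPool();
    }
}

public static class MessagePool
{
    private static readonly ConcurrentBag<Foo> pool = new ConcurrentBag<Foo>();

    // hand out a recycled instance, reset to a vanilla state,
    // since protobuf-net will not reset it for us
    public static Foo GetFromYourOwnMicroPool()
    {
        Foo foo;
        if (pool.TryTake(out foo))
        {
            foo.Reset();
            return foo;
        }
        return new Foo();
    }

    // return a message to the pool once you have finished processing it
    public static void Return(Foo foo)
    {
        pool.Add(foo);
    }
}
```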
This functionality was specifically added to support a user with very high throughput who wanted to remove even the slightest GC overheads (based on lots of measurements... they sent me pretty graphs and everything ;p)
Additionally: with the exception of the root object, protobuf-net supports `struct` values without boxing; so if you have a complex/nested object model, another option in extreme cases is to look at `struct`s.
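A sketch of that idea, with member names invented for this example; the nested value is stored inline in its parent rather than as a separate heap object, while the root object remains a class:

```csharp
using ProtoBuf;

// invented example: a value-type member avoids a separate heap allocation
// per message for the nested data (the root object must still be a class)
[ProtoContract]
public struct Point
{
    [ProtoMember(1)] public int X;
    [ProtoMember(2)] public int Y;
}

[ProtoContract]
public class Reading
{
    [ProtoMember(1)] public Point Position;   // stored inline; no boxing
    [ProtoMember(2)] public long Timestamp;
}
```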