We have a .NET application which our customers consider too large for mass deployment, and we would like to understand what contributes to our memory footprint and whether it is possible to do any better without completely abandoning .NET and WPF.
We are interested in improving both the total size and the private working set (PWS). In this question I just want to look at the PWS. VMMap typically reports a PWS of 105 MB. Of this, 11 MB is image, 31 MB is (native) heap, 52 MB is managed heap, 7 MB is private data, and the rest is stack, page tables, etc.
The largest prize here is the managed heap. We can account for approximately 8 MB of the managed heap directly within our own code, i.e. objects and windows we create and manage. The rest is presumably .NET objects created by the elements of the framework that we use.
What we would like to do is identify which elements of the framework account for what portion of this usage, and potentially re-architect our system to avoid their use where possible. Can anyone suggest how this investigation can be done?
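For rough per-feature attribution, one crude measurement we can do ourselves is to bracket the first use of a feature with GC.GetTotalMemory(true) and look at the surviving delta. A minimal sketch (the HeapCost class, the Measure helper and the byte[] stand-in are all illustrative, not our real code):

```csharp
using System;

class HeapCost
{
    static object _keepAlive; // holds allocations so a forced GC cannot reclaim them

    // Measure the managed-heap delta around the first use of a feature.
    // Long-lived allocations (framework caches, etc.) survive the forced
    // collections and show up in the delta; transient garbage does not.
    static long Measure(Action firstUse)
    {
        long before = GC.GetTotalMemory(true);
        firstUse();
        long after = GC.GetTotalMemory(true);
        return after - before;
    }

    static void Main()
    {
        // Illustrative stand-in: replace with the feature under test,
        // e.g. constructing the main window or a WCF ChannelFactory.
        long delta = Measure(() => _keepAlive = new byte[10 * 1024 * 1024]);
        Console.WriteLine("Managed heap delta: {0:N0} bytes", delta);
    }
}
```

This only measures the managed heap, and only what stays rooted after a full collection, but it is cheap enough to run per feature.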
Further clarification:
I have used a number of tools so far, including the excellent ANTS profilers and WinDbg with SOS, and they do allow me to see the objects in the managed heap, but what is of real interest here is not 'What?' but 'Why?'. Ideally I would like to be able to say, "Well, there are 10 MB of objects being created here because we use WCF. If we write our own native transport we could save 8 MB of that, with x quality risk and y development effort."
Running !gcroot on each of 300,000+ objects is not practical.
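For reference, the SOS workflow looks like this (the type name and address below are illustrative; .loadby sos clr is for .NET 4, use mscorwks on 2.0):

```text
$$ load SOS for the attached CLR
.loadby sos clr
$$ per-type histogram of the managed heap: instance counts and total sizes
!dumpheap -stat
$$ list the instances of one suspect type
!dumpheap -type System.Windows.Media.GlyphRun
$$ root chain for a single sampled instance -- fine for one object,
$$ hopeless for 300,000
!gcroot 02c8f1a4
```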
Update: a newer tool here is PerfView, which can show the reference tree and also diff heap snapshots.
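As I understand PerfView's command line (the process ID and file names below are placeholders), two snapshots plus the viewer's Diff menu get at the 'Why?':

```text
REM snapshot the managed heap of the running process (PID 1234)
PerfView HeapSnapshot 1234 before.gcdump
REM exercise the feature under suspicion, then snapshot again
PerfView HeapSnapshot 1234 after.gcdump
REM open both .gcdump files in PerfView and use the Diff menu in the
REM viewer; the stacks view gives a reference tree for each grown type
```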