I have a WPF app that continuously displays live images from a 5 MP digital camera. Each frame is converted from a byte array to a BitmapSource and then displayed in an Image control:
m_imageControl.Source = MyBitMapSource;
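For context, the per-frame conversion is roughly like the sketch below. The method name, pixel format, and stride are placeholders for what my app actually does; the real camera format differs (needs System.Windows.Media and System.Windows.Media.Imaging):

// Called for every frame delivered by the camera (runs on the UI thread).
// Assumes an 8-bit grayscale sensor; format and stride are illustrative only.
void OnFrameReceived(byte[] pixels, int width, int height)
{
    int stride = width;            // 1 byte per pixel for Gray8
    BitmapSource source = BitmapSource.Create(
        width, height,
        96, 96,                    // DPI
        PixelFormats.Gray8,
        null,                      // no palette
        pixels,
        stride);
    source.Freeze();               // make it immutable / cross-thread safe
    m_imageControl.Source = source;
}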
While streaming, CPU utilization was about 30-35%. "RenderCapability.Tier >> 16" reported Tier 2, so I assume my system is adequate and that WPF will use hardware rendering as much as possible.
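For reference, the tier check looks like this (RenderCapability.Tier stores the tier in the high 16 bits, hence the shift):

int renderingTier = System.Windows.Media.RenderCapability.Tier >> 16;
// 0 = no hardware acceleration, 1 = partial, 2 = most features hardware accelerated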
Interestingly, if software rendering was forced by using:
RenderOptions.ProcessRenderMode = System.Windows.Interop.RenderMode.SoftwareOnly;
CPU utilization fell to about 25% and the app reported the same displayed frame rate (no performance hit).
I am puzzled by this result, as I thought hardware rendering should give better performance (less CPU) than software rendering. Could someone shed some light on this?
My system set-up is:
Answer:
"Images are converted from byte array to BitmapSource first and then displayed in an Image control"
When you use hardware rendering, you are forcing WPF to serialize the new image to your GPU every frame. Normally, the GPU can cache the imagery, so the serialization overhead is smaller than the rendering overhead. In your case, though, you are throwing away the image on the GPU constantly, so there is no advantage to pushing it to the GPU and rendering in hardware.
By forcing software rendering, you avoid the push to the GPU every frame, which is why it's actually performing better in this case.
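As a side note (not part of the original answer), if you want hardware rendering to pay off, a common WPF technique is to create one WriteableBitmap, assign it to the Image control once, and overwrite its pixels each frame with WritePixels, so WPF can keep reusing the same surface instead of receiving a brand-new BitmapSource every frame. A minimal sketch, assuming a ~5 MP Gray8 sensor:

// Created once; size and pixel format are assumptions for a ~5 MP grayscale camera.
WriteableBitmap m_writeable = new WriteableBitmap(2592, 1944, 96, 96, PixelFormats.Gray8, null);
// m_imageControl.Source = m_writeable;   // assign once, not per frame

// Per frame, on the UI thread: overwrite the existing bitmap's pixels in place.
void UpdateFrame(byte[] pixels, int width, int height)
{
    var rect = new Int32Rect(0, 0, width, height);
    m_writeable.WritePixels(rect, pixels, width /* stride for Gray8 */, 0);
}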