 

Experiment on displaying a Bitmap retrieved from a camera on a PictureBox

In my code I retrieve frames from a camera through a pointer to an unmanaged object, run some calculations on them and then display the result on a PictureBox control.
Before I go further with the full application and all its details, I want to be sure that the base code for this process is sound. In particular I would like to:
- keep execution time minimal and avoid unnecessary operations, such as copying more images than necessary; I want to keep only essential operations
- understand whether a delay in the per-frame calculations could have detrimental effects on how images are shown (i.e. something other than the expected frame is displayed) or cause frames to be skipped
- prevent more serious errors, such as those due to memory or thread management, or to the image display itself.
For this purpose I set up a few experimental lines of code (below), but I am not able to explain the results I observed. If you have the OpenCV binaries you can try it yourself.

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using System.Threading;

public partial class FormX : Form
{
private delegate void setImageCallback();
Bitmap _bmp;
Bitmap _bmp_draw;
volatile bool _exit;   // written from the UI thread, read from the camera thread
double _x;
IntPtr _ImgBuffer;

bool buffercopy;
bool copyBitmap;
bool refresh;

public FormX()
{
    InitializeComponent();
    _x = 10.1;

    // set experimental parameters
    buffercopy = false;
    copyBitmap = false;
    refresh = true;
}

private void buttonStart_Click(object sender, EventArgs e)
{
    Thread camThread = new Thread(new ThreadStart(Cycle));
    camThread.Start();
}

private void buttonStop_Click(object sender, EventArgs e)
{
    _exit = true;
}

private void Cycle()
{
    _ImgBuffer = IntPtr.Zero;
    _exit = false;

    IntPtr vcap = cvCreateCameraCapture(0);
    while (!_exit)
    {
        IntPtr frame = cvQueryFrame(vcap);

        if (buffercopy)
        {
            UnmanageCopy(frame);
            _bmp = SharedBitmap(_ImgBuffer);
        }
        else
        { _bmp = SharedBitmap(frame); }

        // make calculations
        int N = 1000000; // iteration count: increase to simulate heavier per-frame processing
        for (int i = 0; i < N; i++)
            _x = Math.Sin(0.999999 * _x);

        ShowFrame();
    }

    cvReleaseImage(ref _ImgBuffer);
    cvReleaseCapture(ref vcap);
}


private void ShowFrame()
{
    if (pbCam.InvokeRequired)
    {
        this.Invoke(new setImageCallback(ShowFrame));
    }
    else
    {
        if (copyBitmap)
        {
            if (_bmp_draw != null) _bmp_draw.Dispose();
            //_bmp_draw = new Bitmap(_bmp); // deep copy
            _bmp_draw = _bmp.Clone(new Rectangle(0, 0, _bmp.Width, _bmp.Height), _bmp.PixelFormat);
        }
        else
        { _bmp_draw = _bmp;  // add reference to the same object
        }

        // dispose GDI objects after use, otherwise a Pen/Font/Brush leaks on every frame
        using (Graphics g = Graphics.FromImage(_bmp_draw))
        using (Pen rectanglePen = new Pen(Color.Azure, 3))
        using (Font drawFont = new Font("Arial", 56))
        using (SolidBrush drawBrush = new SolidBrush(Color.Red))
        {
            String drawString = _x.ToString();
            g.DrawString(drawString, drawFont, drawBrush, new PointF(10.0F, 10.0F));
            g.DrawString(drawString, drawFont, drawBrush, new PointF(10.0F, 300.0F));
            g.DrawRectangle(rectanglePen, 12, 12, 200, 400);
        }

        pbCam.Image = _bmp_draw;
        if (refresh) pbCam.Refresh();
    }
}

public void UnmanageCopy(IntPtr f)
{
    if (_ImgBuffer == IntPtr.Zero)
        _ImgBuffer = cvCloneImage(f);
    else
        cvCopy(f, _ImgBuffer, IntPtr.Zero);
}

// only works with 3 channel images from camera! (to keep code minimal)
public Bitmap SharedBitmap(IntPtr ipl)
{
    // gets unmanaged data from pointer to IplImage:
    IntPtr scan0;
    int step;
    Size size;
    cvGetRawData(ipl, out scan0, out step, out size); // P/Invoke declared below in this class
    return new Bitmap(size.Width, size.Height, step, PixelFormat.Format24bppRgb, scan0);
}

// based on older version of OpenCv. Change dll name if different
[DllImport( "opencv_highgui246", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr cvCreateCameraCapture(int index);

[DllImport("opencv_highgui246", CallingConvention = CallingConvention.Cdecl)]
public static extern void cvReleaseCapture(ref IntPtr capture);

[DllImport("opencv_highgui246", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr cvQueryFrame(IntPtr capture);

[DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
public static extern void cvGetRawData(IntPtr arr, out IntPtr data, out int step, out Size roiSize);

[DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
public static extern void cvCopy(IntPtr src, IntPtr dst, IntPtr mask);

[DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr cvCloneImage(IntPtr src);

[DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
public static extern void cvReleaseImage(ref IntPtr image);
}

Results [Core 2 Duo T6600, 2.2 GHz, dual core]:

A. buffercopy = false; copyBitmap = false; refresh = false;
This is the simplest configuration. Each frame is retrieved in turn, some work is done (in the real application it operates on the frame itself; here it is just calculations), then the result of the calculations is drawn on top of the image, and finally the image is displayed on a PictureBox.
OpenCv documentation says:

OpenCV 1.x functions cvRetrieveFrame and cv.RetrieveFrame return image stored inside the video capturing structure. It is not allowed to modify or release the image! You can copy the frame using cvCloneImage() and then do whatever you want with the copy.

But this doesn’t prevent us from experimenting.
If the calculations are not intense (a low number of iterations N), everything works fine, and the fact that we manipulate the image buffer owned by the unmanaged frame grabber doesn’t cause problems here.
Probably the advice to leave the buffer untouched is meant for people who would modify its structure (not its values) or operate on it asynchronously without realizing it. Here we retrieve frames and modify their content strictly in turn.
If N is increased (N = 1000000 or more), and the frame rate is not high (for example with artificial light and low exposure), everything seems fine at first, but after a while the video lags and the graphics drawn on it blink. With a higher frame rate the blinking appears from the beginning, even while the video is still fluid.
Is this because the mechanism that displays images on the control (or refreshes it, or something else) is somehow asynchronous, so that while the PictureBox is reading its data buffer the camera modifies it, wiping out the graphics?
Or is there some other reason?
And why does the image lag in that way? I would expect the delay due to the calculations either to cause the frames delivered by the camera while a calculation is still running to be skipped, de facto only reducing the frame rate; or, alternatively, that all frames are queued and the calculation delay eventually makes the system process images captured minutes earlier, with the queue growing over time.
Instead, the observed behaviour seems to be a hybrid of the two: there is a delay of a few seconds, but it doesn’t seem to grow much as the capture goes on.

B. buffercopy = true; copyBitmap = false; refresh = false;
Here I make a deep copy of the buffer into a second buffer, following the advice of the OpenCV documentation.
Nothing changes. The second buffer keeps the same memory address for the whole run.

C. buffercopy = false; copyBitmap = true; refresh = false;
Now a deep copy of the bitmap is made, allocating new memory each time.
The blinking effect is gone, but the lag still builds up after a certain time.

D. buffercopy = false; copyBitmap = false; refresh = true;
As before.

Please help me explain these results!

asked Apr 26 '15 by Giuseppe Dini

1 Answer

If I may be so frank, it is a bit tedious to understand all the details of your questions, but let me make a few points to help you analyse your results.

In case A, you say you perform calculations directly on the buffer. The documentation says you shouldn't do this, so if you do, you can expect undefined results. OpenCV assumes you won't touch it, so it might do stuff like suddenly delete that part of memory, let some other app process it, etc. It might look like it works, but you can never know for sure, so don't do it *slaps your wrist*. In particular, if your processing takes a long time, the camera might overwrite the buffer while you're in the middle of processing it.

The way you should do it is to copy the buffer before doing anything. This will give you a piece of memory that is yours to do with whatever you wish. You can create a Bitmap that refers to this memory, and manually free the memory when you no longer need it.
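A minimal sketch of that approach, reusing the cv* P/Invoke declarations from your question (a per-frame clone is shown for clarity; in a loop you would reuse one buffer with cvCopy, as your UnmanageCopy already does):

    // Sketch: deep-copy the borrowed frame, wrap a Bitmap around our copy,
    // and free the copy ourselves when we are done with it.
    IntPtr frame = cvQueryFrame(vcap);      // borrowed; owned by the capture structure

    IntPtr myCopy = cvCloneImage(frame);    // memory we own

    IntPtr scan0; int step; Size size;
    cvGetRawData(myCopy, out scan0, out step, out size);
    using (Bitmap bmp = new Bitmap(size.Width, size.Height, step,
                                   PixelFormat.Format24bppRgb, scan0))
    {
        // process / draw on bmp here: the camera can no longer overwrite it
    }
    cvReleaseImage(ref myCopy);             // the Bitmap must not be touched after this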

If your processing rate (frames processed per second) is less than the number of frames captured per second by the camera, you have to expect some frames will be dropped. If you want to show a live view of the processed images, it will lag and there's no simple way around it. If it is vital that your application processes a fluid video (e.g. this might be necessary if you're tracking an object), then consider storing the video to disk so you don't have to process in real-time. You can also consider multithreading to process several frames at once, but the live view would have a latency.
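One way to get the "skip frames, just lower the frame rate" behaviour deterministically (my sketch, not code from your question) is a single-slot hand-off between the capture thread and the processing thread: the newest frame always overwrites the slot, so older unprocessed frames are dropped instead of queuing up.

    // Hypothetical single-slot hand-off: capture never blocks, processing
    // always gets the newest frame, intermediate frames are dropped.
    object slotLock = new object();
    Bitmap latest = null;            // the slot

    // camera thread: overwrite the slot with each new frame
    void OnFrameCaptured(Bitmap frame)
    {
        lock (slotLock)
        {
            if (latest != null) latest.Dispose();  // drop the unprocessed frame
            latest = frame;
        }
    }

    // processing thread: take whatever is newest, or null if nothing new arrived
    Bitmap TakeLatest()
    {
        lock (slotLock)
        {
            Bitmap f = latest;
            latest = null;
            return f;
        }
    }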

By the way, is there any particular reason why you're not using EmguCV? It has abstractions for the camera and a system that raises an event whenever the camera has captured a new frame. This way, you don't need to continuously call cvQueryFrame on a background thread.
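For reference, the event-driven pattern looks roughly like this in the EmguCV 2.x API (contemporary with OpenCV 2.4.x; exact names may differ in other versions):

    // Sketch based on the EmguCV 2.x API; adjust names for your version.
    Capture capture = new Capture(0);               // Emgu.CV.Capture
    capture.ImageGrabbed += (s, e) =>
    {
        using (Image<Bgr, byte> img = capture.RetrieveBgrFrame())
        {
            // img owns its pixel data; convert a copy for display
            Bitmap bmp = img.ToBitmap();
            // marshal bmp to the UI thread here
        }
    };
    capture.Start();                                // grabs frames on a background thread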

answered Sep 19 '22 by MariusUt