I am trying to simulate a long-exposure photo by combining images (frames) into one image and performing operations based on a preset alpha. I am doing this on an iPhone, and I currently have the length of the video set to 1 second (30 frames). The alpha is set to 1.0/frameCount,
however I hard-coded 30 to represent one second of 30 fps video capture. I stop the operations once one second of video (30 frames) has been processed. The idea is that the user can set a timer for x seconds and I will do the math to figure out how many frames to allow.
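For reference, this is roughly the math I have in mind (timerSeconds and the fixed 30 fps rate here are placeholders, not values from my current code):

    static const int kFramesPerSecond = 30;                // assumed capture rate
    int timerSeconds = 2;                                   // hypothetical user setting
    int framesToCapture = timerSeconds * kFramesPerSecond;
    double alpha = 1.0 / framesToCapture;                   // weight given to each frame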
Here is the code I am using:
- (void)processImage:(Mat&)image
{
    if (_isRecording) {
        // first frame
        double alpha = 1.0/30;
        if (_frameCount == 0) {
            _exposed = image;
            _frameCount++;
        } else {
            Mat exposed = _exposed.clone();
            addWeighted(exposed, alpha, image, 1.0 - alpha, 0.0, _exposed);
            _frameCount++;
        }
        // stop and save image
        if (_frameCount == 30) {
            _isRecording = NO;
            _frameCount = 0;
            cvtColor(_exposed, _exposed, CV_BGRA2RGB, 30);
            UIImage *exposed = [LEMatConverter UIImageFromCVMat:_exposed];
            UIImageWriteToSavedPhotosAlbum(exposed, nil, nil, nil);
            NSLog(@"saved");
        }
    }
}
When I run this code, I basically get back a still image that looks as if it is a single frame.
Does anyone know how I can produce the desired effect of a long-exposure image from video frames, given that I know how many frames there will be?
First of all (probably this is not your case, as you pointed out that you are working on a video and not a camera): if you base your code on the value of the frame rate, be sure that 30 fps is the effective value and not the maximum one. Cameras sometimes adjust that number automatically based on the amount of light they get from the environment. If it is dark, the exposure time is increased and the frame rate drops accordingly.
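If the frames ever come from a saved clip rather than the live camera, a quick way to check the effective rate is to read it from the file (a sketch; the file name is a placeholder, and CAP_PROP_FPS is the OpenCV 3 constant, CV_CAP_PROP_FPS in 2.x):

    cv::VideoCapture cap("clip.mov");                         // hypothetical input file
    double fps = cap.get(cv::CAP_PROP_FPS);                   // rate reported by the container
    int framesPerSecond = (fps > 0) ? (int)(fps + 0.5) : 30;  // fall back to the assumed 30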
Second point: it is really hard to simulate the real mechanism of photo exposure from a bunch of pixels.
Imagine you want to double the exposure time; this would have to be simulated by two consecutive frames.
In the real world, doubling the exposure time means the shutter stays open twice as long, so twice as much light hits the sensor or film, and the result is a brighter image.
How do you simulate this? Consider, for simplicity, two fairly bright grayscale images you want to merge. If at a given point the pixel values are, say, 180 and 181, what is the resulting value? The first answer would be 180 + 181, but pixel intensities range between 0 and 255, so the sum has to be truncated at 255.
A real camera with increased exposure would probably behave differently and not hit the maximum value.
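As a one-pixel sketch of that truncation, OpenCV's 8-bit arithmetic saturates in exactly this way (the values are the ones from the example above):

    cv::Mat a(1, 1, CV_8UC1, cv::Scalar(180));
    cv::Mat b(1, 1, CV_8UC1, cv::Scalar(181));
    cv::Mat sum = a + b;                      // saturating addition: 180 + 181 -> 255
    // sum.at<uchar>(0, 0) == 255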
Now let's consider your code.
The first time you process an image (i.e. run the function), you simply store the frame in the variable _exposed.
The second time, you blend 29/30 of the new frame with 1/30 of the previously stored image.
The third time, you blend 29/30 of the third frame with the result of the previous operation. This puts an exponentially fading weight on the first frame, which has virtually disappeared.
The last time you call the function, you again sum 29/30 of the last frame and 1/30 of the previous result. This means the effect of the first frames has virtually disappeared, and even the previous frame counts only for a share of 29/(30x30).
So the image you get is just the last frame with a slight blur coming from the previous frames.
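You can verify those weights numerically: the update E_t = alpha*E_(t-1) + (1 - alpha)*I_t gives frame k a final weight of (1 - alpha)*alpha^(30 - k), and only alpha^29 to the very first frame. A standalone sketch that just prints the coefficients:

    #include <cmath>
    #include <cstdio>

    int main() {
        const int n = 30;
        const double alpha = 1.0 / n;
        std::printf("first frame:    %.3g\n", std::pow(alpha, n - 1));  // ~1e-43, gone entirely
        std::printf("second-to-last: %.4f\n", (1.0 - alpha) * alpha);   // 29/900, about 0.032
        std::printf("last frame:     %.4f\n", 1.0 - alpha);             // 29/30, about 0.967
        return 0;
    }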
How do you obtain a simulation of exposure?
If you simply want to average the 30 frames, you have to replace these lines:
    if (_frameCount == 0) {
        _exposed = image.clone();  // clone just allocates a Mat of the right size and type
        addWeighted(_exposed, 0.0, image, alpha, 0.0, _exposed);  // _exposed = alpha * frame
    } else {
        addWeighted(_exposed, 1.0, image, alpha, 0.0, _exposed);  // _exposed += alpha * frame
    }
    _frameCount++;
If you also want to make the image brighter to some extent, you could simulate it via a multiplication factor:
    if (_frameCount == 0) {
        _exposed = image.clone();
        addWeighted(_exposed, 0.0, image, alpha*brightfactor, 0.0, _exposed);
    } else {
        addWeighted(_exposed, 1.0, image, alpha*brightfactor, 0.0, _exposed);
    }
    _frameCount++;
Tune brightfactor to the value that best simulates a real increase in exposure time. (EDIT: a value between 1.5 and 2.5 should do the job.)
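As a side note, the same running sum can be kept in a floating-point buffer so that 8-bit rounding does not build up over the 30 additions. A sketch under that assumption (_acc would be an extra instance variable like _exposed; brightfactor is the factor above):

    if (_frameCount == 0) {
        _acc = cv::Mat::zeros(image.size(), CV_32FC(image.channels()));
    }
    cv::accumulate(image, _acc);          // _acc += image, without per-frame rounding
    _frameCount++;
    if (_frameCount == 30) {
        // one final conversion: average the 30 frames and apply the brightening factor
        _acc.convertTo(_exposed, image.type(), brightfactor / 30.0);
    }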
In my opinion using alpha is not the correct way.
You should accumulate the (absolute) differences from the exposure frame:
    if (_frameCount == 0) {
        _exposed = image.clone();
    } else {
        _exposed += image - _exposed;
    }
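For what it is worth, here is a tiny sketch of what that update does on 8-bit frames (an assumption, but that is the usual camera format): because the subtraction saturates at 0, the accumulator ends up keeping the per-pixel maximum over time.

    cv::Mat exposed(1, 1, CV_8UC1, cv::Scalar(200));
    cv::Mat frame(1, 1, CV_8UC1, cv::Scalar(120));
    exposed += frame - exposed;           // 120 - 200 saturates to 0, so exposed stays 200
    // on CV_8U data this behaves like cv::max(exposed, frame)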
The following approach should work in a case where the camera is stationary and you can first learn a clean background image of the scene.
Suppose you have obtained such a background and can get a foreground mask for each frame captured after the background-learning stage. Let's denote the learned background by bg, the frame at time t by I_t, and its foreground mask by fgmask_t.
Then update the background for each frame as
I_t.copyTo(bg, fgmask_t)
where copyTo is a method of the OpenCV Mat class.
So the procedure would be:

    Learn bg
    for each frame I_t
    {
        get fgmask_t
        I_t.copyTo(bg, fgmask_t)
    }
When frame capture is over, bg will contain the history of motion.
You can use a Gaussian Mixture Model (BackgroundSubtractorMOG variants in OpenCV) or a simple frame differencing technique for this. The quality will depend on how well the technique segments the motion (or the quality of the foreground mask).
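A rough sketch of that procedure with OpenCV's MOG2 subtractor (the file names and the 30-frame learning stage are placeholders; createBackgroundSubtractorMOG2 is the OpenCV 3 factory function):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap("input.mov");                 // hypothetical clip
        cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 = cv::createBackgroundSubtractorMOG2();

        const int learnFrames = 30;                        // assumed length of the learning stage
        int n = 0;
        cv::Mat frame, fgmask, bg;
        while (cap.read(frame)) {
            mog2->apply(frame, fgmask);                    // get fgmask_t
            n++;
            if (n < learnFrames) continue;                 // still learning bg
            if (n == learnFrames) {                        // Learn bg
                mog2->getBackgroundImage(bg);
                continue;
            }
            frame.copyTo(bg, fgmask);                      // I_t.copyTo(bg, fgmask_t)
        }
        if (!bg.empty())
            cv::imwrite("long_exposure.png", bg);          // bg now holds the motion history
        return 0;
    }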
I think this should work well for a stationary camera, but if the camera moves, it may not work very well except in a situation where the camera tracks an object.