I have a GPUImageColorDodgeBlend filter with two inputs connected:

1. A GPUImageVideoCamera, which is getting frames from the iPhone video camera.
2. A GPUImageMovie, which is an (MP4) video file that I want to have laid over the live camera feed.

The GPUImageColorDodgeBlend is then connected to two outputs:

1. A GPUImageView, to provide a live preview of the blend in action.
2. A GPUImageMovieWriter, to write the movie to storage once a record button is pressed.

Now, before the video starts recording, everything works 100% of the time. The GPUImageMovie is blended over the live camera video fine, and no issues or warnings are reported.
However, when the GPUImageMovieWriter starts recording, things start to go wrong randomly. About 80-90% of the time, the GPUImageMovieWriter works perfectly: there are no errors or warnings, and the output video is written correctly. However, about 10-20% of the time (and from what I can see, fairly randomly), things seem to go wrong during the recording process (although the on-screen preview continues to work fine). Specifically, I start getting hundreds and hundreds of Problem appending pixel buffer at time: errors. This error originates from the - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex method in GPUImageMovieWriter.
This issue is triggered by problems with the frameTime values that are reported to this method. From what I can see, the problem is that the writer sometimes receives frames timestamped by the video camera (which tend to have extremely high time values, like 64616612394291 with a timescale of 1000000000), but then sometimes receives frames timestamped by the GPUImageMovie, which are much lower (like 200200 with a timescale of 30000). It seems that GPUImageMovieWriter is happy as long as the frame times are increasing, but once a frame time decreases, it stops writing and just emits Problem appending pixel buffer at time: errors.
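To make the mismatch concrete, here is a small C sketch using the two timestamps above. FrameTime is a hypothetical stand-in for CoreMedia's CMTime (not the real struct), purely to illustrate why a writer that requires increasing times stalls when frames from two different clocks interleave:

```c
#include <stdint.h>

/* Hypothetical stand-in for CoreMedia's CMTime: time = value / timescale. */
typedef struct { int64_t value; int32_t timescale; } FrameTime;

static double seconds(FrameTime t) {
    return (double)t.value / (double)t.timescale;
}

/* Does the next frame appear to step backwards in time? */
static int appears_out_of_order(FrameTime prev, FrameTime next) {
    return seconds(next) < seconds(prev);
}

/* A camera frame stamped against the host clock:
     seconds((FrameTime){64616612394291LL, 1000000000})  ->  ~64616.6 s
   A movie frame stamped against the asset's own timeline:
     seconds((FrameTime){200200, 30000})                 ->  ~6.7 s
   When the movie frame arrives after the camera frame, it looks like a
   jump tens of thousands of seconds into the past, so a writer that
   insists on increasing times rejects every buffer from then on. */
```

This is only a model of the symptom; the actual timestamps come straight from the two sources, as discussed below.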
I seem to be doing something fairly common, and this hasn't been reported anywhere as a bug, so my questions are (answers to any or all of these are appreciated -- they don't all need to necessarily be answered sequentially as separate questions):
Where do the frameTime values come from? Why does it seem so arbitrary whether the frameTime is stamped according to the GPUImageVideoCamera source or the GPUImageMovie source? Why does it alternate between the two? Shouldn't the frame timing scheme be uniform across all frames?
Am I correct in thinking that this issue is caused by non-increasing frameTimes?
...if so, why does GPUImageView accept and display the frameTimes just fine on the screen 100% of the time, yet GPUImageMovieWriter requires them to be ordered?
...and if so, how can I ensure that the frameTimes that come in are valid? I tried adding if (frameTime.value < previousFrameTime.value) return; to skip any lesser-numbered frames, which works most of the time. Unfortunately, when I set playsAtActualSpeed on the GPUImageMovie, this tends to become far less effective, as all the frames end up getting skipped after a certain point.
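One aside on that guard: comparing raw .value fields is only meaningful when both timestamps share a timescale, which is exactly what isn't true here. A timescale-aware comparison cross-multiplies instead. The C below is an illustrative sketch with a hypothetical FrameTime struct (overflow handling and flag bits omitted); in real code the equivalent is CoreMedia's CMTimeCompare:

```c
#include <stdint.h>

/* Hypothetical stand-in for CMTime; real code would call CMTimeCompare. */
typedef struct { int64_t value; int32_t timescale; } FrameTime;

/* Compare a.value/a.timescale with b.value/b.timescale without floating
   point:  a/ats < b/bts  <=>  a*bts < b*ats  (timescales positive).
   Returns -1, 0, or 1. Overflow handling omitted for brevity. */
static int frame_time_compare(FrameTime a, FrameTime b) {
    int64_t lhs = a.value * (int64_t)b.timescale;
    int64_t rhs = b.value * (int64_t)a.timescale;
    return (lhs > rhs) - (lhs < rhs);
}
```

So a guard like CMTimeCompare(frameTime, previousFrameTime) < 0 would be the more robust form of the skip, though, as noted, skipping frames is at best a partial workaround.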
...or perhaps this is a bug, in which case I'll need to report it on GitHub, but I'd be interested to know if there's something I've overlooked here in how the frameTimes work.
I've found a potential solution to this issue, which I've implemented as a hack for now, but which could conceivably be extended to a proper solution.

I've traced the source of the timing back to GPUImageTwoInputFilter, which essentially multiplexes the two input sources into a single output of frames.

In the method - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex, the filter waits until it has collected a frame from the first source (textureIndex == 0) and the second, and then forwards these frames on to its targets.

The problem (the way I see it) is that the method simply uses the frameTime of whichever frame comes in second (excluding the cases of still images, for which CMTIME_IS_INDEFINITE(frameTime) == YES, which I'm not considering for now because I don't work with still images), and that may not always be the same input each time (for whatever reason).

The relevant code, which checks for both frames and sends them on for processing, is as follows:
if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
    [super newFrameReadyAtTime:frameTime atIndex:0]; // this line has the problem
    hasReceivedFirstFrame = NO;
    hasReceivedSecondFrame = NO;
}
What I've done is change the line above to [super newFrameReadyAtTime:firstFrameTime atIndex:0], so that it always uses the frameTime from the first input and totally ignores the frameTime from the second input. So far, it's all working fine like this. (I'd still be interested for someone to let me know why the method is written this way, given that GPUImageMovieWriter seems to insist on increasing frameTimes, which the method as written doesn't guarantee.)

Caveat: This will almost certainly break entirely if you work only with still images, in which case you will have CMTIME_IS_INDEFINITE(frameTime) == YES for your first input's frameTime.
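One way to cover that caveat, sketched in the same hypothetical C terms (an indefinite flag standing in for CMTIME_IS_INDEFINITE; this is not the real CMTime struct), is to prefer the first input's timestamp and fall back to the second only when the first is a still image:

```c
#include <stdint.h>

/* Hypothetical mirror of CMTime plus an "indefinite" flag
   (CMTIME_IS_INDEFINITE in CoreMedia). */
typedef struct { int64_t value; int32_t timescale; int indefinite; } FrameTime;

/* Choose the timestamp to forward to targets: stick to one input's clock
   so the writer sees a single monotonic timeline, falling back to the
   second input only when the first has no usable time. */
static FrameTime pick_output_time(FrameTime first, FrameTime second) {
    if (first.indefinite) {
        return second; /* first input is a still image: use the other clock */
    }
    return first;
}
```

In GPUImageTwoInputFilter terms, that would mean forwarding firstFrameTime unless CMTIME_IS_INDEFINITE(firstFrameTime), in which case the second input's time is the only usable clock. I haven't exercised this variant against still images, so treat it as a sketch rather than a tested fix.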