After implementing the solution for encoding video (with audio) from this question, Video Encoding using AVAssetWriter - CRASHES, I found that the code works correctly in the iPhone Simulator. Unfortunately, the audio of certain videos fails to encode when running on an actual iPhone 5 (and other devices).
For example, videos generated from the WWDC 2011 sample code RosyWriter (https://developer.apple.com/library/IOS/samplecode/RosyWriter/Introduction/Intro.html) do not completely encode because the call to -[AVAssetReaderOutput copyNextSampleBuffer] never returns.
The video buffers come in correctly, but as soon as it tries to copy the first audio CMSampleBufferRef, the call hangs. When I try this with videos from other sources, such as those recorded in the native iOS Camera app, the audio imports correctly.
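For reference, this is roughly the shape of the read loop in which the hang occurs (a minimal sketch; the assetReader, audioReaderOutput, and writer-input names are illustrative and assumed to be set up as in the linked question):

```objc
// Minimal sketch of the pull loop (names are illustrative).
// With the problematic videos, the very first audio
// copyNextSampleBuffer call below never returns.
[assetReader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [audioReaderOutput copyNextSampleBuffer]) != NULL) {
    // Hand the buffer to the AVAssetWriterInput here...
    CFRelease(sampleBuffer);
}
```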
This thread, https://groups.google.com/forum/#!topic/coreaudio-api/F4cqCu99nUI, notes that copyNextSampleBuffer can hang when used with Audio Queues, and suggests keeping the operations on a single thread. I've tried keeping everything on a separate thread, and on the main thread, but had no luck.
Did anyone else experience this and have a possible solution?
EDIT: It appears that videos generated from RosyWriter have their tracks in the reverse order relative to videos from the native Camera app, i.e. the audio stream is stream 0 and the video stream is stream 1:
Stream #0:0(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 60 kb/s
  Metadata:
    creation_time : 2013-10-28 16:13:05
    handler_name  : Core Media Data Handler
Stream #0:1(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1920x1080, 8716 kb/s, 28.99 fps, 29.97 tbr, 600 tbn, 1200 tbc
  Metadata:
    rotate        : 90
    creation_time : 2013-10-28 16:13:05
    handler_name  : Core Media Data Handler
Not sure if this makes a difference to the AVAssetReader.
I was still experiencing this issue on iOS 9.3.2, and what resolved it was making sure the AVAssetReaderAudioMixOutput was created with audio settings rather than nil when calling +[AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioSettings:].
Example:
// Uncompressed 16-bit interleaved little-endian integer PCM
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
    nil];

// Create an AVAssetReaderOutput for the audio tracks,
// passing the settings dictionary instead of nil
NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];
AVAssetReaderAudioMixOutput *_audioReaderOutput =
    [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks
                                                            audioSettings:outputSettings];
This prevented later calls to -[AVAssetReaderOutput copyNextSampleBuffer] from hanging where they otherwise had been.
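For completeness, a hedged sketch of attaching this output to the reader before pulling buffers (the asset variable and real error handling are assumed from the surrounding code):

```objc
NSError *error = nil;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:asset error:&error];

// Attach the settings-backed audio output before calling startReading.
if ([assetReader canAddOutput:_audioReaderOutput]) {
    [assetReader addOutput:_audioReaderOutput];
}
[assetReader startReading];
```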