While there are plenty of tutorials on how to use AVCaptureSession to grab camera data, I can find no information (even on Apple's developer network itself) on how to properly handle microphone data.
I have implemented AVCaptureAudioDataOutputSampleBufferDelegate, and my delegate is being called, but I have no idea how the contents of the CMSampleBufferRef I receive are formatted. Does the buffer contain one discrete sample or many? What are its properties, and where can they be set?
Video properties can be set using [AVCaptureVideoDataOutput setVideoSettings:], but there is no corresponding call for AVCaptureAudioDataOutput (no setAudioSettings or anything similar).
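For context, my capture setup is roughly the following (simplified; the queue name and variable names are just placeholders):

AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Microphone input.
AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *micInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&error];
if (micInput && [session canAddInput:micInput]) {
    [session addInput:micInput];
}

// Audio data output that delivers CMSampleBufferRefs to the delegate.
AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t audioQueue = dispatch_queue_create("audio.capture.queue", DISPATCH_QUEUE_SERIAL);
[audioOutput setSampleBufferDelegate:self queue:audioQueue];
if ([session canAddOutput:audioOutput]) {
    [session addOutput:audioOutput];
}

[session startRunning];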
The samples are linear PCM (LPCM). You can verify this by getting the AudioStreamBasicDescription from the sample buffer's format description:
// Each CMSampleBufferRef carries a format description for its audio data.
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);

// The AudioStreamBasicDescription exposes the sample rate, channel count,
// bit depth, and format ID of the buffered audio.
const AudioStreamBasicDescription *streamDescription = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);
and then checking the stream description's mFormatID, which will be kAudioFormatLinearPCM.
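For reference, a rough sketch of that check inside the delegate callback might look like the following (the logging and variable names are just illustrative):

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // A buffer holds many PCM frames, not a single discrete sample.
    CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);

    // Describe the layout of the audio in this buffer.
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
    const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);

    if (asbd && asbd->mFormatID == kAudioFormatLinearPCM) {
        NSLog(@"LPCM: %.0f Hz, %u channels, %u bits per channel, %ld frames in this buffer",
              asbd->mSampleRate,
              (unsigned)asbd->mChannelsPerFrame,
              (unsigned)asbd->mBitsPerChannel,
              (long)numSamples);
    }
}

If you need the raw sample data itself, CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer will hand you an AudioBufferList backed by the buffer's block buffer.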