I'm looking for an example of a simple play-through application that uses the built-in mic/speaker with the kAudioUnitSubType_VoiceProcessingIO subtype (not kAudioUnitSubType_HALOutput) on macOS. The comments in the Core Audio API headers say that kAudioUnitSubType_VoiceProcessingIO is available on the desktop and with iPhone 3.0 or greater, so I figure there must be an example somewhere for macOS.
Does anyone know where such a sample is, or does anyone know how to use the kAudioUnitSubType_VoiceProcessingIO subtype on macOS? I already tried the same approach I used on iOS, but it didn't work.
I discovered a few things while getting this I/O unit working.
As with other Core Audio work, you just have to check the error status of every single function call, work out what the errors mean, and make small changes at each step until you finally get it to work.
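A small error-check helper in the usual Core Audio style makes that much less painful. This is a generic sketch (the four-character-code unpacking is the common idiom, not anything specific to the voice-processing unit):

#include <AudioToolbox/AudioToolbox.h>
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

// Abort with a readable message if a Core Audio call fails.
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;

    char str[20];
    // Many Core Audio errors are four printable characters packed into a UInt32.
    *(UInt32 *)(str + 1) = CFSwapInt32HostToBig((UInt32)error);
    if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
        str[0] = str[5] = '\'';
        str[6] = '\0';
    } else {
        sprintf(str, "%d", (int)error);
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, str);
    exit(1);
}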
I set up two different kAudioUnitProperty_StreamFormat configurations based on the number of channels.
size_t bytesPerSample = sizeof (AudioUnitSampleType);

AudioStreamBasicDescription stereoStreamFormat = {0};
stereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
stereoStreamFormat.mBytesPerPacket = bytesPerSample; // canonical flags are non-interleaved, so per channel
stereoStreamFormat.mFramesPerPacket = 1;
stereoStreamFormat.mBytesPerFrame = bytesPerSample;
stereoStreamFormat.mChannelsPerFrame = 2; // 2 indicates stereo
stereoStreamFormat.mBitsPerChannel = 8 * bytesPerSample;
stereoStreamFormat.mSampleRate = graphSampleRate;
and the mono equivalent:
size_t bytesPerSample = sizeof (AudioUnitSampleType);

AudioStreamBasicDescription monoStreamFormat = {0};
monoStreamFormat.mFormatID = kAudioFormatLinearPCM;
monoStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
monoStreamFormat.mBytesPerPacket = bytesPerSample;
monoStreamFormat.mFramesPerPacket = 1;
monoStreamFormat.mBytesPerFrame = bytesPerSample;
monoStreamFormat.mChannelsPerFrame = 1; // 1 indicates mono
monoStreamFormat.mBitsPerChannel = 8 * bytesPerSample;
monoStreamFormat.mSampleRate = graphSampleRate;
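One caveat for the macOS case: in the SDK headers of that era, AudioUnitSampleType and kAudioFormatFlagsAudioUnitCanonical were platform-dependent, meaning Float32 samples on the desktop but 8.24 fixed-point SInt32 samples on iOS. The identical code therefore yields a different concrete format on each platform; if a format that worked on iOS is rejected on macOS, that difference is a reasonable first suspect.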
I used these stream formats when setting up the I/O unit with the kAudioUnitSubType_VoiceProcessingIO subtype:
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;
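For completeness, here is a sketch of how the unit can be brought up from that description without an AUGraph. The element numbering and the mono-in/stereo-out pairing are my assumptions, and CheckError is the helper from above. Note that the input element of an output unit is disabled by default and has to be switched on explicitly:

AudioComponent comp = AudioComponentFindNext(NULL, &iOUnitDescription);
AudioUnit ioUnit = NULL;
CheckError(AudioComponentInstanceNew(comp, &ioUnit),
           "AudioComponentInstanceNew failed");

// Element 1 is the input (microphone) side; enable it.
UInt32 enable = 1;
CheckError(AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                                kAudioUnitScope_Input, 1,
                                &enable, sizeof(enable)),
           "enable input failed");

// The output scope of element 1 is the data the unit hands to you;
// the input scope of element 0 is the data you hand to the unit.
CheckError(AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, 1,
                                &monoStreamFormat, sizeof(monoStreamFormat)),
           "set format on input element failed");
CheckError(AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0,
                                &stereoStreamFormat, sizeof(stereoStreamFormat)),
           "set format on output element failed");

CheckError(AudioUnitInitialize(ioUnit), "AudioUnitInitialize failed");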
With this subtype I can clearly hear an interruption in the audio output, because my buffer size was smaller than the one this AudioUnit works with.
Switching back to kAudioUnitSubType_RemoteIO
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
that interruption disappears.
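If you suspect a buffer-size mismatch like that, one thing worth checking is how many frames the unit may ask for in a single render, and whether your callback copes with that many; a sketch, reusing the ioUnit variable from above:

UInt32 maxFrames = 0;
UInt32 propSize = sizeof(maxFrames);
CheckError(AudioUnitGetProperty(ioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                kAudioUnitScope_Global, 0,
                                &maxFrames, &propSize),
           "get MaximumFramesPerSlice failed");
printf("unit may ask for up to %u frames per render\n", (unsigned)maxFrames);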
I'm processing audio input from the microphone and applying some real-time calculations on the audio buffers.
In these methods, graphSampleRate is the AVAudioSession sample rate:
graphSampleRate = [[AVAudioSession sharedInstance] sampleRate];
and maybe here I'm wrong.
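To double-check what the unit actually ended up with, the negotiated format can be read back after initialization and printed; a sketch for the output element, again assuming the ioUnit variable from above:

AudioStreamBasicDescription asbd = {0};
UInt32 size = sizeof(asbd);
CheckError(AudioUnitGetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0,
                                &asbd, &size),
           "get stream format failed");

UInt32 fmt = CFSwapInt32HostToBig(asbd.mFormatID); // FourCC, so swap for printing
printf("Sample Rate: %.0f\n", asbd.mSampleRate);
printf("Format ID: %.4s\n", (char *)&fmt);
printf("Format Flags: %u\n", (unsigned)asbd.mFormatFlags);
printf("Bytes per Packet: %u\n", (unsigned)asbd.mBytesPerPacket);
printf("Frames per Packet: %u\n", (unsigned)asbd.mFramesPerPacket);
printf("Bytes per Frame: %u\n", (unsigned)asbd.mBytesPerFrame);
printf("Channels per Frame: %u\n", (unsigned)asbd.mChannelsPerFrame);
printf("Bits per Channel: %u\n", (unsigned)asbd.mBitsPerChannel);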
In the end, the configuration parameter values are the following:
The stereo stream format:
Sample Rate: 44100
Format ID: lpcm
Format Flags: 3116
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 2
Bits per Channel: 32
The mono stream format:
Sample Rate: 44100
Format ID: lpcm
Format Flags: 3116
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 1
Bits per Channel: 32
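For reference, a Format Flags value of 3116 decodes exactly to the 8.24 fixed-point canonical layout, which again points at the iOS-style canonical format rather than the Float32 format the desktop canonical flags would produce:

// 4 + 8 + 32 + (24 << 7) = 3116
UInt32 flags = kAudioFormatFlagIsSignedInteger            // 4
             | kAudioFormatFlagIsPacked                   // 8
             | kAudioFormatFlagIsNonInterleaved           // 32
             | (24 << kLinearPCMFormatFlagsSampleFractionShift); // 3072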