I'm trying to record the sound produced by a mixer unit's output.
For the moment, my code is based on Apple's MixerHost iOS demo app: a mixer node is connected to a remote I/O node on the audio graph.
I'm trying to set an input callback on the remote I/O node's input, on the mixer output.
I'm doing something wrong but I cannot find the error.
Here is the code below. It runs just after the Multichannel Mixer unit setup:
UInt32 flag = 1;

// Enable IO for playback
result = AudioUnitSetProperty(iOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output,
                              0, // Output bus
                              &flag,
                              sizeof(flag));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty EnableIO" withStatus: result]; return;}

/* can't do that because *** AudioUnitSetProperty EnableIO error: -1073752493 00000000
result = AudioUnitSetProperty(iOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input,
                              0, // Output bus
                              &flag,
                              sizeof(flag));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty EnableIO" withStatus: result]; return;}
*/
Then I create a stream format:
// I/O stream format
iOStreamFormat.mSampleRate = 44100.0;
iOStreamFormat.mFormatID = kAudioFormatLinearPCM;
iOStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
iOStreamFormat.mFramesPerPacket = 1;
iOStreamFormat.mChannelsPerFrame = 1;
iOStreamFormat.mBitsPerChannel = 16;
iOStreamFormat.mBytesPerPacket = 2;
iOStreamFormat.mBytesPerFrame = 2;
[self printASBD: iOStreamFormat];
Then I apply the format and specify the sample rate:
result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output,
                              1, // Input bus
                              &iOStreamFormat,
                              sizeof(iOStreamFormat));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty StreamFormat" withStatus: result]; return;}

result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input,
                              0, // Output bus
                              &iOStreamFormat,
                              sizeof(iOStreamFormat));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty StreamFormat" withStatus: result]; return;}

// SampleRate I/O
result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_SampleRate, kAudioUnitScope_Input,
                              0, // Output
                              &graphSampleRate,
                              sizeof(graphSampleRate));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty (set I/O unit input stream format)" withStatus: result]; return;}
Then I try to set the render callback.
Solution 1 >>> my recording callback is never called
effectState.rioUnit = iOUnit;

AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &recordingCallback;
renderCallbackStruct.inputProcRefCon = &effectState;

result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input,
                              0, // Output bus
                              &renderCallbackStruct,
                              sizeof(renderCallbackStruct));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty SetRenderCallback" withStatus: result]; return;}
Solution 2 >>> my app crashes at launch on this
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &recordingCallback;
renderCallbackStruct.inputProcRefCon = &effectState;

result = AUGraphSetNodeInputCallback(processingGraph, iONode,
                                     0, // Output bus
                                     &renderCallbackStruct);
if (noErr != result) {[self printErrorMessage: @"AUGraphSetNodeInputCallback (I/O unit input callback bus 0)" withStatus: result]; return;}
If anyone has an idea ...
EDIT Solution 3 (thanks to arlo's answer) >>> there is now a format problem
AudioStreamBasicDescription dstFormat = {0};
dstFormat.mSampleRate=44100.0;
dstFormat.mFormatID=kAudioFormatLinearPCM;
dstFormat.mFormatFlags=kAudioFormatFlagsNativeEndian|kAudioFormatFlagIsSignedInteger|kAudioFormatFlagIsPacked;
dstFormat.mBytesPerPacket=4;
dstFormat.mBytesPerFrame=4;
dstFormat.mFramesPerPacket=1;
dstFormat.mChannelsPerFrame=2;
dstFormat.mBitsPerChannel=16;
dstFormat.mReserved=0;
result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output,
                              1,
                              &stereoStreamFormat,
                              sizeof(stereoStreamFormat));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;}

result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input,
                              0,
                              &stereoStreamFormat,
                              sizeof(stereoStreamFormat));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;}

AudioUnitAddRenderNotify(
    iOUnit,
    &recordingCallback,
    &effectState
);
And the file setup:
if (noErr != result) {[self printErrorMessage: @"AUGraphInitialize" withStatus: result]; return;}
// Initialize the audio file
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[[NSString alloc] initWithFormat: @"%@/output.caf", documentsDirectory] autorelease];
NSLog(@">>> %@", destinationFilePath);
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);
OSStatus setupErr = ExtAudioFileCreateWithURL(destinationURL, kAudioFileWAVEType, &dstFormat, NULL, kAudioFileFlags_EraseFile, &effectState.audioFileRef);
CFRelease(destinationURL);
NSAssert(setupErr == noErr, @"Couldn't create file for writing");
setupErr = ExtAudioFileSetProperty(effectState.audioFileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &stereoStreamFormat);
NSAssert(setupErr == noErr, @"Couldn't create file for format");
setupErr = ExtAudioFileWriteAsync(effectState.audioFileRef, 0, NULL);
NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");
And the recording callback:
static OSStatus recordingCallback (void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData) {
    // Test the flag as a bitmask rather than with ==, since other bits may be set.
    if ((*ioActionFlags & kAudioUnitRenderAction_PostRender) && inBusNumber == 0)
    {
        EffectState *effectState = (EffectState *)inRefCon;
        ExtAudioFileWriteAsync(effectState->audioFileRef, inNumberFrames, ioData);
    }
    return noErr;
}
Something is missing in the output file output.caf :). I'm totally lost about which formats to apply.
I don't think you need to enable input on the I/O unit. I would also comment out the format and sample rate configuration that you're doing on the I/O unit until you get your callback running, because a mismatched or unsupported format can prevent the audio units from being linked together.
To add the callback, try this method:
AudioUnitAddRenderNotify(
    iOUnit,
    &recordingCallback,
    self
);
Apparently the other methods will replace the node connection, but this method will not -- so your audio units can stay connected even though you've added a callback.
Once your callback is running, if you find that there's no data in the buffers (ioData), wrap this code around your callback code:
if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
    // your code
}
This is needed because a callback added in this way runs both before and after the audio unit renders its audio, but you just want to run your code after it renders.
Once the callback is running, the next step is to figure out what audio format it's receiving and handle it appropriately. Try adding this to your callback:
SInt16 *dataLeftChannel = (SInt16 *)ioData->mBuffers[0].mData;
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
    NSLog(@"sample %lu: %d", (unsigned long)frameNumber, dataLeftChannel[frameNumber]);
}
This will slow your app so much that it will probably prevent any audio from actually playing, but you should be able to run it long enough to see what the samples look like. If the callback is receiving 16-bit audio, the samples should be positive or negative integers between -32000 and 32000. If the samples alternate between a normal-looking number and a much smaller number, try this code in your callback instead:
SInt32 *dataLeftChannel = (SInt32 *)ioData->mBuffers[0].mData;
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
    NSLog(@"sample %lu: %ld", (unsigned long)frameNumber, (long)dataLeftChannel[frameNumber]);
}
This should show you the complete 8.24 samples.
If you can save the data in the format the callback is receiving, then you should have what you need. If you need to save it in a different format, you should be able to convert the format in the Remote I/O audio unit ... but I haven't been able to figure out how to do that when it's connected to a Multichannel Mixer unit. As an alternative, you can convert the data using the Audio Converter Services. First, define the input and output formats:
AudioStreamBasicDescription monoCanonicalFormat;
size_t bytesPerSample = sizeof (AudioUnitSampleType);
monoCanonicalFormat.mFormatID = kAudioFormatLinearPCM;
monoCanonicalFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
monoCanonicalFormat.mBytesPerPacket = bytesPerSample;
monoCanonicalFormat.mFramesPerPacket = 1;
monoCanonicalFormat.mBytesPerFrame = bytesPerSample;
monoCanonicalFormat.mChannelsPerFrame = 1;
monoCanonicalFormat.mBitsPerChannel = 8 * bytesPerSample;
monoCanonicalFormat.mSampleRate = graphSampleRate;
AudioStreamBasicDescription mono16Format;
bytesPerSample = sizeof (SInt16);
mono16Format.mFormatID = kAudioFormatLinearPCM;
mono16Format.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
mono16Format.mChannelsPerFrame = 1;
mono16Format.mSampleRate = graphSampleRate;
mono16Format.mBitsPerChannel = 16;
mono16Format.mFramesPerPacket = 1;
mono16Format.mBytesPerPacket = 2;
mono16Format.mBytesPerFrame = 2;
Then define a converter somewhere outside your callback, and create a temporary buffer for handling the data during conversion:
// In your interface:
@property AudioConverterRef formatConverterCanonicalTo16;
@property (readwrite) SInt16 *data16;

// In your implementation:
@synthesize formatConverterCanonicalTo16;
@synthesize data16;

AudioConverterNew(
    &monoCanonicalFormat,
    &mono16Format,
    &formatConverterCanonicalTo16
);

data16 = malloc(sizeof(SInt16) * 4096);
Then add this to your callback, before you save your data:
UInt32 dataSizeCanonical = ioData->mBuffers[0].mDataByteSize;
SInt32 *dataCanonical = (SInt32 *)ioData->mBuffers[0].mData;
UInt32 dataSize16 = dataSizeCanonical;
AudioConverterConvertBuffer(
    effectState->formatConverterCanonicalTo16,
    dataSizeCanonical,
    dataCanonical,
    &dataSize16,
    effectState->data16
);
Then you can save data16, which is in 16-bit format and might be what you want saved in your file. It will be more compatible and half as large as the canonical data.
When you're done, you can clean up a couple things:
AudioConverterDispose(formatConverterCanonicalTo16);
free(data16);