iOS - Generate and play indefinite, simple audio (sine wave)

I'm looking to build an incredibly simple application for iOS with a button that starts and stops an audio signal. The signal is just going to be a sine wave, and it's going to check my model (an instance variable for the volume) throughout its playback and change its volume accordingly.

My difficulty has to do with the indefinite nature of the task. I understand how to build tables, fill them with data, respond to button presses, and so on; however, when it comes to just having something continue on indefinitely (in this case, a sound), I'm a little stuck! Any pointers would be terrific!

Thanks for reading.

asked Jan 22 '13 by Rogare

1 Answer

Here's a bare-bones application which will play a generated frequency on-demand. You haven't specified whether to do iOS or OSX, so I've gone for OSX since it's slightly simpler (no messing with Audio Session Categories). If you need iOS, you'll be able to find out the missing bits by looking into Audio Session Category basics and swapping the Default Output audio unit for the RemoteIO audio unit.
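For a rough idea of the iOS-side differences, the shape of the change is something like this (a sketch only, not part of the tested code below; it assumes you link AVFoundation):

#import <AVFoundation/AVFoundation.h>

//  Activate a playback audio session before starting the audio unit...
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:NULL];
[[AVAudioSession sharedInstance] setActive:YES error:NULL];

//  ...and request the RemoteIO unit rather than Default Output:
AudioComponentDescription outputUnitDescription = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};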

Note that the intention of this is purely to demonstrate some Core Audio / Audio Unit basics. If you want to get more complex than this, you'll probably want to look into the AUGraph API. Also, in the interest of providing a clean example, I'm not doing any error checking here; always do error checking when dealing with Core Audio.
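One common way to keep that error checking manageable (a helper of my own, not something from the Core Audio headers) is to wrap every call that returns an OSStatus:

static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;
    fprintf(stderr, "Error %d during: %s\n", (int)error, operation);
    exit(1);
}

//  e.g. CheckError(AudioUnitInitialize(outputUnit), "AudioUnitInitialize");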

You'll need to add the AudioToolbox and AudioUnit frameworks to your project to use this code.

#import <Cocoa/Cocoa.h>
#import <AudioToolbox/AudioToolbox.h>

// Forward declaration, since the callback is referenced before it's defined
OSStatus SineWaveRenderCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                                UInt32 inNumberFrames, AudioBufferList *ioData);

@interface SWAppDelegate : NSObject <NSApplicationDelegate>
{
    AudioUnit outputUnit;
    double renderPhase;
}
@end

@implementation SWAppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
//  First, we need to establish which Audio Unit we want.

//  We start with its description, which is:
    AudioComponentDescription outputUnitDescription = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

//  Next, we get the first (and only) component corresponding to that description
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &outputUnitDescription);

//  Now we can create an instance of that component, which will create an
//  instance of the Audio Unit we're looking for (the default output)
    AudioComponentInstanceNew(outputComponent, &outputUnit);
    AudioUnitInitialize(outputUnit);

//  Next we'll tell the output unit what format our generated audio will
//  be in. Generally speaking, you'll want to stick to sane formats, since
//  the output unit won't accept every single possible stream format.
//  Here, we're specifying floating point samples with a sample rate of
//  44100 Hz in mono (i.e. 1 channel)
    AudioStreamBasicDescription ASBD = {
        .mSampleRate       = 44100,
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagsNativeFloatPacked,
        .mChannelsPerFrame = 1,
        .mFramesPerPacket  = 1,
        .mBitsPerChannel   = sizeof(Float32) * 8,
        .mBytesPerPacket   = sizeof(Float32),
        .mBytesPerFrame    = sizeof(Float32)
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input,
                         0,
                         &ASBD,
                         sizeof(ASBD));

//  Next step is to tell our output unit which function we'd like it
//  to call to get audio samples. We'll also pass in a context pointer,
//  which can be a pointer to anything you need to maintain state between
//  render callbacks. We only need to point to a double which represents
//  the current phase of the sine wave we're creating.
    AURenderCallbackStruct callbackInfo = {
        .inputProc       = SineWaveRenderCallback,
        .inputProcRefCon = &renderPhase
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Global,
                         0,
                         &callbackInfo,
                         sizeof(callbackInfo));

//  Here we're telling the output unit to start requesting audio samples
//  from our render callback. This is the line of code that starts actually
//  sending audio to your speakers.
    AudioOutputUnitStart(outputUnit);
}

// This is our render callback. It will be called very frequently for short
// buffers of audio (512 samples per call on my machine).
OSStatus SineWaveRenderCallback(void * inRefCon,
                                AudioUnitRenderActionFlags * ioActionFlags,
                                const AudioTimeStamp * inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList * ioData)
{
    // inRefCon is the context pointer we passed in earlier when setting the render callback
    double currentPhase = *((double *)inRefCon);
    // ioData is where we're supposed to put the audio samples we've created
    Float32 * outputBuffer = (Float32 *)ioData->mBuffers[0].mData;
    const double frequency = 440.;
    const double phaseStep = (frequency / 44100.) * (M_PI * 2.);

    for(UInt32 i = 0; i < inNumberFrames; i++) {
        outputBuffer[i] = sin(currentPhase);
        currentPhase += phaseStep;
    }
    // Wrap the phase so it doesn't grow without bound (and lose float
    // precision) during indefinite playback
    currentPhase = fmod(currentPhase, M_PI * 2.);

    // If we were doing stereo (or more), this would copy our sine wave samples
    // to all of the remaining channels
    for(int i = 1; i < ioData->mNumberBuffers; i++) {
        memcpy(ioData->mBuffers[i].mData, outputBuffer, ioData->mBuffers[i].mDataByteSize);
    }

    // writing the current phase back to inRefCon so we can use it on the next call
    *((double *)inRefCon) = currentPhase;
    return noErr;
}

- (void)applicationWillTerminate:(NSNotification *)notification
{
    AudioOutputUnitStop(outputUnit);
    AudioUnitUninitialize(outputUnit);
    AudioComponentInstanceDispose(outputUnit);
}

@end

You can call AudioOutputUnitStart() and AudioOutputUnitStop() at will to start/stop producing audio. If you want to dynamically change the frequency, you can pass in a pointer to a struct containing both the renderPhase double and another one representing the frequency you want.
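A sketch of that context struct (the type and field names are my own, not part of the code above):

typedef struct {
    double phase;       // replaces the bare renderPhase ivar
    double frequency;   // set from the main thread, read in the callback
} SineWaveState;

//  Pass &state as .inputProcRefCon, then inside the callback:
//  SineWaveState *state = (SineWaveState *)inRefCon;
//  const double phaseStep = (state->frequency / 44100.) * (M_PI * 2.);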

Be careful in the render callback. It's called from a realtime thread (not from the same thread as your main run loop). Render callbacks are subject to some fairly strict time requirements, which means there are many things you Should Not Do in your callback, such as:

  • Allocate memory
  • Wait on a mutex
  • Read from a file on disk
  • Objective-C messaging (Yes, seriously. See the sketch just below for one way around it.)
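That last point matters for your volume model: rather than messaging an Objective-C object from the render thread, keep the volume in plain C storage reachable through inRefCon and read it inside the render loop. A sketch, reusing the hypothetical SineWaveState struct from above with a volume field added:

//  In the render loop, instead of calling into Objective-C:
//  outputBuffer[i] = state->volume * sin(state->phase);

Bear in mind that reads and writes of a double aren't guaranteed to be atomic, so for anything beyond a demo you'd want a lock-free handoff of parameter changes to the render thread.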

Note that this is not the only way to do this. I've only demonstrated it this way since you've tagged this core-audio. If you don't need to change the frequency, you can just use AVAudioPlayer with a pre-made sound file containing your sine wave.
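For completeness, that route is only a few lines (a sketch; it assumes a seamlessly looping sine.wav in your bundle and AVFoundation linked):

#import <AVFoundation/AVFoundation.h>

NSURL *toneURL = [[NSBundle mainBundle] URLForResource:@"sine" withExtension:@"wav"];
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:toneURL error:NULL];
player.numberOfLoops = -1;   // -1 loops indefinitely
player.volume = 0.5;         // adjustable at any time during playback
[player play];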

There's also Novocaine, which hides a lot of this verbosity from you. You could also look into the Audio Queue API, which works in a fairly similar way to the Core Audio sample I wrote, but decouples you from the hardware a little more (i.e. it's less strict about how you behave in your render callback).

answered Nov 05 '22 by admsyn