
Core Audio AudioBuffer mData

I'm trying to learn about manipulating audio on iOS and I have been reading a lot on Apple's developer pages. However, I have reached a point at which I have an AudioBuffer and I'm not sure what to do with it. I know it contains an mData buffer, but I have no idea what that contains. I have looked around to find out what a "buffer of audio data," as the reference calls it, actually is, but I still do not understand it.

Also, mData seems to be of type void *, which I gather is meant to be cast to the type of the specific audio it contains. I'm not certain how to know what to cast it as, either.

Mason asked Sep 04 '11

3 Answers

You do not need to cast it; it is fine as (void *). It contains samples as 8.24-bit fixed-point integer values. I know that may be daunting at first. Other properties of the AudioBuffer describe whether there is more than one channel; if so, the channels are interleaved.
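As an illustration of what 8.24 fixed point means, here is a minimal sketch of converting such a sample to a float: the low 24 bits are fractional, so dividing by 2^24 recovers the value (the function name is mine, not a Core Audio API):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Convert one 8.24 fixed-point sample to a float.
       The low 24 bits are fractional, so dividing by 2^24
       (i.e. 1 << 24) recovers the signal value. */
    static float fixed_8_24_to_float(int32_t sample) {
        return (float)sample / (float)(1 << 24);
    }

    int main(void) {
        assert(fixed_8_24_to_float(1 << 24) == 1.0f);    /* 1.0 in 8.24 */
        assert(fixed_8_24_to_float(1 << 23) == 0.5f);    /* 0.5 in 8.24 */
        assert(fixed_8_24_to_float(-(1 << 24)) == -1.0f);
        printf("ok\n");
        return 0;
    }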

What you can do with it is write a render callback function, as described in the Audio Unit Hosting Guide, and start feeding frames from your buffer into the output, thereby achieving audio playback. The powerful thing is that you can manipulate the buffer data before sending it to the output, achieving special effects like playback rate variation, pitch shifting, delay, echo, and so on.

Output in the render function in a simple case is something like this:

 OSStatus renderInput(void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    // Non-interleaved output: one buffer per channel.
    float *outA = (float *)ioData->mBuffers[0].mData;
    float *outB = (float *)ioData->mBuffers[1].mData;

    // myBuffer.mData is void *, so cast it to the sample type you expect.
    float *samples = (float *)myBuffer.mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
       outA[i] = samples[i];
       outB[i] = samples[i];
    }
    return noErr;
}

This is not necessarily working code; I just wrote it off the top of my head. But it conveys the basic idea.

rage answered Nov 13 '22


If you're serious about learning Core Audio, do yourself a favour and get this book. It got me started, and Core Audio is not easy by any means! http://www.amazon.com/Learning-Core-Audio-Hands-Programming/dp/0321636848

Pier.

lppier answered Nov 13 '22


OSStatus callback(void *inRefCon,AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames,
AudioBufferList *ioData);

The ioData contains the data you need.

You can extract the buffers from there, i.e.:

AudioBuffer buffer = ioData->mBuffers[index];

The number of mBuffers depends on the channel count. For mono:

AudioBuffer buffer = ioData->mBuffers[0];

Then from buffer, you can extract the "real" audio data:

buffer.mData

And the size of the audio data in bytes:

buffer.mDataByteSize

The format of the data will depend on your audio configuration. It can be cast, but it will also work as void *; it depends on what you want to do with it.
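As a sketch of what that cast looks like in practice: the struct below is a simplified stand-in for Core Audio's AudioBuffer, and the interleaved 16-bit format is just an assumed configuration, not something the answer prescribes.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for Core Audio's AudioBuffer struct. */
    typedef struct {
        uint32_t mNumberChannels;
        uint32_t mDataByteSize;
        void    *mData;
    } AudioBuffer;

    int main(void) {
        /* Pretend the stream was configured for mono 16-bit samples. */
        int16_t samples[4] = { 0, 16384, -16384, 32767 };
        AudioBuffer buffer = { 1, sizeof(samples), samples };

        /* Cast mData to the type matching your stream format... */
        int16_t *data = (int16_t *)buffer.mData;

        /* ...and derive the frame count from mDataByteSize. */
        uint32_t frames = buffer.mDataByteSize /
                          (buffer.mNumberChannels * (uint32_t)sizeof(int16_t));

        assert(frames == 4);
        assert(data[3] == 32767);
        printf("frames: %u\n", frames);
        return 0;
    }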

EhTd answered Nov 13 '22