
Playing a sine wave tone in Cocoa

(Update: found the answer, see below.)

I'm trying to play a 1 kHz sine wave tone in an Objective-C Cocoa app; I've (tried to) translate a Swift example to Objective-C, but there must be a mistake somewhere, as the resulting tone is around 440 Hz instead of 1 kHz, and only on the left channel.

The code:

@property (nonatomic, strong) AVAudioEngine *audioEngine;
@property (nonatomic, strong) AVAudioPlayerNode *player;
@property (nonatomic, strong) AVAudioMixerNode *mixer;
@property (nonatomic, strong) AVAudioPCMBuffer *buffer;

// -----

self.audioEngine = [[AVAudioEngine alloc] init];
self.player = [[AVAudioPlayerNode alloc] init];
self.mixer = self.audioEngine.mainMixerNode;
self.buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:[self.player outputFormatForBus:0] frameCapacity:100];
self.buffer.frameLength = 100;

float amplitude = 0.4;
float frequency = 1000;
float sampleRate = [[self.mixer outputFormatForBus:0] sampleRate];
NSInteger channelCount = [[self.mixer outputFormatForBus:0] channelCount];


float *const *floatChannelData = self.buffer.floatChannelData;
float *p2 = *floatChannelData;

NSLog(@"Sine generator: sample rate = %.1f, %ld channels, frame length = %u.", sampleRate, (long)channelCount, self.buffer.frameLength);

for (int i = 0; i < self.buffer.frameLength ; i += channelCount) {

    // a = Amplitude
    // n = current sample
    // r = Sample rate (samples / sec.)
    //
    // f(n) = a * sin( theta(n) )
    // where theta(n) = 2 * M_PI * n / r

    float theta = 441.0f * i * 2.0 * M_PI / sampleRate;
    float value = sinf(theta);

    p2[i] = value * amplitude;
}


[self.audioEngine attachNode:self.player];
[self.audioEngine connect:self.player to:self.mixer format:[self.player outputFormatForBus:0]];
[self.audioEngine startAndReturnError:nil];

[self.player play];
[self.player scheduleBuffer:self.buffer atTime:nil options:AVAudioPlayerNodeBufferLoops completionHandler:nil];

I suspect that there is either a math error in the float theta=... line, or I'm making a mistake with the floatChannelData buffer. The original Swift line reads:

buffer.floatChannelData.memory[i] = val * 0.5

Not sure what to make of the float *const * type of floatChannelData exactly. My understanding is that this is a pointer to 2 x float * const arrays. (2 because of the number of channels, left/right.)

The source of the Swift code is here: http://www.tmroyal.com/playing-sounds-in-swift-audioengine.html

It would be really nice if somebody could explain the buffer structure to me.

Found the solution

The problem was two-fold. First, the value 441.0 did indeed control the frequency. But changing that alone did not solve the problem: the resulting tone was more sawtooth-like than a sine, and I found out why.

With the factor 441 and a sample rate of 44.1 kHz, the ratio of those values was 1:100 - exactly the number of samples in the buffer. Changing 441 to a frequency whose period does not fit a whole number of times into the buffer results in an "incomplete" sine wave: the value in the last sample frame (#100) is not zero, which causes a sharp drop-off when the loop starts again - and that sounds like a sawtooth wave.

I had to change the frame buffer length to be exactly the sample-rate-to-frequency ratio (or a multiple of it), so that the last sample value was (close to) zero and the loop wraps around without a discontinuity.

The updated code:

self.audioEngine = [[AVAudioEngine alloc] init];
self.player = [[AVAudioPlayerNode alloc] init];
self.mixer = self.audioEngine.mainMixerNode;

float sampleRate = [[self.mixer outputFormatForBus:0] sampleRate];
AVAudioFrameCount frameBufferLength = floor(sampleRate / self.frequency) * 1; // * 1 = one full period per buffer

self.buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:[self.player outputFormatForBus:0] frameCapacity:frameBufferLength];
self.buffer.frameLength = frameBufferLength;

NSInteger channelCount = [[self.mixer outputFormatForBus:0] channelCount];

float *const *floatChannelData = self.buffer.floatChannelData;    

NSLog(@"Sine generator: sample rate = %.1f, %ld channels, frame length = %u.", sampleRate, (long)channelCount, self.buffer.frameLength);

for (int i = 0; i < self.buffer.frameLength ; i ++) {
    float theta = self.frequency * i * 2.0 * M_PI / sampleRate;
    float value = sinf(theta);
    for (int channelNumber = 0; channelNumber < channelCount ; channelNumber++) {
        float * const channelBuffer = floatChannelData[channelNumber];
        channelBuffer[i] = value * self.amplitude;
    }   
}

That way any number of channels are handled correctly, too.

fbitterlich asked Feb 22 '16

1 Answer

The frequency part is easy: the literal 441.0f in your calculation of theta controls that, so just change it to whatever you want.

For the mono issue: you appear to be writing only one channel of data (p2[i] = value * amplitude;). If you're correct about the composition of floatChannelData, then you want this:

float * const * floatChannelData = self.buffer.floatChannelData;
float * const left = floatChannelData[0];
float * const right = floatChannelData[1];

//...

// N.B. Changed the increment
for (int i = 0; i < self.buffer.frameLength ; i++ ) {

    // ...

    left[i] = value * amplitude;
    right[i] = value * amplitude;
}

However, given the increment step in your for loop, it's possible that your buffer is interleaved (left and right channels alternating in the same buffer). In that case, you keep the loop increment, but write to both p2[i] and p2[i+1] on each step (easy for stereo; if you had more channels, you'd do an inner loop over those and write to p2[i+j] for j from 0 to $NUM_CHANNELS).

jscs answered Oct 16 '22