 

Performance between CoreAudio and AVFoundation

I have a question about CoreAudio and AVFoundation.

I built a pro audio application using CoreAudio with an AUGraph and AudioUnits.

I would like to switch to the AVFoundation framework, which seems really great. But since I’m worried about performance, I would like to know a bit more about it.

In my Core Audio render callback I process 512 samples at a sample rate of 44.1 kHz, so my callback is called roughly every 11.6 ms, and I think it could easily go faster (am I right?).
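For context on whether the callback "could go faster": on iOS the hardware I/O buffer duration is negotiated through AVAudioSession. The sketch below (hypothetical helper name, modern Swift) shows the usual way to request a shorter buffer; the request is a preference, not a guarantee.

```swift
import AVFoundation

// Hedged sketch (assuming an iOS app): the I/O buffer duration is requested,
// not guaranteed, through the shared audio session. 512 frames at 44.1 kHz is
// roughly 11.6 ms; asking for less may or may not be honored by the device.
func requestShortIOBuffers() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setPreferredIOBufferDuration(256.0 / 44_100.0)   // ask for ~5.8 ms
    try session.setActive(true)
    print("actual IO buffer duration: \(session.ioBufferDuration) s")
}
```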

Now in AVFoundation, the render callback is the tap on an AVAudioNode. I read in the header comment that the bufferSize parameter is "the requested size of the incoming buffers in sample frames. Supported range is [100, 400] ms." So does that mean I won’t be able to process fewer than 4410 samples per call?

Does the restriction come from Objective-C constraints (message calls, locks, and so on)?

Won’t that have an impact on real-time DSP processing?

DEADBEEF asked Aug 11 '17


1 Answer

In my experiments using the iOS AVAudioEngine API (iOS 10.3.3), I indeed found that installing a tap on an AVAudioNode bus would not deliver buffers shorter than 4410 samples on my iPhone 7. This may be because an AVAudioEngine tap delivers buffers to a lower-priority thread than CoreAudio Audio Unit callbacks do, and thus can't be called reliably as often, resulting in the higher latency.
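A hedged sketch (assuming an iOS app that already has microphone permission) of the tap experiment described above: even when a small bufferSize is requested, the delivered buffers stay in the documented 100–400 ms range.

```swift
import AVFoundation

// Install a tap requesting 512-frame buffers; in practice frameLength stays
// at least ~4410 (100 ms at 44.1 kHz) on the devices tested in this answer.
func runTapExperiment() throws {
    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    input.installTap(onBus: 0, bufferSize: 512, format: format) { buffer, _ in
        print("tap delivered \(buffer.frameLength) frames")
    }

    engine.prepare()
    try engine.start()
}
```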

However, one can create a V3 AUAudioUnit subclass, with the buffers received by the instance's internalRenderBlock (for output) configurable from 512 down to as few as 64 samples on an iPhone 7. Calling setPreferredIOBufferDuration on the audio session seems to set the preferred AUAudioUnit render block buffer size. I posted some of my test code (mixed Swift 3 plus Objective-C) for creating what I think is a working low-latency V3 AUAudioUnit tone generator subclass here. One does need to understand and follow the real-time coding restrictions (no method calls, locks, memory allocation, etc.) inside the render block, so plain C for the audio-context code inside the block seems best (perhaps even mandatory).
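The linked test code isn't reproduced here, but as a rough illustration of the shape of such a subclass, here is a minimal sketch in current Swift. The class name ToneGeneratorAU and the fixed 440 Hz tone are illustrative assumptions; the answer's actual code was Swift 3 plus Objective-C and kept the per-sample work in plain C.

```swift
import AVFoundation
import AudioToolbox

// Hypothetical minimal sketch of a V3 AUAudioUnit tone-generator subclass.
class ToneGeneratorAU: AUAudioUnit {
    private var outputBusArray: AUAudioUnitBusArray!
    private let toneSampleRate: Double = 44_100.0

    override init(componentDescription: AudioComponentDescription,
                  options: AudioComponentInstantiationOptions = []) throws {
        try super.init(componentDescription: componentDescription, options: options)
        let format = AVAudioFormat(standardFormatWithSampleRate: toneSampleRate, channels: 1)!
        let outputBus = try AUAudioUnitBus(format: format)
        outputBusArray = AUAudioUnitBusArray(audioUnit: self, busType: .output, busses: [outputBus])
    }

    override var outputBusses: AUAudioUnitBusArray { return outputBusArray }

    override var internalRenderBlock: AUInternalRenderBlock {
        // Capture simple values only; the block itself must avoid method calls,
        // locks, and allocation (the real-time restrictions mentioned above).
        let sampleRate = toneSampleRate
        var phase = 0.0
        return { _, _, frameCount, _, outputData, _, _ in
            let buffers = UnsafeMutableAudioBufferListPointer(outputData)
            let phaseIncrement = 2.0 * Double.pi * 440.0 / sampleRate
            for buffer in buffers {
                guard let samples = buffer.mData?.assumingMemoryBound(to: Float.self) else { continue }
                var p = phase
                for frame in 0..<Int(frameCount) {
                    samples[frame] = Float(sin(p)) * 0.25   // quiet 440 Hz sine
                    p += phaseIncrement
                    if p > 2.0 * Double.pi { p -= 2.0 * Double.pi }
                }
                phase = p
            }
            return noErr
        }
    }
}
```

In a host app, an instance would typically be created with AVAudioUnit.instantiate(with:options:completionHandler:) and attached to the AVAudioEngine, after setting the session's preferred IO buffer duration as described above.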

For low-latency microphone input with equally short buffers, you can try connecting your audio unit subclass to the audioEngine's inputNode, and then calling the input's AURenderPullInputBlock inside your unit's render block.
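Continuing the hypothetical subclass sketched above, its internalRenderBlock could pull the upstream input (for example, after connecting the engine's inputNode to the unit) rather than synthesize a tone. A hedged sketch of such a render block:

```swift
// Replacement render block for the hypothetical subclass above: pull the
// connected input straight into the output buffers, where in-place DSP runs.
override var internalRenderBlock: AUInternalRenderBlock {
    return { _, timestamp, frameCount, _, outputData, _, pullInputBlock in
        guard let pullInput = pullInputBlock else {
            // No upstream connection; a real unit would output silence or an error here.
            return noErr
        }
        var pullFlags = AudioUnitRenderActionFlags()
        // Pull frameCount frames from input bus 0 directly into our output buffer list.
        let status = pullInput(&pullFlags, timestamp, frameCount, 0, outputData)
        if status != noErr { return status }
        // outputData now holds the microphone samples; process them in place here.
        return noErr
    }
}
```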

hotpaw2 answered Nov 22 '22