Core Audio iOS: Retrieving the past & future timestamps for captured & to-be-rendered samples

Question

I have a very simple iOS Core Audio application with the following structure:

Remote I/O Unit Input Bus --> Render Callback --> Remote I/O Unit Output Bus

The render callback function, invoked by the Remote I/O output bus, pulls samples from the input hardware by calling AudioUnitRender() on the Remote I/O input bus. It then processes these samples, writes the result into the supplied AudioBufferList*, and returns, at which point the processed samples are played through the output hardware. All works well.
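For context, a minimal Swift sketch of that structure, assuming the Remote I/O unit itself was passed as the refCon when the callback was registered on the output bus with kAudioUnitProperty_SetRenderCallback:

```swift
import AudioToolbox

// Render callback on the Remote I/O output bus (bus 0). It pulls the
// captured samples from the input bus (bus 1) into the buffers the
// output bus handed us, processes them, and returns.
let renderCallback: AURenderCallback = { inRefCon, ioActionFlags, inTimeStamp, _, inNumberFrames, ioData in
    let rioUnit = unsafeBitCast(inRefCon, to: AudioUnit.self)

    // Pull the captured samples from the input hardware.
    let status = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                 1, inNumberFrames, ioData!)

    // ...process/affect the samples in ioData here before they are
    // played on the output hardware...

    return status
}
```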

My question is how can I know, or calculate, the precise time at which:

  • The samples were captured by the input hardware
  • The samples were actually played on the output hardware

Discussion

An AudioTimeStamp struct is passed into the render callback with valid mHostTime, mSampleTime & mRateScalar values. It is not clear to me exactly what this time stamp reflects. The documentation states:

inTimeStamp The timestamp associated with this call of audio unit render.

This sounds like it represents the time the render was invoked, but how does that relate (if at all) to the time at which the input samples were captured and the output samples will be rendered?

Several resources online suggest using mach_absolute_time() or CACurrentMediaTime() to obtain the current host time, but again I can't see how to get from the current host time to a past or future host time.
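For what it's worth, mHostTime and mach_absolute_time() are on the same clock, so the callback's timestamp can at least be compared with "now". A minimal sketch of the conversion:

```swift
import Darwin

// Convert host-time ticks (the unit of inTimeStamp.mHostTime and of
// mach_absolute_time()) into seconds.
func hostTimeToSeconds(_ hostTime: UInt64) -> Double {
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)
    let nanos = Double(hostTime) * Double(timebase.numer) / Double(timebase.denom)
    return nanos / 1_000_000_000.0
}

// Inside the render callback, this gives the offset between "now" and
// the timestamp Core Audio handed us:
// let delta = hostTimeToSeconds(mach_absolute_time())
//           - hostTimeToSeconds(inTimeStamp.pointee.mHostTime)
```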

The following quote from an Apple mailing list thread describes three time stamps: one for the input data (in the past), one for "now", and one for the output data (in the future). This is exactly what I am looking for, but I believe that post concerns OS X and the AUHAL I/O unit. I cannot find a way of retrieving these time stamps on iOS.

So, the way CoreAudio works is that an I/O proc fires and gives you 3 time stamps: (1) Is the time stamp of the input data - if any of course. This will always be at least a buffer size in the past (2) Is the time stamp for now - when the I/O proc was woken up to run (3) Is the time stamp for the output data you will provide. This is always some time in the future - usually it is a buffer size in the future. (http://lists.apple.com/archives/coreaudio-api/2005/Sep/msg00220.html)

I suspect I may be missing something obvious, so hopefully someone can shed some light on this.

Thanks in advance.

Asked by Andy Barnard, Nov 11 '22


1 Answer

If you're trying to account for actual capture time and actual output time, maybe you could inspect the hardware latency properties on the audio session (AVAudioSession's inputLatency and outputLatency). The audio units also have a latency property (kAudioUnitProperty_Latency). Not sure if this will give you the accuracy you're looking for.
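A sketch of where those properties live, assuming rioUnit is your configured Remote I/O unit. The arithmetic in the final comment is an assumption about how the latencies combine, not documented behavior:

```swift
import AVFoundation
import AudioToolbox

func printLatencies(rioUnit: AudioUnit) {
    let session = AVAudioSession.sharedInstance()
    let inputLatency = session.inputLatency      // seconds: hardware capture -> input buffer
    let outputLatency = session.outputLatency    // seconds: output buffer -> hardware playback
    let bufferDuration = session.ioBufferDuration

    // Audio units also report their own processing latency:
    var auLatency: Float64 = 0
    var size = UInt32(MemoryLayout<Float64>.size)
    AudioUnitGetProperty(rioUnit, kAudioUnitProperty_Latency,
                         kAudioUnitScope_Global, 0, &auLatency, &size)

    // One plausible (unverified) reconstruction from the callback's timestamp t:
    //   capture time  ≈ t - bufferDuration - inputLatency
    //   playback time ≈ t + bufferDuration + outputLatency
    print(inputLatency, outputLatency, bufferDuration, auLatency)
}
```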

Answered by invalidname, Nov 15 '22