I've just written some iOS code that uses Audio Units to get a mono float stream from the microphone at the hardware sampling rate.
It's ended up being quite a lot of code! First I have to set up an audio session, specifying a preferred sample rate of 48 kHz. I then have to start the session and inspect the sample rate that was actually granted, which is the true hardware sampling rate. Finally I have to set up an audio unit and implement a render callback.
But at least I am able to use the hardware sampling rate (so I can be certain no information is lost through software resampling), and I am able to request the smallest possible buffer size, for minimal latency.
What is the analogous process on Android?
How can I get down to the wire?
PS: Nobody has mentioned it yet, but it appears to be possible to work at the JNI level.
The `AudioRecord` class should be able to do what you need from the Java/Kotlin side of things. It will give you raw PCM data at the sampling rate you requested (assuming the hardware supports it). It's up to your app to read the data out of the `AudioRecord` instance in an efficient and timely manner so it does not overflow the buffer and drop data.
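For reference, here's a minimal Kotlin sketch of that setup. It assumes API 23+ (for the float `read()` overload and `READ_BLOCKING`) and that the `RECORD_AUDIO` permission has already been granted; `startCapture` and the `onAudio` callback are illustrative names, not part of the framework API:

```kotlin
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

// Sketch: capture mono float PCM with the smallest buffer the platform
// will grant, and hand each filled buffer to a caller-supplied callback.
fun startCapture(onAudio: (FloatArray, Int) -> Unit): Thread {
    val sampleRate = 48000 // requested; Android resamples in software if the hardware rate differs

    // Smallest legal buffer for this configuration, in bytes. This is the
    // closest analogue to requesting a small IO buffer duration on iOS.
    val minBufBytes = AudioRecord.getMinBufferSize(
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_FLOAT
    )

    val record = AudioRecord(
        MediaRecorder.AudioSource.MIC, // UNPROCESSED (API 24+) bypasses input effects where supported
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_FLOAT,
        minBufBytes
    )
    check(record.state == AudioRecord.STATE_INITIALIZED) { "AudioRecord init failed" }

    record.startRecording()

    val thread = Thread {
        // Ask the scheduler for audio priority so the read loop keeps up.
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO)
        val buffer = FloatArray(minBufBytes / 4) // 4 bytes per float sample
        while (!Thread.currentThread().isInterrupted) {
            // Blocking read; fall behind here and the internal buffer overruns.
            val n = record.read(buffer, 0, buffer.size, AudioRecord.READ_BLOCKING)
            if (n > 0) onAudio(buffer, n)
        }
        record.stop()
        record.release()
    }
    thread.start()
    return thread
}
```

One caveat versus iOS: `AudioRecord` won't tell you the hardware rate, it just resamples to whatever you asked for. If you want to avoid that, you can query `AudioManager.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE)` for the device's native rate and request that instead (it's an output-side property, but in practice it usually matches the input path).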