I'm using the AudioRecord class, whose constructor is:
AudioRecord(
int audioSource, int sampleRateInHz,
int channelConfig, int audioFormat, int bufferSizeInBytes)
In all (or most) of the tutorials and examples I find on the Internet, it's recommended to set bufferSizeInBytes as follows:
bufferSizeInBytes = getMinBufferSize(
        sampleRateInHz, channelConfig, audioFormat)
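For example, this is the kind of setup those tutorials show (the configuration values below are just placeholders for a mono, 16-bit PCM stream at 44.1 kHz; a sketch, not code from any particular tutorial):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

int sampleRateInHz = 44100;
int channelConfig = AudioFormat.CHANNEL_IN_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;

// Ask the platform for the smallest buffer it will accept for this configuration.
int bufferSizeInBytes = AudioRecord.getMinBufferSize(
        sampleRateInHz, channelConfig, audioFormat);

// Requires the RECORD_AUDIO permission.
AudioRecord recorder = new AudioRecord(
        MediaRecorder.AudioSource.MIC,
        sampleRateInHz,
        channelConfig,
        audioFormat,
        bufferSizeInBytes);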
Could anyone tell me the reason?
I need to compute the correlation between the values I'm recording and a pattern. This pattern is longer than the minimum buffer size. So, should I just increase bufferSizeInBytes to the value I prefer, or is that going to worsen the performance of AudioRecord?
Could anyone tell me the reason?
Because what getMinBufferSize returns for a given configuration is the smallest buffer size you'll be allowed to specify when creating your AudioRecord.
Why would you want the smallest possible buffer size? To get the lowest possible latency.
Imagine that you're doing something like an SPL meter; you wouldn't want there to be a one-second delay before your UI reacts to a change in the sound pressure.
The buffer size doesn't determine how much data you can request from read(), though. It's OK to request more data than the size of the AudioRecord's buffer; read() simply won't return until all the data you've requested has been read.
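So for your correlation use case you don't have to enlarge the AudioRecord's buffer; you can simply ask read() for a chunk as long as your pattern. A rough sketch, reusing the recorder created above and assuming 16-bit PCM with a hypothetical patternLengthInShorts (the length of your pattern in samples):

short[] chunk = new short[patternLengthInShorts];
int offset = 0;
recorder.startRecording();
while (offset < chunk.length) {
    // In the default blocking mode, read() returns once it has delivered the
    // requested number of shorts, or a negative error code on failure.
    int read = recorder.read(chunk, offset, chunk.length - offset);
    if (read < 0) {
        break; // e.g. ERROR_INVALID_OPERATION if the recorder isn't initialized
    }
    offset += read;
}
recorder.stop();
// chunk now holds the recorded samples to correlate against your pattern.

The loop around read() is just defensive; in blocking mode a single call will normally fill the whole array.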