I have a WebRTC iOS application that uses AVAudioSession and RTCAudioSource. I need to detect when the microphone starts receiving loud sounds (for example, when a person starts speaking), similar to what hark does in the browser with AudioContext. How can I detect this, or get something resembling a stream that can be measured, like AVCaptureAudioChannel or AVCaptureAudioDataOutput?
After using AVAudioSession to request permission to record audio, I would recommend using AVAudioRecorder. It is a fairly straightforward class, and the setup is as simple as:

1. Create an AVAudioRecorder instance.
2. Set meteringEnabled on the instance.
3. Call prepareToRecord on the instance and start recording.

After enabling metering and starting the recording, you can access the input volume measurement using the method averagePowerForChannel: (call updateMeters first to refresh the values). You may want to read Apple's documentation on AVAudioRecorder.
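For reference, here is a minimal Swift sketch of that approach. It assumes record permission has already been granted; the MicrophoneLevelMonitor class name, the -30 dBFS threshold, and the 0.1-second polling interval are illustrative choices you would tune for your app, and whether this coexists cleanly with WebRTC's own audio capture is something you would need to verify.

```swift
import AVFoundation

// Hypothetical helper: polls AVAudioRecorder's meters the way hark polls an
// AnalyserNode, and reports when the input rises above a threshold.
final class MicrophoneLevelMonitor {
    private var recorder: AVAudioRecorder?
    private var timer: Timer?

    // Assumed threshold in dBFS (0 = loudest, -160 = silence); tune to taste.
    private let speakingThreshold: Float = -30.0

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .voiceChat)
        try session.setActive(true)

        // The recorder is only used for metering, so record to a throwaway file.
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("metering.m4a")
        let settings: [String: Any] = [
            AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1
        ]

        let recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder.isMeteringEnabled = true
        recorder.prepareToRecord()
        recorder.record()
        self.recorder = recorder

        // Refresh and read the meter roughly ten times per second.
        timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            guard let self = self, let recorder = self.recorder else { return }
            recorder.updateMeters()
            let power = recorder.averagePower(forChannel: 0)
            if power > self.speakingThreshold {
                print("Loud input detected (\(power) dBFS)")
            }
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
        recorder?.stop()
        recorder = nil
    }
}
```

You could call `try monitor.start()` once your session is configured and replace the `print` with whatever "speaking started" callback your app needs.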
~~~~~~~~~~~~~~~~~~~~~~~ N O T E ~~~~~~~~~~~~~~~~~~~~~~~
I am not familiar with the WebRTC framework/functionality, but the AVAudioRecorder class will provide you with the ability to measure the audio input during a recording.
~~~~~~~~~~~~~~~~~~~~~~ S A M P L E ~~~~~~~~~~~~~~~~~~~~~~
I've included a GitHub sample project that I've used in the past. It is set up to measure the audio input level using the AVAudioRecorder class I've described.