I am trying the Speech recognition sample. When I recognise my speech via the microphone and then have the iPhone speak the recognised text back, it works, but the voice is too low. Can you guide me on this?
If I instead trigger the AVSpeechUtterance code from a simple button action, the volume is normal. But after I call the startRecognise() method, the volume is too low.
My Code
func startRecognise()
{
    let audioSession = AVAudioSession.sharedInstance() //2
    do
    {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try audioSession.setMode(AVAudioSessionModeDefault)
        try audioSession.setMode(AVAudioSessionModeMeasurement) // overrides the default mode set on the previous line
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
    }
    catch
    {
        print("audioSession properties weren't set because of an error.")
    }

    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }
    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
    }
    recognitionRequest.shouldReportPartialResults = true

    recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        if result != nil
        {
            let lastword = result?.bestTranscription.formattedString.components(separatedBy: " ").last
            if lastword == "repeat" || lastword == "Repeat" {
                self.myUtterance2 = AVSpeechUtterance(string: "You have spoken repeat")
                self.myUtterance2.rate = 0.4
                self.myUtterance2.volume = 1.0
                self.myUtterance2.pitchMultiplier = 1.0
                self.synth1.speak(self.myUtterance2)
                // HERE VOICE IS TOO LOW.
            }
        }
    })

    let recordingFormat = inputNode.outputFormat(forBus: 0) //11
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest?.append(buffer)
    }

    audioEngine.prepare()
    do
    {
        try audioEngine.start()
    }
    catch
    {
        print("audioEngine couldn't start because of an error.")
    }
}
My Button Action
func buttonAction()
{
    self.myUtterance2 = AVSpeechUtterance(string: "You are in button action")
    self.myUtterance2.rate = 0.4
    self.myUtterance2.volume = 1.0
    self.myUtterance2.pitchMultiplier = 1.0
    self.synth1.speak(self.myUtterance2)
    // Before calling the startRecognise() method, buttonAction() speaks at normal volume.
    // After startRecognise() has been called, the volume is too low in both methods.
}
Go to Settings > Sounds (or Settings > Sounds & Haptics) and drag the Ringer and Alerts slider back and forth a few times. If you don't hear any sound, or if the speaker button on the slider is dimmed, your speaker might need service.
To increase or decrease the volume, press the volume buttons on the iPhone. To set other audio options, go to Settings > Accessibility > VoiceOver > Audio, where you can adjust and preview sound effects and haptics.
Also consider audio ducking: a feature that kicks in when you make or receive a FaceTime call and lowers the volume of other audio sources, so that you can take the call without having to locate and silence the other sound.
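As an aside (my own illustration, not part of the original answer), ducking is also something your own app can opt into through AVAudioSession category options; for example, .duckOthers lowers other apps' audio while your session is active:

import AVFoundation

// A minimal sketch, assuming you want other apps' audio ducked rather
// than interrupted while your app plays sound.
func configureSessionWithDucking() throws {
    let session = AVAudioSession.sharedInstance()
    // .duckOthers: other audio keeps playing, at reduced volume,
    // while this session is active.
    try session.setCategory(.playback, options: [.duckOthers])
    try session.setActive(true)
    // ... play audio ...
    // Deactivating with .notifyOthersOnDeactivation lets other apps
    // restore their volume afterwards.
    try session.setActive(false, options: .notifyOthersOnDeactivation)
}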
After digging into the technical details, it's observed that overrideOutputAudioPort() temporarily changes the current audio route.

func overrideOutputAudioPort(_ portOverride: AVAudioSession.PortOverride) throws

If your app uses the playAndRecord category, calling this method with the AVAudioSession.PortOverride.speaker option causes audio to be routed to the built-in speaker and microphone regardless of other settings. This change remains in effect only until the current route changes or you call this method again with the AVAudioSession.PortOverride.none option.

If you would prefer to permanently enable this behavior, you should instead set the category's defaultToSpeaker option. Setting this option always routes audio to the speaker rather than the receiver when no other accessory, such as headphones, is in use.
In Swift 5.x, the above code looks like:

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playAndRecord)
    try audioSession.setMode(.default)
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    try audioSession.overrideOutputAudioPort(.speaker)
} catch {
    debugPrint("Unable to start audio engine")
    return
}
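If you would rather use the permanent defaultToSpeaker behaviour described above, a minimal sketch (mine, under the same assumptions) replaces the override call with a category option:

import AVFoundation

// .defaultToSpeaker is only valid with the .playAndRecord category;
// audio then routes to the speaker unless headphones are connected.
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playAndRecord,
                                 mode: .default,
                                 options: [.defaultToSpeaker])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
} catch {
    debugPrint("Unable to configure the audio session")
}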
Setting the mode to measurement minimizes the amount of system-supplied signal processing applied to input and output signals:

try audioSession.setMode(.measurement)

Commenting out this mode and using the default mode instead permanently enables the audio route to the built-in speaker and microphone.
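If you do need .measurement while the recognizer is listening, one workaround (my own sketch, not something this answer prescribes) is to switch the session back to .default just before speaking and restore .measurement once speech finishes:

import AVFoundation

// Hypothetical helper, assuming .measurement is wanted only while
// listening and .default while the synthesizer is speaking.
func speakWithDefaultMode(_ text: String, using synth: AVSpeechSynthesizer) {
    let session = AVAudioSession.sharedInstance()
    do {
        // Leave measurement mode so normal output processing (and the
        // expected volume) applies to the synthesized speech.
        try session.setMode(.default)
    } catch {
        debugPrint("Could not switch the audio session mode")
    }
    let utterance = AVSpeechUtterance(string: text)
    utterance.volume = 1.0
    synth.speak(utterance)
    // Restore .measurement when speaking ends, e.g. from
    // speechSynthesizer(_:didFinish:) in AVSpeechSynthesizerDelegate.
}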
Thanks @McDonal_11 for your answer. Hope this helps to understand the technical details.
Finally, I got the solution.
func startRecognise()
{
    let audioSession = AVAudioSession.sharedInstance() //2
    do
    {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try audioSession.setMode(AVAudioSessionModeDefault)
        //try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
    }
    catch
    {
        print("audioSession properties weren't set because of an error.")
    }
    ...
}
Once I commented out this line, try audioSession.setMode(AVAudioSessionModeMeasurement), the volume works normally.