I can use the Speech framework (speech recognition) and I can use AVFoundation to play .wav files in Xcode 8/iOS 10; I just can't use them both together. I have working speech recognition code that imports Speech, but when I import AVFoundation into the same app and use the following code, there is no sound and no errors are generated:
var audioPlayer: AVAudioPlayer!

func playAudio() {
    // Force-unwrapping assumes file.wav is present in the app bundle.
    let path = Bundle.main.path(forResource: "file.wav", ofType: nil)!
    let url = URL(fileURLWithPath: path)
    do {
        let sound = try AVAudioPlayer(contentsOf: url)
        audioPlayer = sound // keep a strong reference so the player isn't deallocated
        sound.play()
    } catch {
        // handle error
    }
}
I assume this is because both frameworks use the shared audio session. Can anyone suggest how to use both in the same app? I also find that I cannot use speech recognition and text-to-speech together in the same app.
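If it matters, speech recognition samples such as Apple's SpeakToMe typically configure the session along these lines (a paraphrased sketch, not my exact code). A record-only category like this silences playback entirely, which would explain the lack of sound and the lack of errors:

// Paraphrased from Apple's speech recognition sample.
// AVAudioSessionCategoryRecord disables playback, so an AVAudioPlayer
// started while this is active produces no sound and no error.
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(AVAudioSessionCategoryRecord)
try audioSession.setMode(AVAudioSessionModeMeasurement)
try audioSession.setActive(true, with: .notifyOthersOnDeactivation)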
I just bumped into the same problem, and here is how I solved it.
Add the following lines when speech recognition is done. They set the audio session back to the AVAudioSessionCategoryPlayback category:
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSessionCategoryPlayback)
    try audioSession.setActive(false, with: .notifyOthersOnDeactivation)
} catch {
    // handle errors
}
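To put it in context, here is a minimal sketch of the whole stop-then-play sequence. audioEngine and recognitionRequest are hypothetical names standing in for whatever your usual SFSpeechRecognizer setup defines, and playAudio() is the function from the question:

// Sketch only: end recognition, restore a playback-capable category,
// then play. AVAudioPlayer reactivates the session when play() runs.
func stopRecordingAndPlay() {
    audioEngine.stop()
    audioEngine.inputNode?.removeTap(onBus: 0) // inputNode is optional in the iOS 10 SDK
    recognitionRequest?.endAudio()

    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayback)
        try audioSession.setActive(false, with: .notifyOthersOnDeactivation)
    } catch {
        print("Audio session error: \(error)")
    }

    playAudio()
}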
Hope it helps.
You should change this line:
try audioSession.setCategory(AVAudioSessionCategoryPlayback)
to:
try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
This should work ;-)
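One caveat worth adding: with AVAudioSessionCategoryPlayAndRecord, iPhones route output to the quiet earpiece receiver by default, so playback can still seem silent. If that happens, passing the .defaultToSpeaker option should route output to the loudspeaker (a sketch, using the Swift 3 signature):

// PlayAndRecord keeps the mic available while allowing playback;
// .defaultToSpeaker routes output to the loudspeaker instead of the receiver.
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord,
                                 with: .defaultToSpeaker)
} catch {
    print("Could not set category: \(error)")
}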
It seems that AVAudioPlayer stops playing the sample if you're using AVAudioSession to record from the microphone, as in Apple's speech recognition example. However, I've managed to circumvent this by using AVCaptureSession to capture audio, as described in this answer.
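For completeness, here is a rough sketch of that approach (my own reconstruction, not the linked answer's exact code): capture microphone audio with AVCaptureAudioDataOutput and append each sample buffer to an SFSpeechAudioBufferRecognitionRequest, bypassing AVAudioEngine. The class and queue names are made up for the example:

import AVFoundation
import Speech

// Sketch: feed AVCaptureSession audio into a speech recognition request.
// Pass `request` to SFSpeechRecognizer's recognitionTask(with:resultHandler:)
// as usual to receive transcriptions.
class CaptureRecognizer: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    let captureSession = AVCaptureSession()
    let request = SFSpeechAudioBufferRecognitionRequest()

    func start() throws {
        guard let mic = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio) else { return }
        let input = try AVCaptureDeviceInput(device: mic)
        let output = AVCaptureAudioDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audio.capture"))
        if captureSession.canAddInput(input) { captureSession.addInput(input) }
        if captureSession.canAddOutput(output) { captureSession.addOutput(output) }
        captureSession.startRunning()
    }

    // Each captured buffer is appended to the recognition request.
    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        request.appendAudioSampleBuffer(sampleBuffer)
    }
}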