I just don't know how to do it...
I searched here and on Google, and people talked about AVSpeechSynthesizerDelegate, but I wasn't able to use it.
I want to run a function exactly when the speech is over.
How can I achieve this? If I must use the delegate, how should I do it?
I tried it this way:
func speechSynthesizer(synthesizer: AVSpeechSynthesizer, didFinishSpeechUtterance utterance: AVSpeechUtterance) {
    falando = false
    print("FINISHED")
}
This was one of the functions I found in the developer documentation, but although the speech was spoken, nothing was printed.
I tried declaring class A: AVSpeechSynthesizerDelegate so that I could then set Speech.delegate = self (Speech is a property of A of type AVSpeechSynthesizer), but it said A does not conform to protocol NSObjectProtocol.
How can I run some function (even a print) as soon as the speech is over?
Thank you!
Create a Speaker class that inherits from NSObject and ObservableObject.
import AVFoundation

internal class Speaker: NSObject, ObservableObject {
    internal var errorDescription: String? = nil
    private let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer()
    @Published var isSpeaking: Bool = false
    @Published var isShowingSpeakingErrorAlert: Bool = false

    override init() {
        super.init()
        self.synthesizer.delegate = self
    }

    internal func speak(_ text: String, language: String) {
        do {
            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = AVSpeechSynthesisVoice(language: language)
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
            try AVAudioSession.sharedInstance().setActive(true)
            self.synthesizer.speak(utterance)
        } catch let error {
            self.errorDescription = error.localizedDescription
            isShowingSpeakingErrorAlert.toggle()
        }
    }

    internal func stop() {
        self.synthesizer.stopSpeaking(at: .immediate)
    }
}
Extend it and implement the necessary delegate methods.
extension Speaker: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didStart utterance: AVSpeechUtterance) {
        self.isSpeaking = true
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didCancel utterance: AVSpeechUtterance) {
        self.isSpeaking = false
        try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        self.isSpeaking = false
        try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
    }
}
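If you specifically want to run an arbitrary function the moment speech ends (the original question), one option is to hold a completion closure on the class and invoke it from didFinish. A minimal sketch; the CallbackSpeaker name and onFinish property are my own, not part of the AVFoundation API:

import AVFoundation

class CallbackSpeaker: NSObject, AVSpeechSynthesizerDelegate {
    private let synth = AVSpeechSynthesizer()
    // Hypothetical hook: called once when the current utterance finishes.
    private var onFinish: (() -> Void)?

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(_ text: String, completion: (() -> Void)? = nil) {
        onFinish = completion
        synth.speak(AVSpeechUtterance(string: text))
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        onFinish?()
        onFinish = nil
    }
}

Usage would then be as simple as speaker.speak("Hello") { print("FINISHED") }.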
Add Speaker to the necessary view using the StateObject wrapper.
import SwiftUI

struct ContentView: View {
    let text: String = "Hello World!"
    @StateObject var speaker: Speaker = Speaker()

    var body: some View {
        HStack {
            Text(text)
            Spacer()
            Button(action: {
                if self.speaker.isSpeaking {
                    speaker.stop()
                } else {
                    speaker.speak(text, language: "en-US")
                }
            }) {
                Image(systemName: self.speaker.isSpeaking ? "stop.circle" : "speaker.wave.2.circle")
                    .resizable()
                    .frame(width: 30, height: 30)
            }
            .buttonStyle(BorderlessButtonStyle())
            .alert(isPresented: $speaker.isShowingSpeakingErrorAlert) {
                Alert(title: Text("Pronunciation error", comment: "Pronunciation error alert title."), message: Text(speaker.errorDescription ?? ""))
            }
        }
        .padding()
    }
}
A does not conform to protocol NSObjectProtocol
means that your class must inherit from NSObject; you can read more about it here.
Now I don't know how you've structured your code, but this little example seems to work for me. First, a dead simple class that holds the AVSpeechSynthesizer:
import AVFoundation

class Speaker: NSObject {
    let synth = AVSpeechSynthesizer()

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(_ string: String) {
        let utterance = AVSpeechUtterance(string: string)
        synth.speak(utterance)
    }
}
Notice that I set the delegate here (in the init method), and notice that the class must inherit from NSObject to keep the compiler happy (very important!).
And then the actual delegate method:
extension Speaker: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("all done")
    }
}
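For comparison with the attempt in the question: didFinishSpeechUtterance was the old Swift 2 spelling of this delegate method, which is why it was never called. With the modern signature, the question's own method (using the asker's falando flag) would presumably look like this:

func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
    falando = false
    print("FINISHED")
}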
And finally, I can use that class here, like so:
class ViewController: UIViewController {
    let speaker = Speaker()

    @IBAction func buttonTapped(_ sender: UIButton) {
        speaker.speak("Hello world")
    }
}
Which rewards me with "all done" in my console when the AVSpeechSynthesizer has stopped speaking.
Hope that helps you.
So, time passes, and in the comments below @case-silva asked if there was a practical example, and @dima-gershman suggested just using the AVSpeechSynthesizer directly in the ViewController.
To accommodate both, I've made a simple ViewController example with a UITextField and a UIButton.
The flow is: tap the button, the UI is disabled while the text is spoken, and the delegate re-enables it when speech finishes.
UIViewController Example

import UIKit
import AVFoundation
class ViewController: UIViewController {

    //MARK: Outlets
    @IBOutlet weak var textField: UITextField!
    @IBOutlet weak var speakButton: UIButton!

    let synth = AVSpeechSynthesizer()

    override func viewDidLoad() {
        super.viewDidLoad()
        synth.delegate = self
    }

    @IBAction func speakButtonTapped(_ sender: UIButton) {
        //We're ready to start speaking, disable UI while we're speaking
        view.backgroundColor = .darkGray
        speakButton.isEnabled = false

        let inputText = textField.text ?? ""
        let textToSpeak = inputText.isEmpty ? "Please enter some text" : inputText
        let speakUtterance = AVSpeechUtterance(string: textToSpeak)
        synth.speak(speakUtterance)
    }
}
extension ViewController: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        //Speaking is done, enable speech UI for next round
        speakButton.isEnabled = true
        view.backgroundColor = .lightGray
        textField.text = ""
    }
}
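If you queue several utterances, the same didFinish method fires once per utterance, and you can use the utterance parameter to tell them apart. A sketch of one way to do that; the QueueSpeaker name and the handlers dictionary are my own invention, not AVFoundation API:

import AVFoundation

class QueueSpeaker: NSObject, AVSpeechSynthesizerDelegate {
    private let synth = AVSpeechSynthesizer()
    // Hypothetical storage: one callback per queued utterance.
    private var handlers: [AVSpeechUtterance: () -> Void] = [:]

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(_ text: String, then handler: @escaping () -> Void) {
        let utterance = AVSpeechUtterance(string: text)
        handlers[utterance] = handler
        synth.speak(utterance)   // utterances are queued and spoken in order
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        // Run and discard the callback that belongs to this particular utterance.
        handlers.removeValue(forKey: utterance)?()
    }
}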
Hope that gives you a clue, Case.