Is it possible to use the synthesised speech from the Web Speech API as a SourceNode inside the Web Audio API's audio context?
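For context, speechSynthesis normally plays the utterance straight to the device's audio output and exposes no stream or node that could be routed into an AudioContext. A minimal sketch of an ordinary call (the utterance text here is just a placeholder) looks like this:

// Ordinary speech synthesis: the audio goes directly to the output device,
// and the API does not expose a MediaStream or AudioNode for it.
const utterance = new SpeechSynthesisUtterance('Hello from the Web Speech API');
utterance.lang = 'en-US';
window.speechSynthesis.speak(utterance);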
Conclusion: the Web Speech API is powerful and somewhat underused. It has a few annoying bugs and the SpeechRecognition interface is poorly supported, but speechSynthesis works surprisingly well once you iron out its quirks.
The speech recognition part of the Web Speech API allows authorized web applications to access the device's microphone and produce a transcript of the recorded voice. This lets web applications use voice as an input and control method, alongside touch or keyboard.
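A minimal sketch of what that looks like in practice; the prefixed webkitSpeechRecognition constructor is assumed for Chromium-based browsers, and the property and event names follow the standard SpeechRecognition interface:

const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.lang = 'en-US';
recognition.interimResults = false;
recognition.maxAlternatives = 1;

recognition.onresult = (event) => {
  // Transcript of the top alternative of the first result.
  const transcript = event.results[0][0].transcript;
  console.log('Heard: ' + transcript);
};

recognition.onerror = (event) => {
  console.error('Recognition error: ' + event.error);
};

// Starting recognition prompts the user for microphone access;
// call this from a user gesture such as a button click.
recognition.start();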
A grammar can be registered with speechRecognitionList.addFromString(grammar, 1). We then add the SpeechGrammarList to the speech recognition instance by setting it as the value of the SpeechRecognition grammars property, as shown below.
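A sketch of those two steps, continuing from the recognition instance above; the JSGF colour grammar is just an example, and SpeechGrammarList may need the webkit prefix in Chromium-based browsers:

const grammar = '#JSGF V1.0; grammar colors; public <color> = red | green | blue ;';
const SpeechGrammarList = window.SpeechGrammarList || window.webkitSpeechGrammarList;
const speechRecognitionList = new SpeechGrammarList();

// addFromString(string, weight): a weight of 1 marks this grammar as important.
speechRecognitionList.addFromString(grammar, 1);

// Attach the grammar list to the recognition instance via its grammars property.
recognition.grammars = speechRecognitionList;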
I actually asked about adding this on the Web Speech mailing list, and was basically told "no". To be fair to people on that mailing list, I was unable to think of more than one or two specific use cases when prompted.
So unless they've changed something in the past month or so, it sounds like this isn't a planned feature.