Is the Speech Synthesis API supported by Chromium? Do I need to install voices? If so, how can I do that? I'm using Fedora. Are voices like video codecs, where I need to install an extra package for them to work?
I've tried this code:
var msg = new SpeechSynthesisUtterance('I see dead people!');
msg.voice = speechSynthesis.getVoices().filter(function(voice) {
  return voice.name == 'Whisper';
})[0];
speechSynthesis.speak(msg);
from the article Web apps that talk - Introduction to the Speech Synthesis API, but the function speechSynthesis.getVoices() returns an empty array.
I've also tried:
window.speechSynthesis.onvoiceschanged = function() {
  console.log(window.speechSynthesis.getVoices())
};
The function gets executed, but the array is still empty.
On https://fedoraproject.org/wiki/Chromium there is information about using the --enable-speech-dispatcher flag, but when I used it I got a warning that the flag is not supported.
Is the Speech Synthesis API supported by Chromium?
Yes, the Web Speech API has basic support in the Chromium browser, though there are several issues with both the Chromium and Firefox implementations of the specification; see Blink>Speech, Internals>SpeechSynthesis, Web Speech.
Do I need to install voices? If so, how can I do that? I'm using Fedora. Are voices like video codecs, where I need to install an extra package for them to work?
Yes, voices need to be installed. Chromium does not ship by default with voices to set at the SpeechSynthesisUtterance voice attribute; see How to use Web Speech API at chromium? and How to capture generated audio from window.speechSynthesis.speak() call?.
You can install speech-dispatcher as the system speech synthesis server and espeak as the speech synthesizer.
$ yum install speech-dispatcher espeak
You can also create a configuration file for speech-dispatcher in your home folder to set specific options for both speech-dispatcher and the output module that you use, for example espeak:
$ spd-conf -u
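The user configuration generated by spd-conf is typically written to ~/.config/speech-dispatcher/speechd.conf (the exact path may vary by distribution). A minimal sketch of options you might set there, assuming the stock option names from speechd.conf:
# ~/.config/speech-dispatcher/speechd.conf
# 0 means no logging, 5 means debug; higher levels expose the SSIP traffic
LogLevel  5
# use espeak as the default output module
DefaultModule  espeak
# default synthesis language
DefaultLanguage  en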
Launching Chromium with the --enable-speech-dispatcher flag automatically spawns a connection to speech-dispatcher, where you can set the LogLevel between 0 and 5 to review the SSIP communication between Chromium code and speech-dispatcher.
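For example (the binary name is an assumption; depending on the Fedora package it may be chromium-browser or chromium):
$ chromium-browser --enable-speech-dispatcher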
.getVoices() returns results asynchronously and needs to be called twice; see this Electron issue at GitHub: Speech Synthesis: No Voices #586.
window.speechSynthesis.onvoiceschanged = e => {
  const voices = window.speechSynthesis.getVoices();
  // do speech synthesis stuff
  console.log(voices);
};
// the first synchronous call primes the voice list and triggers onvoiceschanged
window.speechSynthesis.getVoices();
or composed as an asynchronous function which returns a Promise that resolves with the array of voices:
(async() => {
  const getVoices = (voiceName = "") => {
    return new Promise(resolve => {
      window.speechSynthesis.onvoiceschanged = e => {
        // optionally filter returned voices by `voiceName`
        // resolve(
        //   window.speechSynthesis.getVoices()
        //     .filter(({name}) => /^en.+whisper/.test(name))
        // );
        resolve(window.speechSynthesis.getVoices());
      };
      window.speechSynthesis.getVoices();
    });
  };
  const voices = await getVoices();
  console.log(voices);
})();
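As a usage sketch (not part of the original answer): once voices are installed, the resolved list can be used to pick a voice for a SpeechSynthesisUtterance and speak it. The speakWithVoice helper and the /english/i name pattern below are assumptions; the voice names actually exposed depend on the speech-dispatcher output module.
// hypothetical helper: resolve the voice list, pick the first voice whose
// name matches `namePattern`, then speak `text` with it; falls back to the
// default voice when no name matches
const speakWithVoice = (text, namePattern) =>
  new Promise(resolve => {
    window.speechSynthesis.onvoiceschanged = () => {
      const voices = window.speechSynthesis.getVoices();
      const msg = new SpeechSynthesisUtterance(text);
      msg.voice = voices.find(({ name }) => namePattern.test(name)) || null;
      msg.onend = resolve;
      window.speechSynthesis.speak(msg);
    };
    window.speechSynthesis.getVoices();
  });

speakWithVoice("I see dead people!", /english/i);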