I'd like to create a custom voice in Mozilla TTS using audio samples I have recorded but am not sure how to get started. The Mozilla TTS project has documentation and tutorials, but I'm having trouble putting the pieces together -- it seems like there's some basic information missing that someone starting out needs to know to get going.
Some questions I have:
- The metadata.csv file -- what do I need to put in that file?
- What do I customize in the config file?
- The scale_stats.npy file -- how do I generate this?
After a lot of research and experimentation, I can share my learnings to answer my own questions.
The Mozilla TTS docker image is really geared for playback and doesn't seem equipped to be used for training. At least, even when running a shell inside the container, I could not get training to work. But after figuring out what was causing PIP to be unhappy, the process of getting Mozilla TTS up and running in Ubuntu turns out to be pretty straightforward.
The documentation for Mozilla TTS doesn't mention anything about virtual environments, but IMHO it really should. Virtual environments ensure that dependencies for different Python-based applications on your machine don't conflict.
I'm running Ubuntu 20.04 on WSL, so Python 3 is already installed. Given that, from within my home folder, here are the commands I used to get a working copy of Mozilla TTS:
sudo apt-get install espeak
git clone https://github.com/mozilla/TTS mozilla-tts
python3 -m venv mozilla-tts
cd mozilla-tts
./bin/pip install -e .
This created a folder called ~/mozilla-tts in my home folder that contains the Mozilla TTS code. The folder is set up as a virtual environment, which means that as long as I execute Python commands via ~/mozilla-tts/bin/python and pip via ~/mozilla-tts/bin/pip, Python will use only the packages that exist in that virtual environment. That eliminates the need to be root when running pip (since we're not affecting system-wide packages), and it ensures no package conflicts. Score!
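As a quick sanity check that the environment is wired up correctly, you can run the interpreter and pip through those paths (from your home folder):
~/mozilla-tts/bin/python --version
~/mozilla-tts/bin/pip list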
For the best results when training a model, you will need:
- audio recordings of the speaker in WAV format, and
- a metadata.csv file that references each WAV file and indicates what text is spoken in the WAV file.
If your source of audio is in a different format than WAV, you will need to use a program like Audacity or SoX to convert the files into WAV format. You should also trim out portions of audio that are just noise, umms, ahs, and other sounds from the speaker that aren't really words you're training on.
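For a simple one-off format conversion, a SoX command along these lines does the job (recording.mp3 is a hypothetical input file; the output is 16-bit mono WAV):
sox recording.mp3 -c 1 -b 16 recording.wav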
If your source of audio isn't perfect (e.g. it has some background noise), is in a different format, or happens to have a higher sample rate or a different resolution (e.g. 24-bit, 32-bit, etc.), you can perform some clean-up and conversion. Here's a script that is based on an earlier script from the Mozilla TTS Discourse forums:
from pathlib import Path
import os
import subprocess
import soundfile as sf
import pyloudnorm as pyln
import sys
src = sys.argv[1]
rnn = "/PATH/TO/rnnoise_demo"
paths = Path(src).glob("**/*.wav")
for filepath in paths:
    target_filepath = Path(str(filepath).replace("original", "converted"))
    target_dir = os.path.dirname(target_filepath)

    if str(filepath) == str(target_filepath):
        raise ValueError("Source and target path are identical: " + str(target_filepath))

    print("From: " + str(filepath))
    print("To: " + str(target_filepath))

    # Stereo to mono; upsample to 48000 Hz
    subprocess.run(["sox", str(filepath), "48k.wav", "remix", "-", "rate", "48000"])
    # Convert WAV to headerless raw PCM, which is what rnnoise_demo expects
    subprocess.run(["sox", "48k.wav", "-c", "1", "-r", "48000", "-b", "16", "-e", "signed-integer", "-t", "raw", "temp.raw"])
    # Apply RNNoise noise suppression
    subprocess.run([rnn, "temp.raw", "rnn.raw"])
    # Convert raw back to WAV
    subprocess.run(["sox", "-r", "48k", "-b", "16", "-e", "signed-integer", "rnn.raw", "-t", "wav", "rnn.wav"])

    subprocess.run(["mkdir", "-p", str(target_dir)])
    # Apply high-pass/low-pass filters and resample to 22050 Hz
    subprocess.run(["sox", "rnn.wav", str(target_filepath), "remix", "-", "highpass", "100", "lowpass", "7000", "rate", "22050"])

    data, rate = sf.read(target_filepath)

    # Peak normalize audio to -1 dB
    peak_normalized_audio = pyln.normalize.peak(data, -1.0)

    # Measure the loudness first
    meter = pyln.Meter(rate)  # create BS.1770 meter
    loudness = meter.integrated_loudness(data)

    # Loudness normalize audio to -25 dB LUFS
    loudness_normalized_audio = pyln.normalize.loudness(data, loudness, -25.0)

    sf.write(target_filepath, data=loudness_normalized_audio, samplerate=22050)

    print("")
To use the script above, you will need to check out and build the RNNoise project:
sudo apt update
sudo apt-get install build-essential autoconf automake gdb git libffi-dev zlib1g-dev libssl-dev
git clone https://github.com/xiph/rnnoise.git
cd rnnoise
./autogen.sh
./configure
make
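After make completes, the denoiser binary should be at examples/rnnoise_demo inside the rnnoise folder; a quick check:
ls -l examples/rnnoise_demo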
You will also need SoX installed:
sudo apt install sox
And, you will need to install pyloudnorm via ./bin/pip.
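The script also imports soundfile, so I install both from within the mozilla-tts folder (soundfile may already be present as a Mozilla TTS dependency):
./bin/pip install pyloudnorm soundfile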
Then, customize the script so that rnn points to the path of the rnnoise_demo command (after building RNNoise, you can find it in the examples folder). Next, run the script, passing the source path -- the folder where you have your WAV files -- as the first command-line argument. Make sure that the word "original" appears somewhere in the path. The script will automatically place the converted files in a corresponding path, with original changed to converted; for example, if your source path is /path/to/files/original, the script will place the converted results in /path/to/files/converted.
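For example, if you saved the script as cleanup.py (the filename is up to you), the invocation from the mozilla-tts folder would look like this:
./bin/python cleanup.py /path/to/files/original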
Mozilla TTS supports several different data loaders, but one of the most common is LJSpeech. To use it, we can organize our data set to follow LJSpeech conventions.
First, organize your files so that you have a structure like this:
- metadata.csv
- wavs/
- audio1.wav
- audio2.wav
...
- last_audio.wav
The naming of the audio files doesn't appear to be significant. But the files must be in a folder called wavs. You can use sub-folders inside wavs, though, if so desired.
The metadata.csv file should be in the following format:
audio1|line that's spoken in the first file
audio2|line that's spoken in the second file
last_audio|line that's spoken in the last file
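If you already have a transcript for each clip, a small script can assemble metadata.csv for you. Here's a sketch that assumes a hypothetical transcripts.txt file with one "filename<TAB>spoken text" line per clip (not part of Mozilla TTS; adapt it to wherever your transcripts live):
# Sketch: build metadata.csv from a hypothetical tab-separated transcripts.txt
with open("transcripts.txt", encoding="utf-8") as src, \
     open("metadata.csv", "w", encoding="utf-8") as dst:
    for line in src:
        name, text = line.rstrip("\n").split("\t", 1)
        if name.endswith(".wav"):
            name = name[:-4]  # metadata.csv omits the .wav suffix
        dst.write(name + "|" + text + "\n")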
Note that each audio file is referenced without the wavs/ folder prefix and without the .wav suffix.
(I did observe that the steps in the documentation for Mozilla TTS have you then shuffle the metadata file and split it into a "training" set (metadata_train.csv) and a "validation" set (metadata_val.csv), but none of the sample configs provided in the repo are actually configured to use these files. I've filed an issue about that because it's confusing/counter-intuitive to a beginner.)
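For reference, the shuffle-and-split step the documentation describes can be done with standard shell tools; holding out 50 lines for validation here is an arbitrary choice on my part:
shuf metadata.csv > metadata_shuffled.csv
head -n -50 metadata_shuffled.csv > metadata_train.csv
tail -n 50 metadata_shuffled.csv > metadata_val.csv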
config.json File
You need to prepare a configuration file that describes how your custom TTS will be configured. This file is used by multiple parts of Mozilla TTS when preparing for training, performing training, and generating audio from your custom TTS. Unfortunately, though this file is very important, the documentation for Mozilla TTS largely glosses over how to customize this file.
To start, create a copy of the default Tacotron config.json file from the Mozilla repo. Then, be sure to customize at least the audio.stats_path, output_path, phoneme_cache_path, and datasets.path settings.
You can customize other parameters if you so choose, but the defaults are a good place to start. For example, you can change the run_name to control the naming of folders containing your datasets.
Do not change the datasets.name parameter (leave it set to "ljspeech"); otherwise you'll get strange errors related to an undefined dataset type. It appears that the dataset name refers to the type of data loader used, rather than what you call your data set. Similarly, I haven't risked changing the model setting, since I don't yet know how that value gets used by the system.
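To make this concrete, here is a sketch of just the settings discussed above, with placeholder paths; it's abbreviated, so keep all the other entries from the default config you copied, and double-check the exact field layout against your copy:
{
  "run_name": "my-custom-voice",
  "audio": {
    "stats_path": "/path/to/your/project/scale_stats.npy"
  },
  "output_path": "/path/to/your/project/output/",
  "phoneme_cache_path": "/path/to/your/project/phoneme_cache/",
  "datasets": [
    {
      "name": "ljspeech",
      "path": "/path/to/your/project/",
      "meta_file_train": "metadata.csv",
      "meta_file_val": null
    }
  ]
}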
scale_stats.npy
Most of the training configurations rely on a statistics file called scale_stats.npy that's generated based on the training set. You can use the ./TTS/bin/compute_statistics.py script inside the Mozilla TTS repo to generate this file. This script requires your config.json file as an input, and is a good step to sanity-check that everything looks good up to this point.
Here's an example of a command you can run if you are inside the Mozilla TTS folder you created at the start of this tutorial (adjust paths to fit your project):
./bin/python ./TTS/bin/compute_statistics.py --config_path /path/to/your/project/config.json --out_path /path/to/your/project/scale_stats.npy
If successful, this will generate a scale_stats.npy file under /path/to/your/project/scale_stats.npy. Be sure that the path in the audio.stats_path setting of your config.json file matches this path.
It's now time for the moment of truth -- it's time to start training your model!
Here's an example of a command you can run to train a Tacotron model if you are inside the Mozilla TTS folder you created at the start of this tutorial (adjust paths to fit your project):
./bin/python ./TTS/bin/train_tacotron.py --config_path /path/to/your/project/config.json
This process will take several hours, if not days. If your machine supports CUDA and has it properly configured, the process will run more quickly than if you are just relying on CPU alone.
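PyTorch is installed as a Mozilla TTS dependency, so a quick way to check whether training will be able to use your GPU is:
./bin/python -c "import torch; print(torch.cuda.is_available())"
If this prints False, training will fall back to the CPU.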
If you get any errors related to a "signal error" or "signal received", this typically indicates that your machine does not have enough memory for the operation. You can run the training with less parallelism, but it will run much more slowly.
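Assuming your config.json exposes the same fields as the default Tacotron config, the parallelism-related settings look something like this; lowering them trades speed for memory:
"batch_size": 16,
"eval_batch_size": 8,
"num_loader_workers": 2,
"num_val_loader_workers": 2,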