r9y9 / deepvoice3_pytorch
PyTorch implementation of convolutional networks-based text-to-speech synthesis models:
1. arXiv:1710.07654: Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning.
2. arXiv:1710.08969: Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention.
Audio samples are available at https://r9y9.github.io/deepvoice3_pytorch/.
A notebook intended to be run on https://colab.research.google.com is also available:
NOTE: pretrained models are not compatible with master. To be updated soon.
| URL | Model | Data | Hyper parameters | Git commit | Steps |
|---|---|---|---|---|---|
| link | DeepVoice3 | LJSpeech | builder=deepvoice3,preset=deepvoice3_ljspeech | 4357976 | 210k ~ |
| link | Nyanko | LJSpeech | builder=nyanko,preset=nyanko_ljspeech | ba59dc7 | 585k |
| link | Multi-speaker DeepVoice3 | VCTK | builder=deepvoice3_multispeaker,preset=deepvoice3_vctk | 0421749 | 300k + 300k |
See "Synthesize from a checkpoint" section in the README for how to generate speech samples. Please make sure that you are on the specific git commit noted above.
See hparams.py for details. builder specifies which model you want to use: deepvoice3, deepvoice3_multispeaker [1], and nyanko [2] are supported.

Please install the packages listed above first, and then:
git clone https://github.com/r9y9/deepvoice3_pytorch && cd deepvoice3_pytorch
pip install -e ".[train]"
There are many hyper parameters to be tuned depending on what model and data you are working on. For typical datasets and models, parameters known to work well (presets) are provided in the repository. See the presets directory for details. Note that preprocess.py, train.py, and synthesis.py accept an optional --preset=<json> parameter, which specifies where to load preset parameters from. If you are going to use preset parameters, you must use the same --preset=<json> throughout preprocessing, training and evaluation, e.g.:
python preprocess.py --preset=presets/deepvoice3_ljspeech.json ljspeech ~/data/LJSpeech-1.0
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech
instead of
python preprocess.py ljspeech ~/data/LJSpeech-1.0
# warning! this may use hyper parameters different from those used at the preprocessing stage
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech
Usage:
python preprocess.py ${dataset_name} ${dataset_path} ${out_dir} --preset=<json>
Supported ${dataset_name}s are:

- ljspeech (en, single speaker)
- vctk (en, multi-speaker)
- jsut (jp, single speaker)
- nikl_m (ko, multi-speaker)
- nikl_s (ko, single speaker)

Assuming you use preset parameters known to work well for the LJSpeech dataset / DeepVoice3 and have the data in ~/data/LJSpeech-1.0, you can preprocess the data by:
python preprocess.py --preset=presets/deepvoice3_ljspeech.json ljspeech ~/data/LJSpeech-1.0/ ./data/ljspeech
When this is done, you will see extracted features (mel-spectrograms and linear spectrograms) in ./data/ljspeech.
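The same pattern applies to the other supported datasets; for example, for JSUT (paths are illustrative, and you would add --preset=<json> if a matching preset exists in the presets directory):
python preprocess.py jsut ~/data/jsut ./data/jsut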
Usage:
python train.py --data-root=${data-root} --preset=<json> --hparams="parameters you may want to override"
Suppose you are building a DeepVoice3-style model using the LJSpeech dataset; you can then train your model by:
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech/
Model checkpoints (.pth) and alignments (.png) are saved in the ./checkpoints directory every 10000 steps by default.
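Individual hyper parameters can also be overridden from the command line via --hparams; for example (assuming batch_size and checkpoint_interval are among the parameters defined in hparams.py, which may vary between versions):
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech/ \
    --hparams="batch_size=8,checkpoint_interval=5000"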
For the NIKL single-speaker (Korean) dataset, please check the dataset in advance and follow the commands below.
python preprocess.py nikl_s ${your_nikl_root_path} data/nikl_s --preset=presets/deepvoice3_nikls.json
python train.py --data-root=./data/nikl_s --checkpoint-dir checkpoint_nikl_s --preset=presets/deepvoice3_nikls.json
Logs are dumped in the ./log directory by default. You can monitor logs with TensorBoard:
tensorboard --logdir=log
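If you give each run its own event directory with --log-event-path (as in the multi-speaker examples below), you can compare runs in a single TensorBoard instance; the run name here is arbitrary:
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech \
    --log-event-path=log/deepvoice3_ljspeech_run1
tensorboard --logdir=log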
Given a list of texts, synthesis.py synthesizes audio signals from a trained model. Usage is:
python synthesis.py ${checkpoint_path} ${text_list.txt} ${output_dir} --preset=<json>
Example test_list.txt:
Generative adversarial network or variational auto-encoder.
Once upon a time there was a dear little girl who was loved by every one who looked at her, but most of all by her grandmother, and there was nothing that she would not have given to the child.
A text-to-speech synthesis system typically consists of multiple stages, such as a text analysis frontend, an acoustic model and an audio synthesis module.
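For instance, assuming the single-speaker LJSpeech model trained above saved its latest checkpoint under ./checkpoints (the exact filename pattern and step number below are hypothetical), you could synthesize all sentences in test_list.txt with:
python synthesis.py checkpoints/checkpoint_step000210000.pth test_list.txt generated/ \
    --preset=presets/deepvoice3_ljspeech.json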
VCTK and NIKL are the supported datasets for building a multi-speaker model.
Since some audio samples in VCTK have long silences that affect performance, it's recommended to do phoneme alignment and remove silences according to vctk_preprocess.
Once you have phoneme alignment for each utterance, you can extract features by:
python preprocess.py vctk ${your_vctk_root_path} ./data/vctk
Now that you have the data prepared, you can train a multi-speaker version of DeepVoice3 by:
python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
--preset=presets/deepvoice3_vctk.json \
--log-event-path=log/deepvoice3_multispeaker_vctk_preset
If you want to reuse a learned embedding from another dataset, you can do so by:
python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
--preset=presets/deepvoice3_vctk.json \
--log-event-path=log/deepvoice3_multispeaker_vctk_preset \
--load-embedding=20171213_deepvoice3_checkpoint_step000210000.pth
This may improve training speed a bit.
You will be able to obtain cleaned-up audio samples in ../nikl_preprocoess. Details can be found here.
Once the NIKL corpus is ready to use after preprocessing, you can extract features by:
python preprocess.py nikl_m ${your_nikl_root_path} data/nikl_m
Now that you have the data prepared, you can train a multi-speaker version of DeepVoice3 by:
python train.py --data-root=./data/nikl_m --checkpoint-dir checkpoint_nikl_m \
--preset=presets/deepvoice3_niklm.json
If you have very limited data, you can consider fine-tuning a pre-trained model. For example, using a model pre-trained on LJSpeech, you can adapt it to data from VCTK speaker p225 (30 mins) with the following command:
python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk_adaptation \
--preset=presets/deepvoice3_ljspeech.json \
--log-event-path=log/deepvoice3_vctk_adaptation \
--restore-parts="20171213_deepvoice3_checkpoint_step000210000.pth" \
--speaker-id=0
In my experience, this reaches reasonable speech quality much more quickly than training the model from scratch.
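For comparison with the adaptation command above, resuming the exact same model from its last saved step would instead use --checkpoint (the path below is hypothetical); the option descriptions that follow contrast it with --restore-parts:
python train.py --data-root=./data/ljspeech --preset=presets/deepvoice3_ljspeech.json \
    --checkpoint=checkpoints/checkpoint_step000100000.pth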
There are two important options used above:

- --restore-parts=<N>: Specifies where to load model parameters from. The differences from the --checkpoint=<N> option are: 1) --restore-parts=<N> ignores all invalid parameters, while --checkpoint=<N> doesn't; 2) --restore-parts=<N> tells the trainer to start from step 0, while --checkpoint=<N> tells the trainer to continue from the last step. --checkpoint=<N> is fine if you are using exactly the same model and continuing to train, but --restore-parts=<N> is useful if you want to customize your model architecture and still take advantage of a pre-trained model.
- --speaker-id=<N>: Specifies which speaker's data is used for training. This should only be specified if you are using a multi-speaker dataset. As for VCTK, speaker ids are automatically assigned incrementally (0, 1, ..., 107) according to the speaker_info.txt in the dataset.

Part of the code was adapted from the following projects: