Deepvoice3_pytorch
===================
|Build Status|
PyTorch implementation of convolutional networks-based text-to-speech
synthesis models:

1. `arXiv:1710.07654 <https://arxiv.org/abs/1710.07654>`__: Deep Voice
   3: 2000-Speaker Neural Text-to-Speech.
2. `arXiv:1710.08969 <https://arxiv.org/abs/1710.08969>`__: Efficiently
   Trainable Text-to-Speech System Based on Deep Convolutional Networks
   with Guided Attention.

Audio samples are available at
https://r9y9.github.io/deepvoice3_pytorch/.

Highlights
----------

- Convolutional sequence-to-sequence model with attention for
  text-to-speech synthesis
- Multi-speaker and single speaker versions of DeepVoice3
- Audio samples and pre-trained models
- Preprocessor for `LJSpeech (en) <https://keithito.com/LJ-Speech-Dataset/>`__,
  `JSUT (jp) <https://sites.google.com/site/shinnosuketakamichi/publication/jsut>`__
  and `VCTK <http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html>`__
  datasets
- Language-dependent frontend text processor for English and Japanese

Pretrained models
-----------------

+-----------------------------+--------------------------+----------+------------------------------------------------------------+------------+-------------+
| URL                         | Model                    | Data     | Hyper parameters                                           | Git commit | Steps       |
+=============================+==========================+==========+============================================================+============+=============+
| `DeepVoice3 checkpoint`_    | DeepVoice3               | LJSpeech | ``builder=deepvoice3,preset=deepvoice3_ljspeech``          | `4357976`_ | 210k ~      |
+-----------------------------+--------------------------+----------+------------------------------------------------------------+------------+-------------+
| `Nyanko checkpoint`_        | Nyanko                   | LJSpeech | ``builder=nyanko,preset=nyanko_ljspeech``                  | `ba59dc7`_ | 585k        |
+-----------------------------+--------------------------+----------+------------------------------------------------------------+------------+-------------+
| `Multi-speaker checkpoint`_ | Multi-speaker DeepVoice3 | VCTK     | ``builder=deepvoice3_multispeaker,preset=deepvoice3_vctk`` | `0421749`_ | 300k + 300k |
+-----------------------------+--------------------------+----------+------------------------------------------------------------+------------+-------------+

.. _DeepVoice3 checkpoint: https://www.dropbox.com/s/cs6d070ommy2lmh/20171213_deepvoice3_checkpoint_step000210000.pth?dl=0
.. _Nyanko checkpoint: https://www.dropbox.com/s/1y8bt6bnggbzzlp/20171129_nyanko_checkpoint_step000585000.pth?dl=0
.. _Multi-speaker checkpoint: https://www.dropbox.com/s/uzmtzgcedyu531k/20171222_deepvoice3_vctk108_checkpoint_step000300000.pth?dl=0
.. _4357976: https://github.com/r9y9/deepvoice3_pytorch/tree/43579764f35de6b8bac2b18b52a06e4e11b705b2
.. _ba59dc7: https://github.com/r9y9/deepvoice3_pytorch/tree/ba59dc75374ca3189281f6028201c15066830116
.. _0421749: https://github.com/r9y9/deepvoice3_pytorch/tree/0421749af908905d181f089f06956fddd0982d47

See "Synthesize from a checkpoint" section in the README for how to
generate speech samples. Please make sure that you are on the specific
git commit noted above.
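
For example, to try the single-speaker DeepVoice3/LJSpeech model, you could
download its checkpoint and pass it to ``synthesis.py`` roughly as follows.
This is only a sketch: ``?dl=1`` asks Dropbox for a direct download, and the
``synthesis.py`` argument order and the ``text_list.txt``/``outputs`` names
follow the "Synthesize from a checkpoint" section below rather than being
verified here.

::

    # Download the pre-trained single-speaker checkpoint (dl=1 requests a
    # direct download from Dropbox).
    wget "https://www.dropbox.com/s/cs6d070ommy2lmh/20171213_deepvoice3_checkpoint_step000210000.pth?dl=1" \
        -O 20171213_deepvoice3_checkpoint_step000210000.pth

    # Synthesize the sentences listed (one per line) in text_list.txt into ./outputs.
    python synthesis.py 20171213_deepvoice3_checkpoint_step000210000.pth text_list.txt outputs/
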
Notes on hyper parameters
-------------------------

- Default hyper parameters, used during the
  preprocessing/training/synthesis stages, are tuned for English TTS
  using the LJSpeech dataset. You will have to change some of the
  parameters if you want to try other datasets. See ``hparams.py`` for
  details.
- ``builder`` specifies which model you want to use: ``deepvoice3``,
  ``deepvoice3_multispeaker`` [1] and ``nyanko`` [2] are supported.
- ``preset`` represents hyper parameters known to work well for a
  particular dataset/model combination, based on my experiments. Before
  you try to find your own best parameters, I would recommend trying one
  of those presets by setting ``preset={preset_name}`` (e.g.,
  ``preset=deepvoice3_ljspeech``) in ``--hparams``, optionally together
  with any parameters you want to override (see the example after the
  basic training command below).

Suppose you want to build a DeepVoice3-style model using the LJSpeech
dataset with default hyper parameters; then you can train your model by:

::

    python train.py --data-root=./data/ljspeech/ --hparams="builder=deepvoice3,preset=deepvoice3_ljspeech"

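
If you want to override individual hyper parameters on top of a preset, you
can append comma-separated ``key=value`` pairs to ``--hparams``. The exact
parameter names live in ``hparams.py``; ``batch_size`` below is only an
illustrative example and should be checked against that file:

::

    python train.py --data-root=./data/ljspeech/ \
        --hparams="builder=deepvoice3,preset=deepvoice3_ljspeech,batch_size=8"
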
Model checkpoints (.pth) and alignments (.png) are saved in the
``./checkpoints`` directory every 5000 steps by default.
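
If training is interrupted, it should be possible to resume from one of those
checkpoints with the ``--checkpoint`` option described in the
speaker-adaptation notes below. The checkpoint file name here only
illustrates the assumed naming pattern:

::

    python train.py --data-root=./data/ljspeech/ \
        --hparams="builder=deepvoice3,preset=deepvoice3_ljspeech" \
        --checkpoint=checkpoints/checkpoint_step000050000.pth
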
If you are building a Japanese TTS model, then, for example:

::

    python train.py --data-root=./data/jsut --hparams="frontend=jp" --hparams="builder=deepvoice3,preset=deepvoice3_ljspeech"

``frontend=jp`` tells the training script to use the Japanese text
processing frontend. The default is ``en``, which uses the English text
processing frontend.
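
Note that, as with the other datasets, the JSUT corpus has to be
preprocessed into ``./data/jsut`` before running the command above. The
exact ``preprocess.py`` invocation is not shown in this excerpt, so the
following is an assumed sketch using a
``{dataset_name} {dataset_path} {out_dir}`` argument pattern:

::

    python preprocess.py jsut {your_jsut_root_path} ./data/jsut
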
Note that there are many hyper parameters and design choices. Some are
configurable via ``hparams.py`` and some are hardcoded in the source code
(e.g., the dilation factor for each convolution layer). If you find better
hyper parameters, please let me know!

4. Monitor with TensorBoard
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Logs are dumped in the ``./log`` directory by default. You can monitor
them with TensorBoard:

::

    tensorboard --logdir=log

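
If you redirect training events with ``--log-event-path`` (as in the
multi-speaker examples below), point TensorBoard at that directory instead:

::

    tensorboard --logdir=log/deepvoice3_multispeaker_vctk_preset
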
5. Synthesize from a checkpoint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Given a list of texts, ``synthesis.py`` synthesizes audio signals from a
trained model. Usage is:

::

    python synthesis.py {checkpoint_path} {text_list.txt} {output_dir}

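
The next example trains a multi-speaker model on VCTK, which first has to be
preprocessed into ``./data/vctk``. As with the JSUT sketch above, the exact
``preprocess.py`` invocation is not shown in this excerpt, so treat the
dataset-name argument as an assumption:

::

    python preprocess.py vctk {your_vctk_root_path} ./data/vctk
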
Now that you have the data prepared, you can train a multi-speaker
version of DeepVoice3 by:

::

    python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
        --hparams="preset=deepvoice3_vctk,builder=deepvoice3_multispeaker" \
        --log-event-path=log/deepvoice3_multispeaker_vctk_preset

If you want to reuse a learned embedding from another dataset, you can
do this instead:

::

    python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
        --hparams="preset=deepvoice3_vctk,builder=deepvoice3_multispeaker" \
        --log-event-path=log/deepvoice3_multispeaker_vctk_preset \
        --load-embedding=20171213_deepvoice3_checkpoint_step000210000.pth

This may improve training speed a bit.
Speaker adaptation
~~~~~~~~~~~~~~~~~~

If you have very limited data, you can consider fine-tuning a pre-trained
model. For example, using the model pre-trained on LJSpeech, you can adapt
it to data from VCTK speaker ``p225`` (about 30 minutes of audio) with the
following command:

::

    python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk_adaptation \
        --hparams="builder=deepvoice3,preset=deepvoice3_ljspeech" \
        --log-event-path=log/deepvoice3_vctk_adaptation \
        --restore-parts="20171213_deepvoice3_checkpoint_step000210000.pth" \
        --speaker-id=0

In my experience, this reaches reasonable speech quality much more
quickly than training the model from scratch.
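
Once the adaptation run has produced checkpoints, synthesis works the same
way as for any other model. Assuming checkpoints inside ``--checkpoint-dir``
follow the same ``checkpoint_step{N}.pth`` naming as the pre-trained files
above (an assumption, as are the ``text_list.txt`` and ``outputs_p225``
names), that would look like:

::

    python synthesis.py checkpoints_vctk_adaptation/checkpoint_step000250000.pth \
        text_list.txt outputs_p225/
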

There are two important options used in the adaptation command above:

- ``--restore-parts=<N>``: Specifies where to load model parameters
  from. The differences from the ``--checkpoint=<N>`` option are: 1)
  ``--restore-parts=<N>`` ignores all invalid (mismatched) parameters,
  while ``--checkpoint=<N>`` doesn't; 2) ``--restore-parts=<N>`` tells
  the trainer to start from step 0, while ``--checkpoint=<N>`` tells the
  trainer to continue from the last saved step. ``--checkpoint=<N>``
  should be fine if you are using exactly the same model and just want
  to continue training, whereas ``--restore-parts=<N>`` is useful if you
  want to customize your model architecture while still taking advantage
  of a pre-trained model (see the sketch after this list).
- ``--speaker-id=<N>``: Specifies which speaker's data is used for
  training. This should only be specified if you are using a
  multi-speaker dataset. For VCTK, speaker ids are automatically
  assigned incrementally (0, 1, ..., 107) according to
  ``speaker_info.txt`` in the dataset.
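
A minimal sketch contrasting the two options (the checkpoint file names are
assumed, following the naming pattern of the pre-trained models above):

::

    # Continue the exact same run from a saved checkpoint
    # (same model architecture, step count preserved).
    python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
        --hparams="preset=deepvoice3_vctk,builder=deepvoice3_multispeaker" \
        --checkpoint=checkpoints_vctk/checkpoint_step000030000.pth

    # Warm-start a possibly modified model from pre-trained weights;
    # mismatched parameters are ignored and the step count restarts from 0.
    python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk_warmstart \
        --hparams="preset=deepvoice3_vctk,builder=deepvoice3_multispeaker" \
        --restore-parts=checkpoints_vctk/checkpoint_step000030000.pth
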
Acknowledgements
----------------
Part of the code was adapted from the following projects:

- https://github.com/keithito/tacotron
- https://github.com/facebookresearch/fairseq-py

.. |Build Status| image:: https://travis-ci.org/r9y9/deepvoice3_pytorch.svg?branch=master
   :target: https://travis-ci.org/r9y9/deepvoice3_pytorch