# Training Parler-TTS

<a target="_blank" href="https://github.com/ylacombe/scripts_and_notebooks/blob/main/Finetuning_Parler_TTS_v1_on_a_single_speaker_dataset.ipynb"> 
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> 
</a>

**TL;DR:** After having followed the [installation steps](#requirements), you can reproduce the [Parler-TTS Mini v1](https://huggingface.co/parler-tts/parler-tts-mini-v1) training recipe with the following command line:

```sh
accelerate launch ./training/run_parler_tts_training.py ./helpers/training_configs/starting_point_v1.json
```

-------------

This sub-folder contains all the information to train or fine-tune your own Parler-TTS model. It consists of:
- [1. An introduction to the Parler-TTS architecture](#1-architecture)
- [2. First steps to get started](#2-getting-started)
- [3. Training guide](#3-training)

> [!IMPORTANT]
> You can also follow [this fine-tuning guide](https://github.com/ylacombe/scripts_and_notebooks/blob/main/Finetuning_Parler_TTS_v1_on_a_single_speaker_dataset.ipynb) on a mono-speaker dataset example.

## 1. Architecture

At the moment, the Parler-TTS architecture is almost a carbon copy of the [MusicGen architecture](https://huggingface.co/docs/transformers/v4.39.3/en/model_doc/musicgen#model-structure) and can be decomposed into three distinct stages:
1. Text encoder: maps the text descriptions to a sequence of hidden-state representations. Parler-TTS uses a frozen text encoder initialised entirely from Flan-T5.
2. Parler-TTS decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations.
3. Audio codec: used to recover the audio waveform from the audio tokens predicted by the decoder. We use the [DAC model](https://github.com/descriptinc/descript-audio-codec) from Descript, although other codec models, such as [EnCodec](https://huggingface.co/facebook/encodec_48khz), can also be used.

Parler-TTS, however, introduces some small tweaks (illustrated in the short inference sketch after this list):
- The text **description** is passed through the text encoder and used in the cross-attention layers of the decoder.
- The text **prompt** is simply passed through an embedding layer and concatenated to the decoder input hidden states.
- The audio encoder used is [**DAC**](https://descript.notion.site/Descript-Audio-Codec-11389fce0ce2419891d6591a68f814d5) instead of [Encodec](https://github.com/facebookresearch/encodec), as it exhibits better quality.
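
To make the description/prompt split concrete, here is a minimal inference sketch, adapted from the Parler-TTS Mini v1 model card (the description text and output filename below are just placeholders):

```python
import torch
import soundfile as sf
from transformers import AutoTokenizer
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")

description = "A female speaker delivers her words expressively, in a very clear recording."  # placeholder
prompt = "Hey, how are you doing today?"

# The description is encoded by the (frozen) Flan-T5 text encoder and attended to via
# cross-attention; the prompt is embedded and concatenated to the decoder input hidden states.
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# The decoder predicts DAC audio tokens, which the audio codec turns back into a waveform.
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
sf.write("parler_tts_out.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```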


## 2. Getting started

To get started, you need to follow a few steps:
1. Install the requirements.
2. Find or initialize the model you'll train.
3. Find and/or annotate the dataset you'll train your model on.

### Requirements

The Parler-TTS code is written in [PyTorch](https://pytorch.org) and [Accelerate](https://huggingface.co/docs/accelerate/index). It relies on a few additional packages, such as [wandb](https://wandb.ai/), notably for logging and evaluation.

To install the package for training, you need to clone the repository from source...

```bash
git clone https://github.com/huggingface/parler-tts.git
cd parler-tts
```

... And then install the requirements:

```bash
pip install -e .[train]
```

Optionally, you can create a wandb account and log in to it by following [this guide](https://docs.wandb.ai/quickstart). [`wandb`](https://docs.wandb.ai/) allows for better tracking of the experiment metrics and losses.
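
Once your account is set up, you can authenticate either with the `wandb login` CLI command or directly from Python, as in this minimal sketch:

```python
import wandb

# Prompts for (or reuses) your W&B API key so that training metrics can be logged.
wandb.login()
```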

You also have the option to configure Accelerate by running the following command. Note that you should set the number of GPUs you wish to use for training, and also the data type (dtype) to your preferred dtype for training/inference (e.g. `bfloat16` on A100 GPUs, `float16` on V100 GPUs, etc.):

```bash
accelerate config
```

Lastly, you can link your Hugging Face account so that you can push model repositories to the Hub. This will allow you to save your trained models on the Hub and share them with the community. Run the command:

```bash
git config --global credential.helper store
huggingface-cli login
```
And then enter an authentication token from https://huggingface.co/settings/tokens. Create a new token if you do not have one already. You should make sure that this token has "write" privileges.

### Initialize a model from scratch or use a pre-trained one

Depending on your compute resources and your dataset, you need to choose between fine-tuning a pre-trained model and training a new model from scratch.

To that end, we released an 880M-parameter checkpoint trained on 45K hours of annotated data under the repository id [`parler-tts/parler-tts-mini-v1`](https://huggingface.co/parler-tts/parler-tts-mini-v1), which you can fine-tune for your own use case.

You can also train your own model from scratch. You can find examples of how to initialize a model from scratch in [/helpers/model_init_scripts/](/helpers/model_init_scripts/). For instance, you can initialize a dummy model with:

```sh
python helpers/model_init_scripts/init_dummy_model.py ./parler-tts-untrained-dummy --text_model "google-t5/t5-small" --audio_model "parler-tts/dac_44khZ_8kbps"
```

In the rest of this guide, and to reproduce the Parler-TTS Mini v1 training recipe, we'll use an 880M-parameter model that we'll initialize with:

```sh
python helpers/model_init_scripts/init_model_600M.py ./parler-tts-untrained-600M --text_model "google/flan-t5-large" --audio_model "parler-tts/dac_44khZ_8kbps"
```


### Create or find datasets

To train your own Parler-TTS, you need datasets with 3 main features:
- speech data
- text transcription of the speech data
- conditioning text description, which you can create using [Data-Speech](https://github.com/huggingface/dataspeech), a library that lets you annotate speaker and utterance characteristics with natural-language descriptions.

Note that we chose to use descriptions of the main speech characteristics (speaker pitch, speaking rate, level of noise, etc.), but you are free to use any handmade or generated text description that makes sense.

To train Parler-TTS Mini v1, we used:
* A [filtered version](https://huggingface.co/datasets/parler-tts/libritts_r_filtered) of the [LibriTTS-R dataset](https://huggingface.co/datasets/blabble-io/libritts_r), a 1K-hour high-quality speech dataset.
* The [English subset](https://huggingface.co/datasets/parler-tts/mls_eng) of [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech).

Both datasets have been annotated using the [Data-Speech](https://github.com/huggingface/dataspeech) recipe, respectively [here](https://huggingface.co/datasets/parler-tts/libritts-r-filtered-speaker-descriptions) and [here](https://huggingface.co/datasets/parler-tts/mls-eng-speaker-descriptions).
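
As a quick sanity check, you can stream a single example from one of these speech/annotation pairs. This is a hedged sketch: it assumes the annotation dataset mirrors the `clean` config and `train.clean.100` split of the speech dataset, and that the description column is named `text_description`, as in the training command of the next section:

```python
from datasets import load_dataset

# Stream one example from the speech dataset and from its Data-Speech annotations.
speech = load_dataset(
    "parler-tts/libritts_r_filtered", "clean", split="train.clean.100", streaming=True
)
annotations = load_dataset(
    "parler-tts/libritts-r-filtered-speaker-descriptions", "clean", split="train.clean.100", streaming=True
)

speech_sample = next(iter(speech))
annotation_sample = next(iter(annotations))

print(speech_sample.keys())                   # audio + transcription columns
print(annotation_sample["text_description"])  # natural-language speaker/utterance description
```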


## 3. Training

The script [`run_parler_tts_training.py`](/training/run_parler_tts_training.py) is an end-to-end script that:
1. loads the dataset(s) and merges them with the annotation dataset(s) if necessary,
2. pre-computes the audio tokens,
3. trains Parler-TTS.

To train Parler-TTS Mini v1, we roughly used:

```sh
accelerate launch ./training/run_parler_tts_training.py \
    --model_name_or_path "./parler-tts-untrained-600M/parler-tts-untrained-600M/" \
    --feature_extractor_name "parler-tts/dac_44khZ_8kbps" \
    --description_tokenizer_name "google/flan-t5-large" \
    --prompt_tokenizer_name "google/flan-t5-large" \
    --report_to "wandb" \
    --overwrite_output_dir true \
    --train_dataset_name "parler-tts/libritts_r_filtered+parler-tts/libritts_r_filtered+parler-tts/libritts_r_filtered+parler-tts/mls_eng" \
    --train_metadata_dataset_name "parler-tts/libritts-r-filtered-speaker-descriptions+parler-tts/libritts-r-filtered-speaker-descriptions+parler-tts/libritts-r-filtered-speaker-descriptions+parler-tts/mls-eng-speaker-descriptions" \
    --train_dataset_config_name "clean+clean+other+default" \
    --train_split_name "train.clean.360+train.clean.100+train.other.500+train" \
    --eval_dataset_name "parler-tts/libritts_r_filtered+parler-tts/mls_eng" \
    --eval_metadata_dataset_name "parler-tts/libritts-r-filtered-speaker-descriptions+parler-tts/mls-eng-speaker-descriptions" \
    --eval_dataset_config_name "other+default" \
    --eval_split_name "test.other+test" \
    --target_audio_column_name "audio" \
    --description_column_name "text_description" \
    --prompt_column_name "text" \
    --max_duration_in_seconds 30 \
    --min_duration_in_seconds 2.0 \
    --max_text_length 600 \
    --add_audio_samples_to_wandb true \
    --id_column_name "id" \
    --preprocessing_num_workers 8 \
    --do_train true \
    --num_train_epochs 4 \
    --gradient_accumulation_steps 6 \
    --gradient_checkpointing false \
    --per_device_train_batch_size 4 \
    --learning_rate 0.00095 \
    --adam_beta1 0.9 \
    --adam_beta2 0.99 \
    --weight_decay 0.01 \
    --lr_scheduler_type "constant_with_warmup" \
    --warmup_steps 20000 \
    --logging_steps 1000 \
    --freeze_text_encoder true \
    --do_eval true \
    --predict_with_generate true \
    --include_inputs_for_metrics true \
    --evaluation_strategy steps \
    --eval_steps 10000 \
    --save_steps 10000 \
    --per_device_eval_batch_size 4 \
    --audio_encoder_per_device_batch_size 24 \
    --dtype "bfloat16" \
    --seed 456 \
    --output_dir "./output_dir_training/" \
    --temporary_save_to_disk "./audio_code_tmp/" \
    --save_to_disk "./tmp_dataset_audio/" \
    --max_eval_samples 96 \
    --dataloader_num_workers 8 \
    --group_by_length true \
    --attn_implementation "sdpa"
```

In particular, note how multiple training datasets, metadata datasets, configurations and splits can be loaded by separating the dataset arguments with `+` symbols:
```sh
    "train_dataset_name": "parler-tts/libritts_r_filtered+parler-tts/libritts_r_filtered+parler-tts/libritts_r_filtered+parler-tts/mls_eng",
    "train_metadata_dataset_name": "parler-tts/libritts-r-filtered-speaker-descriptions+parler-tts/libritts-r-filtered-speaker-descriptions+parler-tts/libritts-r-filtered-speaker-descriptions+parler-tts/mls-eng-speaker-descriptions",
    "train_dataset_config_name": "clean+clean+other+default",
    "train_split_name": "train.clean.360+train.clean.100+train.other.500+train",
```


Additionally, you can also write a JSON config file. Here, [starting_point_v1.json](helpers/training_configs/starting_point_v1.json) contains exactly the same hyper-parameters as above and can be launched like this:
```sh
accelerate launch ./training/run_parler_tts_training.py ./helpers/training_configs/starting_point_v1.json
```
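
The config file is simply a JSON dictionary mapping the same argument names to values. As an illustrative excerpt (field names and values copied from the command line above; refer to the actual file for the complete set of arguments):

```json
{
  "model_name_or_path": "./parler-tts-untrained-600M/parler-tts-untrained-600M/",
  "feature_extractor_name": "parler-tts/dac_44khZ_8kbps",
  "description_tokenizer_name": "google/flan-t5-large",
  "prompt_tokenizer_name": "google/flan-t5-large",
  "report_to": "wandb",
  "learning_rate": 0.00095,
  "dtype": "bfloat16",
  "output_dir": "./output_dir_training/"
}
```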

Training logs will be reported to wandb, provided that you passed `--report_to "wandb"` to the arguments.

> [!TIP]
> Starting to train a new model from scratch can easily be overwhelming, so here's what training looked like for v1: [logs](https://api.wandb.ai/links/ylacombe/j7g8isjn)

Scaling to multiple GPUs using [distributed data parallelism (DDP)](https://pytorch.org/tutorials/beginner/ddp_series_theory.html) is trivial: simply run `accelerate config` and select the multi-GPU option, specifying the IDs of the GPUs you wish to use. The above script can then be run using DDP with no code changes. In our case, we used 4 nodes of 8 H100 80GB GPUs to train Parler-TTS Mini for around 1.5 days.


There are a few other noteworthy arguments:
1. `train_metadata_dataset_name` and `eval_metadata_dataset_name` specify, if necessary, the names of the dataset(s) that contain(s) the conditioning text descriptions. For example, this [dataset resulting from the Data-Speech annotation process](https://huggingface.co/datasets/parler-tts/libritts-r-filtered-speaker-descriptions) is saved without the audio column, as it's costly to write and push audio data, so it needs to be concatenated back to the original LibriTTS-R dataset.
2. As noted above, the script pre-computes audio tokens, since computing audio codes is costly and only needs to be done once (the audio encoder is frozen). `audio_encoder_per_device_batch_size` specifies the per-device batch size for this pre-processing step.
3. Additionally, when scaling up the training data and iterating on the hyper-parameters or the model architecture, we might want to avoid recomputing the audio tokens at each training run. That's why we introduced two additional parameters, `save_to_disk` and `temporary_save_to_disk`, which serve as temporary buffers for saving intermediary datasets. Note that the processed data consists of text and audio tokens, which are much more memory-efficient, so the additional space required is negligible.
4. `predict_with_generate` and `add_audio_samples_to_wandb` are required to store the generated audio samples and to compute WER and CLAP similarity.
5. `freeze_text_encoder`: freezes the text encoder to save compute resources.

And finally, two additional comments:
1. `lr_scheduler_type`: defines the learning rate schedule, one of `constant_with_warmup` or `cosine`. When experimenting with a training set-up or training for very few epochs, using `constant_with_warmup` is typically beneficial, since the learning rate remains high over the short training run. When performing longer training runs, a `cosine` schedule should give better results.
2. `dtype`: data type (dtype) in which the model computation should be performed. Note that this only controls the dtype of the computations (forward and backward pass), and not the dtype of the parameters or optimiser states.

> [!TIP]
> Fine-tuning is as easy as modifying `model_name_or_path` to a pre-trained model.
> For example: `--model_name_or_path parler-tts/parler-tts-mini-v1`.