# Parler-TTS

Parler-TTS is a lightweight text-to-speech (TTS) model that can generate high-quality, natural-sounding speech in the style of a given speaker (gender, pitch, speaking style, etc.). It is a reproduction of the work described in the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.

In contrast to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.

This repository contains the inference and training code for Parler-TTS. It is designed to accompany the [Data-Speech](https://github.com/huggingface/dataspeech) repository for dataset annotation.

> [!IMPORTANT]
> **08/08/2024:** We are proud to release two new Parler-TTS checkpoints:
> 1. [Parler-TTS Mini](https://huggingface.co/parler-tts/parler-tts-mini-v1), an 880M parameter model.
> 2. [Parler-TTS Large](https://huggingface.co/parler-tts/parler-tts-large-v1), a 2.3B parameter model.
>
> These checkpoints have been trained on 45k hours of audiobook data.
>
> In addition, the code is optimized for much faster generation: we've added SDPA and Flash Attention 2 compatibility, as well as the ability to compile the model.

## 📖 Quick Index
* [Installation](#installation)
* [Usage](#usage)
  - [🎲 Using a random voice](#-random-voice)
  - [🎯 Using a specific speaker](#-using-a-specific-speaker)
* [Training](#training)
* [Demo](https://huggingface.co/spaces/parler-tts/parler_tts)
* [Model weights and datasets](https://huggingface.co/parler-tts)
* [Optimizing inference](#-optimizing-inference-speed)

## Installation

Parler-TTS has lightweight dependencies and can be installed in one line:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```

Apple Silicon users will need to run a follow-up command to make use of the nightly PyTorch (2.4) build for bfloat16 support:

```sh
pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
```
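
Once the nightly build is installed, the model can be loaded in bfloat16 on the `mps` device. The following is a minimal sketch; the `torch_dtype` keyword and the `mps` availability check are standard transformers/PyTorch usage rather than anything specific to this repository:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration

# Prefer the Metal backend on Apple Silicon, falling back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

# bfloat16 support on "mps" is what requires the nightly PyTorch build above.
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-v1",
    torch_dtype=torch.bfloat16,
).to(device)
```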

## Usage

> [!TIP]
> You can directly try it out in an interactive demo [here](https://huggingface.co/spaces/parler-tts/parler_tts)!

Using Parler-TTS is as simple as "bonjour". If you haven't already, install the library once:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```

### 🎲 Random voice


**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

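# load the model and tokenizer from the Hugging Face Hub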
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")

prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."

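# tokenize the voice description and the transcript separately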
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

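# generate the audio and save it as a .wav file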
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```

### 🎯 Using a specific speaker

To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).

To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")

prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```

**Tips**:
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt, as illustrated below
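
For instance, two illustrative descriptions that combine these tips might look as follows (the exact wording is flexible; these are not fixed keywords):

```py
# Clean, close-up recording; gender, pitch and speaking rate set in the description.
description_clean = (
    "A male speaker with a low-pitched voice delivers his words quite fast, "
    "in very clear audio with no background noise."
)

# Deliberately degraded recording conditions.
description_noisy = (
    "A female speaker with a high-pitched voice speaks slowly, "
    "in very noisy audio with a lot of reverberation."
)
```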

### ✨ Optimizing Inference Speed

We've set up an [inference guide](INFERENCE.md) to make generation faster. Think SDPA, torch.compile and streaming!
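
As a quick taste, here is a minimal sketch of two of these optimizations. It assumes the standard transformers `attn_implementation` keyword; refer to the [inference guide](INFERENCE.md) for the complete, up-to-date recipes:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load with SDPA attention ("flash_attention_2" also works if flash-attn is installed).
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-v1",
    attn_implementation="sdpa",
).to(device)

# Optionally compile the forward pass: the first generation is slow while
# kernels compile, subsequent calls are much faster.
model.forward = torch.compile(model.forward, mode="default")
```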


https://github.com/huggingface/parler-tts/assets/52246514/251e2488-fe6e-42c1-81cd-814c5b7795b0

## Training

<a target="_blank" href="https://github.com/ylacombe/scripts_and_notebooks/blob/main/Finetuning_Parler_TTS_v1_on_a_single_speaker_dataset.ipynb"> 
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> 
</a>

The [training folder](/training/) contains all the information to train or fine-tune your own Parler-TTS model. It consists of:
- [1. An introduction to the Parler-TTS architecture](/training/README.md#1-architecture)
- [2. The first steps to get started](/training/README.md#2-getting-started)
- [3. A training guide](/training/README.md#3-training)

> [!IMPORTANT]
> **TL;DR:** After following the [installation steps](/training/README.md#requirements), you can reproduce the Parler-TTS Mini v1 training recipe with the following command line:

```sh
accelerate launch ./training/run_parler_tts_training.py ./helpers/training_configs/starting_point_v1.json
```

> [!IMPORTANT]
> You can also follow [this fine-tuning guide](https://github.com/ylacombe/scripts_and_notebooks/blob/main/Finetuning_Parler_TTS_v1_on_a_single_speaker_dataset.ipynb), which walks through fine-tuning on a single-speaker dataset.

## Acknowledgements

This library builds on top of a number of open-source giants, to whom we'd like to extend our warmest thanks for providing these tools!

Special thanks to:
- Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively, for publishing such a promising and clear research paper: [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://arxiv.org/abs/2402.01912).
- the many libraries used, namely [🤗 datasets](https://huggingface.co/docs/datasets/v2.17.0/en/index), [🤗 accelerate](https://huggingface.co/docs/accelerate/en/index), [jiwer](https://github.com/jitsi/jiwer), [wandb](https://wandb.ai/), and [🤗 transformers](https://huggingface.co/docs/transformers/index).
- Descript for the [DAC codec model](https://github.com/descriptinc/descript-audio-codec)
- Hugging Face 🤗 for providing compute resources and time to explore!


## Citation

If you found this repository useful, please consider citing this work and also the original Stability AI paper:

```
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```

```
@misc{lyth2024natural,
      title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
      author={Dan Lyth and Simon King},
      year={2024},
      eprint={2402.01912},
      archivePrefix={arXiv},
      primaryClass={cs.SD}
}
```

## Contribution

Contributions are welcome, as the project offers many possibilities for improvement and exploration.

Namely, we're looking at ways to improve both quality and speed:
- Datasets:
    - Train on more data
    - Add more features such as accents
- Training:
    - Add PEFT compatibility for LoRA fine-tuning.
    - Add the option to train without a description column.
    - Add notebook training.
    - Explore multilingual training.
    - Explore mono-speaker fine-tuning.
    - Explore more architectures.
- Optimization:
    - Compilation and static cache
    - Support for FA2 and SDPA
- Evaluation:
    - Add more evaluation metrics