# Parler-TTS

Parler-TTS is a lightweight text-to-speech (TTS) model that can generate high-quality, natural-sounding speech in the style of a given speaker (gender, pitch, speaking style, etc.). It is a reproduction of the work in the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.

Unlike other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code, and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.

This repository contains the inference and training code for Parler-TTS. It is designed to accompany the [Data-Speech](https://github.com/huggingface/dataspeech) repository for dataset annotation.

> [!IMPORTANT]
> We're proud to release [Parler-TTS Mini v0.1](https://huggingface.co/parler-tts/parler_tts_mini_v0.1), our first 600M parameter model, trained on 10.5K hours of audio data.
> In the coming weeks, we'll be working on scaling up to 50K hours of data, in preparation for the v1 model.

## 📖 Quick Index
* [Installation](#installation)
* [Usage](#usage)
* [Training](#training)
* [Demo](https://huggingface.co/spaces/parler-tts/parler_tts_mini)
* [Model weights and datasets](https://huggingface.co/parler-tts)

## Installation

Parler-TTS has lightweight dependencies and can be installed in one line:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
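
If you want to verify that the install succeeded, you can try importing the main model class used in the snippets below (a quick sanity check, not an official step):

```py
# Sanity check: this import should succeed after installation
from parler_tts import ParlerTTSForConditionalGeneration
```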

Apple Silicon users will need to run a follow-up command to make use of the nightly PyTorch (2.4) build for bfloat16 support:

```sh
pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
```

## Usage

> [!TIP]
> You can directly try it out in an interactive demo [here](https://huggingface.co/spaces/parler-tts/parler_tts_mini)!

Using Parler-TTS is as simple as "bonjour": just run the following inference snippet.

```py
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
import torch

device = "cpu"
if torch.cuda.is_available():
    device = "cuda:0"
if torch.backends.mps.is_available():
    device = "mps"
if torch.xpu.is_available():
    device = "xpu"
torch_dtype = torch.float16 if device != "cpu" else torch.float32

# Load the pre-trained checkpoint and its tokenizer
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler_tts_mini_v0.1").to(device, dtype=torch_dtype)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler_tts_mini_v0.1")

prompt = "Hey, how are you doing today?"
description = "A female speaker with a slightly low-pitched voice delivers her words quite expressively, in a very confined sounding environment with clear audio quality. She speaks very fast."

# Tokenize the voice description and the transcript to be spoken
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# Generate the audio; cast back to float32 before saving with soundfile
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids).to(torch.float32)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```

https://github.com/huggingface/parler-tts/assets/52246514/251e2488-fe6e-42c1-81cd-814c5b7795b0
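
The voice is controlled entirely through the free-form `description`. As a minimal sketch reusing `model`, `tokenizer`, `device` and `prompt` from the snippet above (the description wording here is just an illustration, not a canonical example):

```py
# Only the free-form description changes; the model follows it to alter the voice.
description = "A male speaker with a deep voice speaks slowly, in a large room with distant, echoey audio."

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids).to(torch.float32)
sf.write("parler_tts_out_deep_voice.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```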

## Training
<a target="_blank" href="https://colab.research.google.com/github/ylacombe/scripts_and_notebooks/blob/main/Finetuning_Parler_TTS_on_a_single_speaker_dataset.ipynb"> 
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> 
</a>

The [training folder](/training/) contains all the information to train or fine-tune your own Parler-TTS model. It consists of:
- [1. An introduction to the Parler-TTS architecture](/training/README.md#1-architecture)
- [2. The first steps to get started](/training/README.md#2-getting-started)
- [3. A training guide](/training/README.md#3-training)

> [!IMPORTANT]
> **TL;DR:** After following the [installation steps](/training/README.md#requirements), you can reproduce the Parler-TTS Mini v0.1 training recipe with the following command:

```sh
accelerate launch ./training/run_parler_tts_training.py ./helpers/training_configs/starting_point_0.01.json
```
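
`accelerate launch` reads your hardware setup (number of GPUs, mixed precision, etc.) from your 🤗 Accelerate configuration. If you haven't configured Accelerate yet, you can do so interactively first; this is standard Accelerate usage rather than a Parler-TTS-specific step:

```sh
accelerate config
```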

## Acknowledgements

This library builds on top of a number of open-source giants, to whom we'd like to extend our warmest thanks for providing these tools!

Special thanks to:
- Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively, for publishing such a promising and clear research paper: [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://arxiv.org/abs/2402.01912).
- The many libraries used, namely [🤗 datasets](https://huggingface.co/docs/datasets/v2.17.0/en/index), [🤗 accelerate](https://huggingface.co/docs/accelerate/en/index), [jiwer](https://github.com/jitsi/jiwer), [wandb](https://wandb.ai/), and [🤗 transformers](https://huggingface.co/docs/transformers/index).
- Descript for the [DAC codec model](https://github.com/descriptinc/descript-audio-codec)
- Hugging Face 🤗 for providing compute resources and time to explore!

## Citation

If you found this repository useful, please consider citing this work, as well as the original Stability AI paper:

```
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```

```
@misc{lyth2024natural,
  title = {Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
  author = {Dan Lyth and Simon King},
  year = {2024},
  eprint = {2402.01912},
  archivePrefix = {arXiv},
  primaryClass = {cs.SD}
}
```

## Contribution

Contributions are welcome, as the project offers many possibilities for improvement and exploration.

In particular, we're looking at ways to improve both quality and speed:
- Datasets:
    - Train on more data
    - Add more features such as accents
- Training:
    - Add PEFT compatibility for LoRA fine-tuning
    - Add the possibility to train without a description column
    - Add notebook training
    - Explore multilingual training
    - Explore mono-speaker fine-tuning
    - Explore more architectures
- Optimization:
    - Compilation and static cache
    - Support for FA2 and SDPA
- Evaluation:
    - Add more evaluation metrics