# Initially taken from Github's Python gitignore file
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# tests and logs
tests/fixtures/cached_*_text.txt
logs/
lightning_logs/
lang_code_data/
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# vscode
.vs
.vscode
# Pycharm
.idea
# TF code
tensorflow_code
# Models
proc_data
# examples
runs
/runs_old
/wandb
/examples/runs
/examples/**/*.args
/examples/rag/sweep
# data
/data
serialization_dir
# emacs
*.*~
debug.env
# vim
.*.swp
#ctags
tags
# pre-commit
.pre-commit*
# .lock
*.lock
# DS_Store (MacOS)
.DS_Store
# ruff
.ruff_cache
# our proj
/output/
/outputs/
/checkpoint/
/checkpoints/
exp
.gradio/
MIT License
Copyright (c) 2025 Microsoft
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# VibeVoice
## Paper
`VibeVoice Technical Report`
- https://arxiv.org/abs/2508.19205
## Model Architecture
Traditional text-to-speech systems have long faced several stubborn challenges: limited generation length, weak multi-speaker support, and timbre drift and semantic discontinuity in long recordings. VibeVoice-1.5B effectively addresses these pain points.
- Ultra-long speech generation: most earlier TTS models could only synthesize speech of up to 60 minutes, and audio quality typically degraded after 30 minutes. VibeVoice-1.5B can generate 90 minutes of high-quality speech in a single pass, opening new possibilities for long-form content such as audiobooks and podcasts.
- Multi-speaker support: the model can simulate natural turn-taking among up to 4 distinct speakers, well beyond the 2-speaker limit of earlier open-source models (e.g., SesameAILabs-CSM, HiggsAudio-V2).
- Exceptional compression efficiency: the model achieves a cumulative 3200x compression of 24 kHz raw audio, roughly 80x more efficient than the mainstream Encodec model, while still preserving high-fidelity speech.
<div align=center>
<img src="./Figures/arch.png"/>
</div>
## Algorithm
The innovations of VibeVoice-1.5B come from combining several cutting-edge techniques:
- Dual tokenizers working in tandem: the model pioneers a dual-tokenizer architecture with an Acoustic and a Semantic tokenizer.
  - The acoustic tokenizer uses a σ-VAE structure; it preserves voice characteristics while compressing 24 kHz raw audio by a factor of 3200 (see the arithmetic sketch below).
  - The semantic tokenizer is trained with a speech-recognition proxy task so that the semantics of the dialogue are preserved, resolving the traditional mismatch between timbre and semantics.
- Strong base model: the system builds on the 1.5B-parameter Qwen2.5 language model, enabling it to understand and process complex textual context.
- Diffusion decoder: on the decoding side, the model uses a 123M-parameter diffusion head, combined with classifier-free guidance and the DPM-Solver algorithm, significantly improving audio quality and detail.
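As a rough illustration of the figures above (not code from this project), the 3200x ratio follows directly from running the continuous tokenizers at a 7.5 Hz latent frame rate over 24 kHz audio, as stated in the upstream README:

```python
# Illustrative arithmetic only: how many 24 kHz samples each 7.5 Hz latent frame covers.
sample_rate_hz = 24_000
frame_rate_hz = 7.5
compression_ratio = sample_rate_hz / frame_rate_hz
print(f"{compression_ratio:.0f}x compression per latent frame")  # 3200x
```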
## Environment Setup
### Hardware Requirements
DCU model: K100_AI; nodes: 1; cards: 1.
Adjust the `-v` mount path, `{docker_name}`, and `{imageID}` below to match your environment.
### Docker (Method 1)
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.4.1-ubuntu22.04-dtk25.04.1-py3.10
docker run -it --shm-size 200g --network=host --name {docker_name} --privileged --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro {imageID} bash
cd /your_code_path/VibeVoice_pytorch
pip install -e .
pip install peft==0.17.0
apt update && apt install ffmpeg -y
```
### Dockerfile (Method 2)
Usage of the provided Dockerfile:
```
docker build --no-cache -t vibevoice:latest .
docker run -it --shm-size 200g --network=host --name {docker_name} --privileged --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro vibevoice:latest bash
cd /your_code_path/VibeVoice_pytorch
pip install -e .
pip install peft==0.17.0
apt update && apt install ffmpeg -y
```
### Anaconda (Method 3)
This section provides the steps for local setup and compilation.
The DCU-specific deep learning libraries required by this project can be downloaded and installed from the [光合](https://developer.sourcefind.cn/tool/) developer community.
```
DTK driver: dtk25.04.1
python: python3.10
torch: 2.4.1+das.opt1.dtk25041
```
`Tips: the DTK driver, python, torch, and other DCU-related tool versions above must match one another exactly.`
Other (non deep-learning) dependencies can be installed as follows:
```
cd /your_code_path/VibeVoice_pytorch
pip install -e .
pip install numpy accelerate peft==0.17.0
apt update && apt install ffmpeg -y
```
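To quickly confirm that the environment matches the versions above, a minimal sanity check (a sketch only; it assumes the DAS torch build string listed above and that DCU devices are exposed through the `torch.cuda` API under DTK):

```python
# Minimal environment check (sketch; assumes the DAS/DTK torch build listed above).
import torch

print(torch.__version__)  # expected to look like "2.4.1+das.opt1.dtk25041"
# Under DTK, DCU devices are typically visible through the CUDA-compatible API:
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
else:
    print("no DCU/GPU device visible")
```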
## Dataset
Not available yet.
## Training
Not available yet.
## Inference
- Usage 1: Launch Gradio demo
```
# If you cannot reach the public internet, set an HF mirror first: export HF_ENDPOINT=https://hf-mirror.com
# For 1.5B model
python demo/gradio_demo.py --model_path microsoft/VibeVoice-1.5B --share
# For Large model
python demo/gradio_demo.py --model_path microsoft/VibeVoice-Large --share
```
- Usage 2: Inference from files directly
```
# We provide some LLM generated example scripts under demo/text_examples/ for demo
# 1 speaker
python demo/inference_from_file.py --model_path microsoft/VibeVoice-Large --txt_path demo/text_examples/1p_abs.txt --speaker_names Alice
# or more speakers
python demo/inference_from_file.py --model_path microsoft/VibeVoice-Large --txt_path demo/text_examples/2p_music.txt --speaker_names Alice Frank
```
## Results
- Gradio demo
<div align=center>
<img src="./Figures/results.png"/>
</div>
- txt_path demo/text_examples/1p_abs.txt
![1p_abs_generated.wav](./Figures/1p_abs_generated.wav)
### Accuracy
DCU results are consistent with GPU accuracy; inference framework: PyTorch.
## Application Scenarios
### Algorithm Category
`Speech synthesis`
### Key Application Industries
`Broadcast media, film and television, animation, healthcare, smart home, education`
## Pretrained Weights
| Model | Context Length | Generation Length | Weight |
|-------|----------------|----------|----------|
| VibeVoice-1.5B | 64K | ~90 min | [HF link](https://huggingface.co/microsoft/VibeVoice-1.5B) |
| VibeVoice-Large| 32K | ~45 min | [HF link](https://huggingface.co/microsoft/VibeVoice-Large) |
## Source Repository and Issue Feedback
- https://developer.sourcefind.cn/codes/modelzoo/vibevoice_pytorch
## References
- https://github.com/microsoft/VibeVoice
<div align="center">
## 🎙️ VibeVoice: A Frontier Long Conversational Text-to-Speech Model
[![Project Page](https://img.shields.io/badge/Project-Page-blue?logo=microsoft)](https://microsoft.github.io/VibeVoice)
[![Hugging Face](https://img.shields.io/badge/HuggingFace-Collection-orange?logo=huggingface)](https://huggingface.co/collections/microsoft/vibevoice-68a2ef24a875c44be47b034f)
[![Technical Report](https://img.shields.io/badge/Technical-Report-red?logo=adobeacrobatreader)](https://arxiv.org/pdf/2508.19205)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb)
[![Live Playground](https://img.shields.io/badge/Live-Playground-green?logo=gradio)](https://aka.ms/VibeVoice-Demo)
[![Colab](https://img.shields.io/badge/Run-Colab-orange?logo=googlecolab)](https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb)
</div>
<!-- <div align="center">
<img src="Figures/log.png" alt="VibeVoice Logo" width="200">
</div> -->
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="Figures/VibeVoice_logo_white.png">
<img src="Figures/VibeVoice_logo.png" alt="VibeVoice Logo" width="300">
</picture>
</div>
VibeVoice is a novel framework designed for generating **expressive**, **long-form**, **multi-speaker** conversational audio, such as podcasts, from text. It addresses significant challenges in traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking.
A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a [next-token diffusion](https://arxiv.org/abs/2412.08635) framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.
The model can synthesize speech up to **90 minutes** long with up to **4 distinct speakers**, surpassing the typical 1-2 speaker limits of many prior models.
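As a back-of-the-envelope check (our own estimate, not an official token accounting), the 7.5 Hz frame rate explains how a 90-minute generation fits within the 64K context of VibeVoice-1.5B:

```python
# Rough budget estimate (illustrative only): latent acoustic frames for 90 minutes at 7.5 Hz.
frame_rate_hz = 7.5
minutes = 90
acoustic_frames = int(frame_rate_hz * 60 * minutes)
print(acoustic_frames)              # 40500 frames
print(acoustic_frames < 64 * 1024)  # True: leaves headroom for the text script and voice prompts
```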
<p align="left">
<img src="Figures/MOS-preference.png" alt="MOS Preference Results" height="260px">
<img src="Figures/VibeVoice.jpg" alt="VibeVoice Overview" height="250px" style="margin-right: 10px;">
</p>
### 🔥 News
- **[2025-08-26] 🎉 We Open Source the [VibeVoice-Large](https://huggingface.co/microsoft/VibeVoice-Large) model weights!**
- **[2025-08-28] 🎉 We provide a [Colab](https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb) script for easy access to our model. Due to GPU memory limitations, only VibeVoice-1.5B is supported.**
### 📋 TODO
- [ ] Merge models into official Hugging Face repository ([PR](https://github.com/huggingface/transformers/pull/40546))
- [ ] Release example training code and documentation
- [ ] VibePod: End-to-end solution that creates podcasts from documents, webpages, or even a simple topic.
### 🎵 Demo Examples
**Video Demo**
We produced this video with [Wan2.2](https://github.com/Wan-Video/Wan2.2). We sincerely appreciate the Wan-Video team for their great work.
**English**
<div align="center">
https://github.com/user-attachments/assets/0967027c-141e-4909-bec8-091558b1b784
</div>
**Chinese**
<div align="center">
https://github.com/user-attachments/assets/322280b7-3093-4c67-86e3-10be4746c88f
</div>
**Cross-Lingual**
<div align="center">
https://github.com/user-attachments/assets/838d8ad9-a201-4dde-bb45-8cd3f59ce722
</div>
**Spontaneous Singing**
<div align="center">
https://github.com/user-attachments/assets/6f27a8a5-0c60-4f57-87f3-7dea2e11c730
</div>
**Long Conversation with 4 people**
<div align="center">
https://github.com/user-attachments/assets/a357c4b6-9768-495c-a576-1618f6275727
</div>
For more examples, see the [Project Page](https://microsoft.github.io/VibeVoice).
Try it on [Colab](https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb) or [Demo](https://aka.ms/VibeVoice-Demo).
## Models
| Model | Context Length | Generation Length | Weight |
|-------|----------------|----------|----------|
| VibeVoice-0.5B-Streaming | - | - | On the way |
| VibeVoice-1.5B | 64K | ~90 min | [HF link](https://huggingface.co/microsoft/VibeVoice-1.5B) |
| VibeVoice-Large| 32K | ~45 min | [HF link](https://huggingface.co/microsoft/VibeVoice-Large) |
## Installation
We recommend using the NVIDIA Deep Learning Container to manage the CUDA environment.
1. Launch docker
```bash
# NVIDIA PyTorch Container 24.07 / 24.10 / 24.12 verified.
# Later versions are also compatible.
sudo docker run --privileged --net=host --ipc=host --ulimit memlock=-1:-1 --ulimit stack=-1:-1 --gpus all --rm -it nvcr.io/nvidia/pytorch:24.07-py3
## If flash attention is not included in your docker environment, you need to install it manually
## Refer to https://github.com/Dao-AILab/flash-attention for installation instructions
# pip install flash-attn --no-build-isolation
```
2. Install from GitHub
```bash
git clone https://github.com/microsoft/VibeVoice.git
cd VibeVoice/
pip install -e .
```
## Usage
### 🚨 Tips
We have observed that users may encounter occasional instability when synthesizing Chinese speech. We recommend:
- Using English punctuation even for Chinese text, preferably only commas and periods.
- Using the Large model variant, which is considerably more stable.
- If the generated voice speaks too fast, try chunking your text into multiple speaker turns that reuse the same speaker label, as sketched below.
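A small helper sketch (our own illustration, not part of this repo; the function name and the 300-character threshold are arbitrary) that splits a long single-speaker passage into several turns with the same speaker label:

```python
import re

# Hypothetical helper: break a long passage into shorter turns that reuse the
# same speaker label, which tends to slow down overly fast speech.
def chunk_into_turns(text: str, speaker: str = "Speaker 1", max_chars: int = 300) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    turns, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            turns.append(f"{speaker}: {current.strip()}")
            current = ""
        current += sentence + " "
    if current.strip():
        turns.append(f"{speaker}: {current.strip()}")
    return "\n".join(turns)

print(chunk_into_turns("First sentence. Second sentence. A third, slightly longer one."))
```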
We'd like to thank [PsiPi](https://huggingface.co/PsiPi) for sharing an interesting approach to emotion control. Details can be found in [discussion 12](https://huggingface.co/microsoft/VibeVoice-1.5B/discussions/12).
### Usage 1: Launch Gradio demo
```bash
apt update && apt install ffmpeg -y # for demo
# For 1.5B model
python demo/gradio_demo.py --model_path microsoft/VibeVoice-1.5B --share
# For Large model
python demo/gradio_demo.py --model_path microsoft/VibeVoice-Large --share
```
### Usage 2: Inference from files directly
```bash
# We provide some LLM generated example scripts under demo/text_examples/ for demo
# 1 speaker
python demo/inference_from_file.py --model_path microsoft/VibeVoice-Large --txt_path demo/text_examples/1p_abs.txt --speaker_names Alice
# or more speakers
python demo/inference_from_file.py --model_path microsoft/VibeVoice-Large --txt_path demo/text_examples/2p_music.txt --speaker_names Alice Frank
```
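The `--txt_path` file is expected to use the `Speaker N:` per-line convention (the same format used in the Colab quickstart), for example:

```
Speaker 1: Can I try VibeVoice with my own example?
Speaker 2: Of course! VibeVoice is open-source, built to benefit everyone - you're welcome to try it out.
```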
## FAQ
#### Q1: Is this a pretrained model?
**A:** Yes, it's a pretrained model without any post-training or benchmark-specific optimizations. In a way, this makes VibeVoice very versatile and fun to use.
#### Q2: Randomly triggered sounds / music / BGM.
**A:** As you can see from our demo page, the background music or sounds are spontaneous. This means we can't directly control whether they are generated or not. The model is content-aware, and these sounds are triggered based on the input text and the chosen voice prompt.
Here are a few things we've noticed:
* If the voice prompt you use contains background music, the generated speech is more likely to have it as well. (The Large model is quite stable and effective at this—give it a try on the demo!)
* If the voice prompt is clean (no BGM), but the input text includes introductory words or phrases like "Welcome to," "Hello," or "However," background music might still appear.
* It is also speaker-dependent: using "Alice" used to trigger random BGM more often than other voices (now fixed).
* In other scenarios, the Large model is more stable and has a lower probability of generating unexpected background music.
In fact, we intentionally decided not to denoise our training data because we think it's an interesting feature for BGM to show up at just the right moment. You can think of it as a little easter egg we left for you.
#### Q3: Text normalization?
**A:** We don't perform any text normalization during training or inference. Our philosophy is that a large language model should be able to handle complex user inputs on its own. However, due to the nature of the training data, you might still run into some corner cases.
#### Q4: Singing Capability.
**A:** Our training data **doesn't contain any music data**. The ability to sing is an emergent capability of the model (which is why it might sound off-key, even on a famous song like 'See You Again'). (The Large model is more likely to exhibit this than the 1.5B).
#### Q5: Some Chinese pronunciation errors.
**A:** The volume of Chinese data in our training set is significantly smaller than the English data. Additionally, certain special characters (e.g., Chinese quotation marks) may occasionally cause pronunciation issues.
#### Q6: Instability of cross-lingual transfer.
**A:** The model does exhibit strong cross-lingual transfer capabilities, including the preservation of accents, but its performance can be unstable. This is an emergent ability of the model that we have not specifically optimized. It's possible that a satisfactory result can be achieved through repeated sampling.
## Risks and limitations
While efforts have been made to optimize the model through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model (specifically, Qwen2.5 1.5b in this release).
Potential for Deepfakes and Disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.
English and Chinese only: Transcripts in languages other than English or Chinese may result in unexpected audio outputs.
Non-Speech Audio: The model focuses solely on speech synthesis and does not handle background noise, music, or other sound effects.
Overlapping Speech: The current model does not explicitly model or generate overlapping speech segments in conversations.
We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.
<!-- BEGIN MICROSOFT SECURITY.MD V1.0.0 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which
includes all source code repositories in our GitHub organizations.
**Please do not report security vulnerabilities through public GitHub issues.**
For security reporting information, locations, contact information, and policies,
please review the latest guidance for Microsoft repositories at
[https://aka.ms/SECURITY.md](https://aka.ms/SECURITY.md).
<!-- END MICROSOFT SECURITY.MD BLOCK -->
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# VibeVoice Colab — T4 Quickstart (1.5B)\n",
"\n",
"This notebook provides a quickstart guide to run VibeVoice on Colab with T4. The T4 GPU can only support the 1.5B model due to memory limitations. Please note that T4 can only use SDPA instead of flash_attention_2, which may result in unstable and lower audio quality. For the best TTS experience, we recommend trying the 7B model on a more powerful GPU.\n",
"\n",
"## Risks and Limitations\n",
"\n",
"While efforts have been made to optimize it through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model (specifically, Qwen2.5 1.5b in this release). Potential for Deepfakes and Disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content."
],
"metadata": {
"id": "WvIaUJD2y0yU"
},
"id": "WvIaUJD2y0yU"
},
{
"cell_type": "markdown",
"source": [
"## Step 1: Setup Environment"
],
"metadata": {
"id": "e8fTKYGx7DZk"
},
"id": "e8fTKYGx7DZk"
},
{
"cell_type": "code",
"source": [
"# Check for T4 GPU\n",
"import torch\n",
"if torch.cuda.is_available() and \"T4\" in torch.cuda.get_device_name(0):\n",
" print(\"✅ T4 GPU detected\")\n",
"else:\n",
" print(\"\"\"\n",
" ⚠️ WARNING: T4 GPU not detected\n",
"\n",
" The recommended runtime for this Colab notebook is \"T4 GPU\".\n",
"\n",
" To change the runtime type:\n",
"\n",
" 1. Click on \"Runtime\" in the top navigation menu\n",
" 2. Click on \"Change runtime type\"\n",
" 3. Select \"T4 GPU\"\n",
" 4. Click \"OK\" if a \"Disconnect and delete runtime\" window appears\n",
" 5. Click on \"Save\"\n",
"\n",
" \"\"\")\n",
"\n",
"# Clone the VibeVoice repository\n",
"![ -d /content/VibeVoice ] || git clone --quiet --branch main --depth 1 https://github.com/microsoft/VibeVoice.git /content/VibeVoice\n",
"print(\"✅ Cloned VibeVoice repository\")\n",
"\n",
"# Install project dependencies\n",
"!uv pip --quiet install --system -e /content/VibeVoice\n",
"print(\"✅ Installed dependencies\")\n",
"\n",
"# Download model (~3 minutes)\n",
"!HF_XET_HIGH_PERFORMANCE=1 hf download microsoft/VibeVoice-1.5B --quiet --local-dir /content/models/VibeVoice-1.5B > /dev/null\n",
"print(\"✅ Downloaded model: microsoft/VibeVoice-1.5B\")\n"
],
"metadata": {
"id": "4wxJ6QHM-ZOb"
},
"id": "4wxJ6QHM-ZOb",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Step 2: Create Transcript"
],
"metadata": {
"id": "pgKlV7153Ifi"
},
"id": "pgKlV7153Ifi"
},
{
"cell_type": "code",
"source": [
"%%writefile /content/my_transcript.txt\n",
"Speaker 1: Can I try VibeVoice with my own example?\n",
"Speaker 2: Of course! VibeVoice is open-source, built to benefit everyone - you're welcome to try it out.\n"
],
"metadata": {
"id": "Yc1N9EHswFxA"
},
"id": "Yc1N9EHswFxA",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Step 3: Generate Audio"
],
"metadata": {
"id": "MBCC6s-F6_hP"
},
"id": "MBCC6s-F6_hP"
},
{
"cell_type": "code",
"source": [
"# Run Python script to generate audio from transcript\n",
"!python /content/VibeVoice/demo/inference_from_file.py \\\n",
" --model_path /content/models/VibeVoice-1.5B \\\n",
" --txt_path /content/my_transcript.txt \\\n",
" --speaker_names Alice Frank\n",
"\n",
"# Display audio controls\n",
"from IPython.display import Audio\n",
"Audio(\"/content/outputs/my_transcript_generated.wav\")\n"
],
"metadata": {
"id": "dYWsLJ-n0Npm"
},
"id": "dYWsLJ-n0Npm",
"execution_count": null,
"outputs": []
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": [],
"machine_shape": "hm",
"name": "VibeVoice_Colab.ipynb",
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
"""
VibeVoice Gradio Demo - High-Quality Dialogue Generation Interface with Streaming Support
"""
import argparse
import json
import os
import sys
import tempfile
import time
from pathlib import Path
from typing import List, Dict, Any, Iterator
from datetime import datetime
import threading
import numpy as np
import gradio as gr
import librosa
import soundfile as sf
import torch
import traceback
from vibevoice.modular.configuration_vibevoice import VibeVoiceConfig
from vibevoice.modular.modeling_vibevoice_inference import VibeVoiceForConditionalGenerationInference
from vibevoice.processor.vibevoice_processor import VibeVoiceProcessor
from vibevoice.modular.streamer import AudioStreamer
from transformers.utils import logging
from transformers import set_seed
logging.set_verbosity_info()
logger = logging.get_logger(__name__)
class VibeVoiceDemo:
def __init__(self, model_path: str, device: str = "cuda", inference_steps: int = 5):
"""Initialize the VibeVoice demo with model loading."""
self.model_path = model_path
self.device = device
self.inference_steps = inference_steps
self.is_generating = False # Track generation state
self.stop_generation = False # Flag to stop generation
self.current_streamer = None # Track current audio streamer
self.load_model()
self.setup_voice_presets()
self.load_example_scripts() # Load example scripts
def load_model(self):
"""Load the VibeVoice model and processor."""
print(f"Loading processor & model from {self.model_path}")
# Load processor
self.processor = VibeVoiceProcessor.from_pretrained(
self.model_path,
)
# Load model
try:
self.model = VibeVoiceForConditionalGenerationInference.from_pretrained(
self.model_path,
torch_dtype=torch.bfloat16,
device_map='cuda',
attn_implementation='flash_attention_2' # flash_attention_2 is recommended
)
except Exception as e:
print(f"[ERROR] : {type(e).__name__}: {e}")
print(traceback.format_exc())
print("Error loading the model. Trying to use SDPA. However, note that only flash_attention_2 has been fully tested, and using SDPA may result in lower audio quality.")
self.model = VibeVoiceForConditionalGenerationInference.from_pretrained(
self.model_path,
torch_dtype=torch.bfloat16,
device_map='cuda',
attn_implementation='sdpa'
)
self.model.eval()
# Use SDE solver by default
self.model.model.noise_scheduler = self.model.model.noise_scheduler.from_config(
self.model.model.noise_scheduler.config,
algorithm_type='sde-dpmsolver++',
beta_schedule='squaredcos_cap_v2'
)
self.model.set_ddpm_inference_steps(num_steps=self.inference_steps)
if hasattr(self.model.model, 'language_model'):
print(f"Language model attention: {self.model.model.language_model.config._attn_implementation}")
def setup_voice_presets(self):
"""Setup voice presets by scanning the voices directory."""
voices_dir = os.path.join(os.path.dirname(__file__), "voices")
# Check if voices directory exists
if not os.path.exists(voices_dir):
print(f"Warning: Voices directory not found at {voices_dir}")
self.voice_presets = {}
self.available_voices = {}
return
# Scan for all supported audio files in the voices directory
self.voice_presets = {}
# Get all supported audio files (.wav, .mp3, .flac, .ogg, .m4a, .aac) in the voices directory
wav_files = [f for f in os.listdir(voices_dir)
if f.lower().endswith(('.wav', '.mp3', '.flac', '.ogg', '.m4a', '.aac')) and os.path.isfile(os.path.join(voices_dir, f))]
# Create dictionary with filename (without extension) as key
for wav_file in wav_files:
# Remove .wav extension to get the name
name = os.path.splitext(wav_file)[0]
# Create full path
full_path = os.path.join(voices_dir, wav_file)
self.voice_presets[name] = full_path
# Sort the voice presets alphabetically by name for better UI
self.voice_presets = dict(sorted(self.voice_presets.items()))
# Filter out voices that don't exist (this is now redundant but kept for safety)
self.available_voices = {
name: path for name, path in self.voice_presets.items()
if os.path.exists(path)
}
if not self.available_voices:
raise gr.Error("No voice presets found. Please add .wav files to the demo/voices directory.")
print(f"Found {len(self.available_voices)} voice files in {voices_dir}")
print(f"Available voices: {', '.join(self.available_voices.keys())}")
def read_audio(self, audio_path: str, target_sr: int = 24000) -> np.ndarray:
"""Read and preprocess audio file."""
try:
wav, sr = sf.read(audio_path)
if len(wav.shape) > 1:
wav = np.mean(wav, axis=1)
if sr != target_sr:
wav = librosa.resample(wav, orig_sr=sr, target_sr=target_sr)
return wav
except Exception as e:
print(f"Error reading audio {audio_path}: {e}")
return np.array([])
def generate_podcast_streaming(self,
num_speakers: int,
script: str,
speaker_1: str = None,
speaker_2: str = None,
speaker_3: str = None,
speaker_4: str = None,
cfg_scale: float = 1.3) -> Iterator[tuple]:
try:
# Reset stop flag and set generating state
self.stop_generation = False
self.is_generating = True
# Validate inputs
if not script.strip():
self.is_generating = False
raise gr.Error("Error: Please provide a script.")
# Defend against common mistake
script = script.replace("’", "'")
if num_speakers < 1 or num_speakers > 4:
self.is_generating = False
raise gr.Error("Error: Number of speakers must be between 1 and 4.")
# Collect selected speakers
selected_speakers = [speaker_1, speaker_2, speaker_3, speaker_4][:num_speakers]
# Validate speaker selections
for i, speaker in enumerate(selected_speakers):
if not speaker or speaker not in self.available_voices:
self.is_generating = False
raise gr.Error(f"Error: Please select a valid speaker for Speaker {i+1}.")
# Build initial log
log = f"🎙️ Generating podcast with {num_speakers} speakers\n"
log += f"📊 Parameters: CFG Scale={cfg_scale}, Inference Steps={self.inference_steps}\n"
log += f"🎭 Speakers: {', '.join(selected_speakers)}\n"
# Check for stop signal
if self.stop_generation:
self.is_generating = False
yield None, "🛑 Generation stopped by user", gr.update(visible=False)
return
# Load voice samples
voice_samples = []
for speaker_name in selected_speakers:
audio_path = self.available_voices[speaker_name]
audio_data = self.read_audio(audio_path)
if len(audio_data) == 0:
self.is_generating = False
raise gr.Error(f"Error: Failed to load audio for {speaker_name}")
voice_samples.append(audio_data)
# log += f"✅ Loaded {len(voice_samples)} voice samples\n"
# Check for stop signal
if self.stop_generation:
self.is_generating = False
yield None, "🛑 Generation stopped by user", gr.update(visible=False)
return
# Parse script to assign speaker ID's
lines = script.strip().split('\n')
formatted_script_lines = []
for line in lines:
line = line.strip()
if not line:
continue
# Check if line already has speaker format
if line.startswith('Speaker ') and ':' in line:
formatted_script_lines.append(line)
else:
# Auto-assign to speakers in rotation
speaker_id = len(formatted_script_lines) % num_speakers
formatted_script_lines.append(f"Speaker {speaker_id}: {line}")
formatted_script = '\n'.join(formatted_script_lines)
log += f"📝 Formatted script with {len(formatted_script_lines)} turns\n\n"
log += "🔄 Processing with VibeVoice (streaming mode)...\n"
# Check for stop signal before processing
if self.stop_generation:
self.is_generating = False
yield None, "🛑 Generation stopped by user", gr.update(visible=False)
return
start_time = time.time()
inputs = self.processor(
text=[formatted_script],
voice_samples=[voice_samples],
padding=True,
return_tensors="pt",
return_attention_mask=True,
)
# Create audio streamer
audio_streamer = AudioStreamer(
batch_size=1,
stop_signal=None,
timeout=None
)
# Store current streamer for potential stopping
self.current_streamer = audio_streamer
# Start generation in a separate thread
generation_thread = threading.Thread(
target=self._generate_with_streamer,
args=(inputs, cfg_scale, audio_streamer)
)
generation_thread.start()
# Wait for generation to actually start producing audio
time.sleep(1) # Reduced from 3 to 1 second
# Check for stop signal after thread start
if self.stop_generation:
audio_streamer.end()
generation_thread.join(timeout=5.0) # Wait up to 5 seconds for thread to finish
self.is_generating = False
yield None, "🛑 Generation stopped by user", gr.update(visible=False)
return
# Collect audio chunks as they arrive
sample_rate = 24000
all_audio_chunks = [] # For final statistics
pending_chunks = [] # Buffer for accumulating small chunks
chunk_count = 0
last_yield_time = time.time()
min_yield_interval = 15 # Yield every 15 seconds
min_chunk_size = sample_rate * 30 # At least 30 seconds of audio before the first yield
# Get the stream for the first (and only) sample
audio_stream = audio_streamer.get_stream(0)
has_yielded_audio = False
has_received_chunks = False # Track if we received any chunks at all
for audio_chunk in audio_stream:
# Check for stop signal in the streaming loop
if self.stop_generation:
audio_streamer.end()
break
chunk_count += 1
has_received_chunks = True # Mark that we received at least one chunk
# Convert tensor to numpy
if torch.is_tensor(audio_chunk):
# Convert bfloat16 to float32 first, then to numpy
if audio_chunk.dtype == torch.bfloat16:
audio_chunk = audio_chunk.float()
audio_np = audio_chunk.cpu().numpy().astype(np.float32)
else:
audio_np = np.array(audio_chunk, dtype=np.float32)
# Ensure audio is 1D and properly normalized
if len(audio_np.shape) > 1:
audio_np = audio_np.squeeze()
# Convert to 16-bit for Gradio
audio_16bit = convert_to_16_bit_wav(audio_np)
# Store for final statistics
all_audio_chunks.append(audio_16bit)
# Add to pending chunks buffer
pending_chunks.append(audio_16bit)
# Calculate pending audio size
pending_audio_size = sum(len(chunk) for chunk in pending_chunks)
current_time = time.time()
time_since_last_yield = current_time - last_yield_time
# Decide whether to yield
should_yield = False
if not has_yielded_audio and pending_audio_size >= min_chunk_size:
# First yield: wait for minimum chunk size
should_yield = True
has_yielded_audio = True
elif has_yielded_audio and (pending_audio_size >= min_chunk_size or time_since_last_yield >= min_yield_interval):
# Subsequent yields: either enough audio or enough time has passed
should_yield = True
if should_yield and pending_chunks:
# Concatenate and yield only the new audio chunks
new_audio = np.concatenate(pending_chunks)
new_duration = len(new_audio) / sample_rate
total_duration = sum(len(chunk) for chunk in all_audio_chunks) / sample_rate
log_update = log + f"🎵 Streaming: {total_duration:.1f}s generated (chunk {chunk_count})\n"
# Yield streaming audio chunk and keep complete_audio as None during streaming
yield (sample_rate, new_audio), None, log_update, gr.update(visible=True)
# Clear pending chunks after yielding
pending_chunks = []
last_yield_time = current_time
# Yield any remaining chunks
if pending_chunks:
final_new_audio = np.concatenate(pending_chunks)
total_duration = sum(len(chunk) for chunk in all_audio_chunks) / sample_rate
log_update = log + f"🎵 Streaming final chunk: {total_duration:.1f}s total\n"
yield (sample_rate, final_new_audio), None, log_update, gr.update(visible=True)
has_yielded_audio = True # Mark that we yielded audio
# Wait for generation to complete (with timeout to prevent hanging)
generation_thread.join(timeout=5.0) # Increased timeout to 5 seconds
# If thread is still alive after timeout, force end
if generation_thread.is_alive():
print("Warning: Generation thread did not complete within timeout")
audio_streamer.end()
generation_thread.join(timeout=5.0)
# Clean up
self.current_streamer = None
self.is_generating = False
generation_time = time.time() - start_time
# Check if stopped by user
if self.stop_generation:
yield None, None, "🛑 Generation stopped by user", gr.update(visible=False)
return
# Debug logging
# print(f"Debug: has_received_chunks={has_received_chunks}, chunk_count={chunk_count}, all_audio_chunks length={len(all_audio_chunks)}")
# Check if we received any chunks but didn't yield audio
if has_received_chunks and not has_yielded_audio and all_audio_chunks:
# We have chunks but didn't meet the yield criteria, yield them now
complete_audio = np.concatenate(all_audio_chunks)
final_duration = len(complete_audio) / sample_rate
final_log = log + f"⏱️ Generation completed in {generation_time:.2f} seconds\n"
final_log += f"🎵 Final audio duration: {final_duration:.2f} seconds\n"
final_log += f"📊 Total chunks: {chunk_count}\n"
final_log += "✨ Generation successful! Complete audio is ready.\n"
final_log += "💡 Not satisfied? You can regenerate or adjust the CFG scale for different results."
# Yield the complete audio
yield None, (sample_rate, complete_audio), final_log, gr.update(visible=False)
return
if not has_received_chunks:
error_log = log + f"\n❌ Error: No audio chunks were received from the model. Generation time: {generation_time:.2f}s"
yield None, None, error_log, gr.update(visible=False)
return
if not has_yielded_audio:
error_log = log + f"\n❌ Error: Audio was generated but not streamed. Chunk count: {chunk_count}"
yield None, None, error_log, gr.update(visible=False)
return
# Prepare the complete audio
if all_audio_chunks:
complete_audio = np.concatenate(all_audio_chunks)
final_duration = len(complete_audio) / sample_rate
final_log = log + f"⏱️ Generation completed in {generation_time:.2f} seconds\n"
final_log += f"🎵 Final audio duration: {final_duration:.2f} seconds\n"
final_log += f"📊 Total chunks: {chunk_count}\n"
final_log += "✨ Generation successful! Complete audio is ready in the 'Complete Audio' tab.\n"
final_log += "💡 Not satisfied? You can regenerate or adjust the CFG scale for different results."
# Final yield: Clear streaming audio and provide complete audio
yield None, (sample_rate, complete_audio), final_log, gr.update(visible=False)
else:
final_log = log + "❌ No audio was generated."
yield None, None, final_log, gr.update(visible=False)
except gr.Error as e:
# Handle Gradio-specific errors (like input validation)
self.is_generating = False
self.current_streamer = None
error_msg = f"❌ Input Error: {str(e)}"
print(error_msg)
yield None, None, error_msg, gr.update(visible=False)
except Exception as e:
self.is_generating = False
self.current_streamer = None
error_msg = f"❌ An unexpected error occurred: {str(e)}"
print(error_msg)
import traceback
traceback.print_exc()
yield None, None, error_msg, gr.update(visible=False)
def _generate_with_streamer(self, inputs, cfg_scale, audio_streamer):
"""Helper method to run generation with streamer in a separate thread."""
try:
# Check for stop signal before starting generation
if self.stop_generation:
audio_streamer.end()
return
# Define a stop check function that can be called from generate
def check_stop_generation():
return self.stop_generation
outputs = self.model.generate(
**inputs,
max_new_tokens=None,
cfg_scale=cfg_scale,
tokenizer=self.processor.tokenizer,
generation_config={
'do_sample': False,
},
audio_streamer=audio_streamer,
stop_check_fn=check_stop_generation, # Pass the stop check function
verbose=False, # Disable verbose in streaming mode
refresh_negative=True,
)
except Exception as e:
print(f"Error in generation thread: {e}")
traceback.print_exc()
# Make sure to end the stream on error
audio_streamer.end()
def stop_audio_generation(self):
"""Stop the current audio generation process."""
self.stop_generation = True
if self.current_streamer is not None:
try:
self.current_streamer.end()
except Exception as e:
print(f"Error stopping streamer: {e}")
print("🛑 Audio generation stop requested")
def load_example_scripts(self):
"""Load example scripts from the text_examples directory."""
examples_dir = os.path.join(os.path.dirname(__file__), "text_examples")
self.example_scripts = []
# Check if text_examples directory exists
if not os.path.exists(examples_dir):
print(f"Warning: text_examples directory not found at {examples_dir}")
return
# Get all .txt files in the text_examples directory
txt_files = sorted([f for f in os.listdir(examples_dir)
if f.lower().endswith('.txt') and os.path.isfile(os.path.join(examples_dir, f))])
for txt_file in txt_files:
file_path = os.path.join(examples_dir, txt_file)
import re
# Check if filename contains a time pattern like "45min", "90min", etc.
time_pattern = re.search(r'(\d+)min', txt_file.lower())
if time_pattern:
minutes = int(time_pattern.group(1))
if minutes > 15:
print(f"Skipping {txt_file}: duration {minutes} minutes exceeds 15-minute limit")
continue
try:
with open(file_path, 'r', encoding='utf-8') as f:
script_content = f.read().strip()
# Remove empty lines and lines with only whitespace
script_content = '\n'.join(line for line in script_content.split('\n') if line.strip())
if not script_content:
continue
# Parse the script to determine number of speakers
num_speakers = self._get_num_speakers_from_script(script_content)
# Add to examples list as [num_speakers, script_content]
self.example_scripts.append([num_speakers, script_content])
print(f"Loaded example: {txt_file} with {num_speakers} speakers")
except Exception as e:
print(f"Error loading example script {txt_file}: {e}")
if self.example_scripts:
print(f"Successfully loaded {len(self.example_scripts)} example scripts")
else:
print("No example scripts were loaded")
def _get_num_speakers_from_script(self, script: str) -> int:
"""Determine the number of unique speakers in a script."""
import re
speakers = set()
lines = script.strip().split('\n')
for line in lines:
# Use regex to find speaker patterns
match = re.match(r'^Speaker\s+(\d+)\s*:', line.strip(), re.IGNORECASE)
if match:
speaker_id = int(match.group(1))
speakers.add(speaker_id)
# If no speakers found, default to 1
if not speakers:
return 1
# Return the maximum speaker ID + 1 (assuming 0-based indexing)
# or the count of unique speakers if they're 1-based
max_speaker = max(speakers)
min_speaker = min(speakers)
if min_speaker == 0:
return max_speaker + 1
else:
# Assume 1-based indexing, return the count
return len(speakers)
def create_demo_interface(demo_instance: VibeVoiceDemo):
"""Create the Gradio interface with streaming support."""
# Custom CSS for high-end aesthetics with lighter theme
custom_css = """
/* Modern light theme with gradients */
.gradio-container {
background: linear-gradient(135deg, #f8fafc 0%, #e2e8f0 100%);
font-family: 'SF Pro Display', -apple-system, BlinkMacSystemFont, sans-serif;
}
/* Header styling */
.main-header {
background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
padding: 2rem;
border-radius: 20px;
margin-bottom: 2rem;
text-align: center;
box-shadow: 0 10px 40px rgba(102, 126, 234, 0.3);
}
.main-header h1 {
color: white;
font-size: 2.5rem;
font-weight: 700;
margin: 0;
text-shadow: 0 2px 4px rgba(0,0,0,0.3);
}
.main-header p {
color: rgba(255,255,255,0.9);
font-size: 1.1rem;
margin: 0.5rem 0 0 0;
}
/* Card styling */
.settings-card, .generation-card {
background: rgba(255, 255, 255, 0.8);
backdrop-filter: blur(10px);
border: 1px solid rgba(226, 232, 240, 0.8);
border-radius: 16px;
padding: 1.5rem;
margin-bottom: 1rem;
box-shadow: 0 8px 32px rgba(0, 0, 0, 0.1);
}
/* Speaker selection styling */
.speaker-grid {
display: grid;
gap: 1rem;
margin-bottom: 1rem;
}
.speaker-item {
background: linear-gradient(135deg, #e2e8f0 0%, #cbd5e1 100%);
border: 1px solid rgba(148, 163, 184, 0.4);
border-radius: 12px;
padding: 1rem;
color: #374151;
font-weight: 500;
}
/* Streaming indicator */
.streaming-indicator {
display: inline-block;
width: 10px;
height: 10px;
background: #22c55e;
border-radius: 50%;
margin-right: 8px;
animation: pulse 1.5s infinite;
}
@keyframes pulse {
0% { opacity: 1; transform: scale(1); }
50% { opacity: 0.5; transform: scale(1.1); }
100% { opacity: 1; transform: scale(1); }
}
/* Queue status styling */
.queue-status {
background: linear-gradient(135deg, #f0f9ff 0%, #e0f2fe 100%);
border: 1px solid rgba(14, 165, 233, 0.3);
border-radius: 8px;
padding: 0.75rem;
margin: 0.5rem 0;
text-align: center;
font-size: 0.9rem;
color: #0369a1;
}
.generate-btn {
background: linear-gradient(135deg, #059669 0%, #0d9488 100%);
border: none;
border-radius: 12px;
padding: 1rem 2rem;
color: white;
font-weight: 600;
font-size: 1.1rem;
box-shadow: 0 4px 20px rgba(5, 150, 105, 0.4);
transition: all 0.3s ease;
}
.generate-btn:hover {
transform: translateY(-2px);
box-shadow: 0 6px 25px rgba(5, 150, 105, 0.6);
}
.stop-btn {
background: linear-gradient(135deg, #ef4444 0%, #dc2626 100%);
border: none;
border-radius: 12px;
padding: 1rem 2rem;
color: white;
font-weight: 600;
font-size: 1.1rem;
box-shadow: 0 4px 20px rgba(239, 68, 68, 0.4);
transition: all 0.3s ease;
}
.stop-btn:hover {
transform: translateY(-2px);
box-shadow: 0 6px 25px rgba(239, 68, 68, 0.6);
}
/* Audio player styling */
.audio-output {
background: linear-gradient(135deg, #f1f5f9 0%, #e2e8f0 100%);
border-radius: 16px;
padding: 1.5rem;
border: 1px solid rgba(148, 163, 184, 0.3);
}
.complete-audio-section {
margin-top: 1rem;
padding: 1rem;
background: linear-gradient(135deg, #f0fdf4 0%, #dcfce7 100%);
border: 1px solid rgba(34, 197, 94, 0.3);
border-radius: 12px;
}
/* Text areas */
.script-input, .log-output {
background: rgba(255, 255, 255, 0.9) !important;
border: 1px solid rgba(148, 163, 184, 0.4) !important;
border-radius: 12px !important;
color: #1e293b !important;
font-family: 'JetBrains Mono', monospace !important;
}
.script-input::placeholder {
color: #64748b !important;
}
/* Sliders */
.slider-container {
background: rgba(248, 250, 252, 0.8);
border: 1px solid rgba(226, 232, 240, 0.6);
border-radius: 8px;
padding: 1rem;
margin: 0.5rem 0;
}
/* Labels and text */
.gradio-container label {
color: #374151 !important;
font-weight: 600 !important;
}
.gradio-container .markdown {
color: #1f2937 !important;
}
/* Responsive design */
@media (max-width: 768px) {
.main-header h1 { font-size: 2rem; }
.settings-card, .generation-card { padding: 1rem; }
}
/* Random example button styling - more subtle professional color */
.random-btn {
background: linear-gradient(135deg, #64748b 0%, #475569 100%);
border: none;
border-radius: 12px;
padding: 1rem 1.5rem;
color: white;
font-weight: 600;
font-size: 1rem;
box-shadow: 0 4px 20px rgba(100, 116, 139, 0.3);
transition: all 0.3s ease;
display: inline-flex;
align-items: center;
gap: 0.5rem;
}
.random-btn:hover {
transform: translateY(-2px);
box-shadow: 0 6px 25px rgba(100, 116, 139, 0.4);
background: linear-gradient(135deg, #475569 0%, #334155 100%);
}
"""
with gr.Blocks(
title="VibeVoice - AI Podcast Generator",
css=custom_css,
theme=gr.themes.Soft(
primary_hue="blue",
secondary_hue="purple",
neutral_hue="slate",
)
) as interface:
# Header
gr.HTML("""
<div class="main-header">
<h1>🎙️ Vibe Podcasting </h1>
<p>Generating Long-form Multi-speaker AI Podcast with VibeVoice</p>
</div>
""")
with gr.Row():
# Left column - Settings
with gr.Column(scale=1, elem_classes="settings-card"):
gr.Markdown("### 🎛️ **Podcast Settings**")
# Number of speakers
num_speakers = gr.Slider(
minimum=1,
maximum=4,
value=2,
step=1,
label="Number of Speakers",
elem_classes="slider-container"
)
# Speaker selection
gr.Markdown("### 🎭 **Speaker Selection**")
available_speaker_names = list(demo_instance.available_voices.keys())
# default_speakers = available_speaker_names[:4] if len(available_speaker_names) >= 4 else available_speaker_names
default_speakers = ['en-Alice_woman', 'en-Carter_man', 'en-Frank_man', 'en-Maya_woman']
speaker_selections = []
for i in range(4):
default_value = default_speakers[i] if i < len(default_speakers) else None
speaker = gr.Dropdown(
choices=available_speaker_names,
value=default_value,
label=f"Speaker {i+1}",
visible=(i < 2), # Initially show only first 2 speakers
elem_classes="speaker-item"
)
speaker_selections.append(speaker)
# Advanced settings
gr.Markdown("### ⚙️ **Advanced Settings**")
# Sampling parameters (contains all generation settings)
with gr.Accordion("Generation Parameters", open=False):
cfg_scale = gr.Slider(
minimum=1.0,
maximum=2.0,
value=1.3,
step=0.05,
label="CFG Scale (Guidance Strength)",
# info="Higher values increase adherence to text",
elem_classes="slider-container"
)
# Right column - Generation
with gr.Column(scale=2, elem_classes="generation-card"):
gr.Markdown("### 📝 **Script Input**")
script_input = gr.Textbox(
label="Conversation Script",
placeholder="""Enter your podcast script here. You can format it as:
Speaker 1: Welcome to our podcast today!
Speaker 2: Thanks for having me. I'm excited to discuss...
Or paste text directly and it will auto-assign speakers.""",
lines=12,
max_lines=20,
elem_classes="script-input"
)
# Button row with Random Example on the left and Generate on the right
with gr.Row():
# Random example button (now on the left)
random_example_btn = gr.Button(
"🎲 Random Example",
size="lg",
variant="secondary",
elem_classes="random-btn",
scale=1 # Smaller width
)
# Generate button (now on the right)
generate_btn = gr.Button(
"🚀 Generate Podcast",
size="lg",
variant="primary",
elem_classes="generate-btn",
scale=2 # Wider than random button
)
# Stop button
stop_btn = gr.Button(
"🛑 Stop Generation",
size="lg",
variant="stop",
elem_classes="stop-btn",
visible=False
)
# Streaming status indicator
streaming_status = gr.HTML(
value="""
<div style="background: linear-gradient(135deg, #dcfce7 0%, #bbf7d0 100%);
border: 1px solid rgba(34, 197, 94, 0.3);
border-radius: 8px;
padding: 0.75rem;
margin: 0.5rem 0;
text-align: center;
font-size: 0.9rem;
color: #166534;">
<span class="streaming-indicator"></span>
<strong>LIVE STREAMING</strong> - Audio is being generated in real-time
</div>
""",
visible=False,
elem_id="streaming-status"
)
# Output section
gr.Markdown("### 🎵 **Generated Podcast**")
# Streaming audio output (outside of tabs for simpler handling)
audio_output = gr.Audio(
label="Streaming Audio (Real-time)",
type="numpy",
elem_classes="audio-output",
streaming=True, # Enable streaming mode
autoplay=True,
show_download_button=False, # Hide download button for the streaming player
visible=True
)
# Complete audio output (non-streaming)
complete_audio_output = gr.Audio(
label="Complete Podcast (Download after generation)",
type="numpy",
elem_classes="audio-output complete-audio-section",
streaming=False, # Non-streaming mode
autoplay=False,
show_download_button=True, # Explicitly show download button
visible=False # Initially hidden, shown when audio is ready
)
gr.Markdown("""
*💡 **Streaming**: Audio plays as it's being generated (may have slight pauses)*
*💡 **Complete Audio**: Will appear below after generation finishes*
""")
# Generation log
log_output = gr.Textbox(
label="Generation Log",
lines=8,
max_lines=15,
interactive=False,
elem_classes="log-output"
)
def update_speaker_visibility(num_speakers):
updates = []
for i in range(4):
updates.append(gr.update(visible=(i < num_speakers)))
return updates
num_speakers.change(
fn=update_speaker_visibility,
inputs=[num_speakers],
outputs=speaker_selections
)
# Main generation function with streaming
def generate_podcast_wrapper(num_speakers, script, *speakers_and_params):
"""Wrapper function to handle the streaming generation call."""
try:
# Extract speakers and parameters
speakers = speakers_and_params[:4] # First 4 are speaker selections
cfg_scale = speakers_and_params[4] # CFG scale
# Clear outputs and reset visibility at start
yield None, gr.update(value=None, visible=False), "🎙️ Starting generation...", gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)
# The generator will yield multiple times
final_log = "Starting generation..."
for streaming_audio, complete_audio, log, streaming_visible in demo_instance.generate_podcast_streaming(
num_speakers=int(num_speakers),
script=script,
speaker_1=speakers[0],
speaker_2=speakers[1],
speaker_3=speakers[2],
speaker_4=speakers[3],
cfg_scale=cfg_scale
):
final_log = log
# Check if we have complete audio (final yield)
if complete_audio is not None:
# Final state: clear streaming, show complete audio
yield None, gr.update(value=complete_audio, visible=True), log, gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)
else:
# Streaming state: update streaming audio only
if streaming_audio is not None:
yield streaming_audio, gr.update(visible=False), log, streaming_visible, gr.update(visible=False), gr.update(visible=True)
else:
# No new audio, just update status
yield None, gr.update(visible=False), log, streaming_visible, gr.update(visible=False), gr.update(visible=True)
except Exception as e:
error_msg = f"❌ A critical error occurred in the wrapper: {str(e)}"
print(error_msg)
import traceback
traceback.print_exc()
# Reset button states on error
yield None, gr.update(value=None, visible=False), error_msg, gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)
def stop_generation_handler():
"""Handle stopping generation."""
demo_instance.stop_audio_generation()
# Return values for: log_output, streaming_status, generate_btn, stop_btn
return "🛑 Generation stopped.", gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)
# Add a clear audio function
def clear_audio_outputs():
"""Clear both audio outputs before starting new generation."""
return None, gr.update(value=None, visible=False)
# Connect generation button with streaming outputs
generate_btn.click(
fn=clear_audio_outputs,
inputs=[],
outputs=[audio_output, complete_audio_output],
queue=False
).then( # Immediate UI update to hide Generate, show Stop (non-queued)
fn=lambda: (gr.update(visible=False), gr.update(visible=True)),
inputs=[],
outputs=[generate_btn, stop_btn],
queue=False
).then(
fn=generate_podcast_wrapper,
inputs=[num_speakers, script_input] + speaker_selections + [cfg_scale],
outputs=[audio_output, complete_audio_output, log_output, streaming_status, generate_btn, stop_btn],
queue=True # Enable Gradio's built-in queue
)
# Connect stop button
stop_btn.click(
fn=stop_generation_handler,
inputs=[],
outputs=[log_output, streaming_status, generate_btn, stop_btn],
queue=False # Don't queue stop requests
).then(
# Clear both audio outputs after stopping
fn=lambda: (None, None),
inputs=[],
outputs=[audio_output, complete_audio_output],
queue=False
)
# Function to randomly select an example
def load_random_example():
"""Randomly select and load an example script."""
import random
# Get available examples
if hasattr(demo_instance, 'example_scripts') and demo_instance.example_scripts:
example_scripts = demo_instance.example_scripts
else:
# Fallback to default
example_scripts = [
[2, "Speaker 0: Welcome to our AI podcast demonstration!\nSpeaker 1: Thanks for having me. This is exciting!"]
]
# Randomly select one
if example_scripts:
selected = random.choice(example_scripts)
num_speakers_value = selected[0]
script_value = selected[1]
# Return the values to update the UI
return num_speakers_value, script_value
# Default values if no examples
return 2, ""
# Connect random example button
random_example_btn.click(
fn=load_random_example,
inputs=[],
outputs=[num_speakers, script_input],
queue=False # Don't queue this simple operation
)
# Add usage tips
gr.Markdown("""
### 💡 **Usage Tips**
- Click **🚀 Generate Podcast** to start audio generation
- **Live Streaming** tab shows audio as it's generated (may have slight pauses)
- **Complete Audio** tab provides the full, uninterrupted podcast after generation
- During generation, you can click **🛑 Stop Generation** to interrupt the process
- The streaming indicator shows real-time generation progress
""")
# Add example scripts
gr.Markdown("### 📚 **Example Scripts**")
# Use dynamically loaded examples if available, otherwise provide a default
if hasattr(demo_instance, 'example_scripts') and demo_instance.example_scripts:
example_scripts = demo_instance.example_scripts
else:
# Fallback to a simple default example if no scripts loaded
example_scripts = [
[1, "Speaker 1: Welcome to our AI podcast demonstration! This is a sample script showing how VibeVoice can generate natural-sounding speech."]
]
gr.Examples(
examples=example_scripts,
inputs=[num_speakers, script_input],
label="Try these example scripts:"
)
# --- Risks & limitations (footer) ---
gr.Markdown(
"""
## Risks and limitations
While efforts have been made to optimize it through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model (specifically, Qwen2.5 1.5b in this release).
Potential for Deepfakes and Disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.
""",
elem_classes="generation-card", # Optional: reuse the card styling
)
return interface
def convert_to_16_bit_wav(data):
# Check if data is a tensor and move to cpu
if torch.is_tensor(data):
data = data.detach().cpu().numpy()
# Ensure data is numpy array
data = np.array(data)
# Normalize to range [-1, 1] if it's not already
if np.max(np.abs(data)) > 1.0:
data = data / np.max(np.abs(data))
# Scale to 16-bit integer range
data = (data * 32767).astype(np.int16)
return data
def parse_args():
parser = argparse.ArgumentParser(description="VibeVoice Gradio Demo")
parser.add_argument(
"--model_path",
type=str,
default="/tmp/vibevoice-model",
help="Path to the VibeVoice model directory",
)
parser.add_argument(
"--device",
type=str,
default="cuda" if torch.cuda.is_available() else "cpu",
help="Device for inference",
)
parser.add_argument(
"--inference_steps",
type=int,
default=10,
help="Number of inference steps for DDPM (not exposed to users)",
)
parser.add_argument(
"--share",
action="store_true",
help="Share the demo publicly via Gradio",
)
parser.add_argument(
"--port",
type=int,
default=7860,
help="Port to run the demo on",
)
return parser.parse_args()
def main():
"""Main function to run the demo."""
args = parse_args()
set_seed(42) # Set a fixed seed for reproducibility
print("🎙️ Initializing VibeVoice Demo with Streaming Support...")
# Initialize demo instance
demo_instance = VibeVoiceDemo(
model_path=args.model_path,
device=args.device,
inference_steps=args.inference_steps
)
# Create interface
interface = create_demo_interface(demo_instance)
print(f"🚀 Launching demo on port {args.port}")
print(f"📁 Model path: {args.model_path}")
print(f"🎭 Available voices: {len(demo_instance.available_voices)}")
print(f"🔴 Streaming mode: ENABLED")
print(f"🔒 Session isolation: ENABLED")
# Launch the interface
try:
interface.queue(
max_size=20, # Maximum queue size
default_concurrency_limit=1 # Process one request at a time
).launch(
share=args.share,
server_port=args.port,
server_name="0.0.0.0" if args.share else "127.0.0.1",
show_error=True,
show_api=False # Hide API docs for cleaner interface
)
except KeyboardInterrupt:
print("\n🛑 Shutting down gracefully...")
except Exception as e:
print(f"❌ Server error: {e}")
raise
if __name__ == "__main__":
main()
import argparse
import os
import re
import traceback
from typing import List, Tuple, Union, Dict, Any
import time
import torch
from vibevoice.modular.modeling_vibevoice_inference import VibeVoiceForConditionalGenerationInference
from vibevoice.processor.vibevoice_processor import VibeVoiceProcessor
from transformers.utils import logging
logging.set_verbosity_info()
logger = logging.get_logger(__name__)
class VoiceMapper:
"""Maps speaker names to voice file paths"""
def __init__(self):
self.setup_voice_presets()
# Also register short names derived from the preset wav filenames
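# e.g. a preset file named "en-Alice_woman.wav" also becomes reachable as "Alice"
# (illustrative; assumes presets follow a "<lang>-<Name>_<description>.wav" naming pattern)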
new_dict = {}
for name, path in self.voice_presets.items():
if '_' in name:
name = name.split('_')[0]
if '-' in name:
name = name.split('-')[-1]
new_dict[name] = path
self.voice_presets.update(new_dict)
# print(list(self.voice_presets.keys()))
def setup_voice_presets(self):
"""Setup voice presets by scanning the voices directory."""
voices_dir = os.path.join(os.path.dirname(__file__), "voices")
# Check if voices directory exists
if not os.path.exists(voices_dir):
print(f"Warning: Voices directory not found at {voices_dir}")
self.voice_presets = {}
self.available_voices = {}
return
# Scan for all WAV files in the voices directory
self.voice_presets = {}
# Get all .wav files in the voices directory
wav_files = [f for f in os.listdir(voices_dir)
if f.lower().endswith('.wav') and os.path.isfile(os.path.join(voices_dir, f))]
# Create dictionary with filename (without extension) as key
for wav_file in wav_files:
# Remove .wav extension to get the name
name = os.path.splitext(wav_file)[0]
# Create full path
full_path = os.path.join(voices_dir, wav_file)
self.voice_presets[name] = full_path
# Sort the voice presets alphabetically by name for better UI
self.voice_presets = dict(sorted(self.voice_presets.items()))
# Filter out voices that don't exist (this is now redundant but kept for safety)
self.available_voices = {
name: path for name, path in self.voice_presets.items()
if os.path.exists(path)
}
print(f"Found {len(self.available_voices)} voice files in {voices_dir}")
print(f"Available voices: {', '.join(self.available_voices.keys())}")
def get_voice_path(self, speaker_name: str) -> str:
"""Get voice file path for a given speaker name"""
# First try exact match
if speaker_name in self.voice_presets:
return self.voice_presets[speaker_name]
# Try partial matching (case insensitive)
speaker_lower = speaker_name.lower()
for preset_name, path in self.voice_presets.items():
if preset_name.lower() in speaker_lower or speaker_lower in preset_name.lower():
return path
# Default to first voice if no match found
default_voice = list(self.voice_presets.values())[0]
print(f"Warning: No voice preset found for '{speaker_name}', using default voice: {default_voice}")
return default_voice
def parse_txt_script(txt_content: str) -> Tuple[List[str], List[str]]:
"""
Parse txt script content and extract speakers and their text
Fixed pattern: Speaker 1, Speaker 2, Speaker 3, Speaker 4
Returns: (scripts, speaker_numbers)
"""
lines = txt_content.strip().split('\n')
scripts = []
speaker_numbers = []
# Pattern to match "Speaker X:" format where X is a number
speaker_pattern = r'^Speaker\s+(\d+):\s*(.*)$'
current_speaker = None
current_text = ""
for line in lines:
line = line.strip()
if not line:
continue
match = re.match(speaker_pattern, line, re.IGNORECASE)
if match:
# If we have accumulated text from previous speaker, save it
if current_speaker and current_text:
scripts.append(f"Speaker {current_speaker}: {current_text.strip()}")
speaker_numbers.append(current_speaker)
# Start new speaker
current_speaker = match.group(1).strip()
current_text = match.group(2).strip()
else:
# Continue text for current speaker
if current_text:
current_text += " " + line
else:
current_text = line
# Don't forget the last speaker
if current_speaker and current_text:
scripts.append(f"Speaker {current_speaker}: {current_text.strip()}")
speaker_numbers.append(current_speaker)
return scripts, speaker_numbers
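# Illustrative example (hypothetical input, shown for clarity):
#   parse_txt_script("Speaker 1: Hello there.\nSpeaker 2: Hi!\nNice to meet you.")
#   -> (["Speaker 1: Hello there.", "Speaker 2: Hi! Nice to meet you."], ["1", "2"])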
def parse_args():
parser = argparse.ArgumentParser(description="VibeVoice Processor TXT Input Test")
parser.add_argument(
"--model_path",
type=str,
default="microsoft/VibeVoice-1.5b",
help="Path to the HuggingFace model directory",
)
parser.add_argument(
"--txt_path",
type=str,
default="demo/text_examples/1p_abs.txt",
help="Path to the txt file containing the script",
)
parser.add_argument(
"--speaker_names",
type=str,
nargs='+',
default='Andrew',
help="Speaker names in order (e.g., --speaker_names Andrew Ava 'Bill Gates')",
)
parser.add_argument(
"--output_dir",
type=str,
default="./outputs",
help="Directory to save output audio files",
)
parser.add_argument(
"--device",
type=str,
default="cuda" if torch.cuda.is_available() else "cpu",
help="Device for tensor tests",
)
parser.add_argument(
"--cfg_scale",
type=float,
default=1.3,
help="CFG (Classifier-Free Guidance) scale for generation (default: 1.3)",
)
return parser.parse_args()
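# Illustrative invocation (script filename assumed):
#   python demo/inference_from_file.py --txt_path demo/text_examples/1p_abs.txt --speaker_names Andrew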
def main():
args = parse_args()
# Initialize voice mapper
voice_mapper = VoiceMapper()
# Check if txt file exists
if not os.path.exists(args.txt_path):
print(f"Error: txt file not found: {args.txt_path}")
return
# Read and parse txt file
print(f"Reading script from: {args.txt_path}")
with open(args.txt_path, 'r', encoding='utf-8') as f:
txt_content = f.read()
# Parse the txt content to get speaker numbers
scripts, speaker_numbers = parse_txt_script(txt_content)
if not scripts:
print("Error: No valid speaker scripts found in the txt file")
return
print(f"Found {len(scripts)} speaker segments:")
for i, (script, speaker_num) in enumerate(zip(scripts, speaker_numbers)):
print(f" {i+1}. Speaker {speaker_num}")
print(f" Text preview: {script[:100]}...")
# Map speaker numbers to provided speaker names
speaker_name_mapping = {}
speaker_names_list = args.speaker_names if isinstance(args.speaker_names, list) else [args.speaker_names]
for i, name in enumerate(speaker_names_list, 1):
speaker_name_mapping[str(i)] = name
print(f"\nSpeaker mapping:")
for speaker_num in set(speaker_numbers):
mapped_name = speaker_name_mapping.get(speaker_num, f"Speaker {speaker_num}")
print(f" Speaker {speaker_num} -> {mapped_name}")
# Map speakers to voice files using the provided speaker names
voice_samples = []
actual_speakers = []
# Get unique speaker numbers in order of first appearance
unique_speaker_numbers = []
seen = set()
for speaker_num in speaker_numbers:
if speaker_num not in seen:
unique_speaker_numbers.append(speaker_num)
seen.add(speaker_num)
for speaker_num in unique_speaker_numbers:
speaker_name = speaker_name_mapping.get(speaker_num, f"Speaker {speaker_num}")
voice_path = voice_mapper.get_voice_path(speaker_name)
voice_samples.append(voice_path)
actual_speakers.append(speaker_name)
print(f"Speaker {speaker_num} ('{speaker_name}') -> Voice: {os.path.basename(voice_path)}")
# Prepare data for model
full_script = '\n'.join(scripts)
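# Replace curly apostrophes with plain ASCII so the script text stays consistent for the tokenizer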
full_script = full_script.replace("’", "'")
# Load processor
print(f"Loading processor & model from {args.model_path}")
processor = VibeVoiceProcessor.from_pretrained(args.model_path)
# Load model
try:
model = VibeVoiceForConditionalGenerationInference.from_pretrained(
args.model_path,
torch_dtype=torch.bfloat16,
device_map='cuda',
attn_implementation='flash_attention_2' # flash_attention_2 is recommended
)
except Exception as e:
print(f"[ERROR] : {type(e).__name__}: {e}")
print(traceback.format_exc())
print("Error loading the model. Trying to use SDPA. However, note that only flash_attention_2 has been fully tested, and using SDPA may result in lower audio quality.")
model = VibeVoiceForConditionalGenerationInference.from_pretrained(
args.model_path,
torch_dtype=torch.bfloat16,
device_map='cuda',
attn_implementation='sdpa'
)
model.eval()
model.set_ddpm_inference_steps(num_steps=10)
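# Fewer DDPM steps generally mean faster decoding at some cost in audio quality; 10 matches the Gradio demo's default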
if hasattr(model.model, 'language_model'):
print(f"Language model attention: {model.model.language_model.config._attn_implementation}")
# Prepare inputs for the model
inputs = processor(
text=[full_script], # Wrap in list for batch processing
voice_samples=[voice_samples], # Wrap in list for batch processing
padding=True,
return_tensors="pt",
return_attention_mask=True,
)
print(f"Starting generation with cfg_scale: {args.cfg_scale}")
# Generate audio
start_time = time.time()
outputs = model.generate(
**inputs,
max_new_tokens=None,
cfg_scale=args.cfg_scale,
tokenizer=processor.tokenizer,
# generation_config={'do_sample': False, 'temperature': 0.95, 'top_p': 0.95, 'top_k': 0},
generation_config={'do_sample': False},
verbose=True,
)
generation_time = time.time() - start_time
print(f"Generation time: {generation_time:.2f} seconds")
# Calculate audio duration and additional metrics
if outputs.speech_outputs and outputs.speech_outputs[0] is not None:
# Assuming 24kHz sample rate (common for speech synthesis)
sample_rate = 24000
audio_samples = outputs.speech_outputs[0].shape[-1] if len(outputs.speech_outputs[0].shape) > 0 else len(outputs.speech_outputs[0])
audio_duration = audio_samples / sample_rate
rtf = generation_time / audio_duration if audio_duration > 0 else float('inf')
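# RTF < 1.0 means the audio was generated faster than real time (lower is better)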
print(f"Generated audio duration: {audio_duration:.2f} seconds")
print(f"RTF (Real Time Factor): {rtf:.2f}x")
else:
# Nothing was generated; stop here, since the save and summary steps below assume audio exists
print("No audio output generated")
return
# Calculate token metrics
input_tokens = inputs['input_ids'].shape[1] # Number of input tokens
output_tokens = outputs.sequences.shape[1] # Total tokens (input + generated)
generated_tokens = output_tokens - input_tokens
print(f"Prefilling tokens: {input_tokens}")
print(f"Generated tokens: {generated_tokens}")
print(f"Total tokens: {output_tokens}")
# Save output
txt_filename = os.path.splitext(os.path.basename(args.txt_path))[0]
output_path = os.path.join(args.output_dir, f"{txt_filename}_generated.wav")
os.makedirs(args.output_dir, exist_ok=True)
processor.save_audio(
outputs.speech_outputs[0], # First (and only) batch item
output_path=output_path,
)
print(f"Saved output to {output_path}")
# Print summary
print("\n" + "="*50)
print("GENERATION SUMMARY")
print("="*50)
print(f"Input file: {args.txt_path}")
print(f"Output file: {output_path}")
print(f"Speaker names: {args.speaker_names}")
print(f"Number of unique speakers: {len(set(speaker_numbers))}")
print(f"Number of segments: {len(scripts)}")
print(f"Prefilling tokens: {input_tokens}")
print(f"Generated tokens: {generated_tokens}")
print(f"Total tokens: {output_tokens}")
print(f"Generation time: {generation_time:.2f} seconds")
print(f"Audio duration: {audio_duration:.2f} seconds")
print(f"RTF (Real Time Factor): {rtf:.2f}x")
print("="*50)
if __name__ == "__main__":
main()
Speaker 1: Hello everyone, and welcome to the VibeVoice podcast channel. I'm your host, Linda, and today I want to share some very interesting and authentic Chinese expressions with you.
Speaker 1: In Chinese, when you want to say something is super easy, just a simple task, you can use the phrase "小菜一碟". It literally means "a small dish of food", but it means "a piece of cake". For example, if you want to say, "Adding and subtracting three-digit numbers is a piece of cake for me", you can say.
Speaker 1: 三位数的加减法对我来说小菜一碟.
Speaker 1: The next phrase we’re going to learn is “你开玩笑吧”. It's a very common way to express disbelief, like "Are you kidding me?" or "You must be joking". For instance, when you hear an unbelievable piece of news, such as your friend buying a T-shirt for 5,000 dollars, you can say,
Speaker 1: 你开玩笑吧, 你花五千块钱买了一件衣服.
Speaker 1: Next, let's learn a phrase for when you suddenly understand something, like a "lightbulb moment". In Chinese, you can say "恍然大悟". It means you suddenly "see the light". For example, when you finally grasp a difficult math concept that has confused you for days, you can say.
Speaker 1: 我困惑这个公式好几天了, 但现在我恍然大悟, 终于明白了.
Speaker 1: For our last one, when you want to say something is super easy, you can use a very vivid phrase: "闭着眼睛都能做". It literally means "can do it with one's eyes closed". For example, if you want to say, "He can use this software with his eyes closed", you can say.
Speaker 1: 这个软件他闭着眼都能用.
Speaker 1: Well, that’s all the time we have for today. Thank you for listening. Please subscribe to VibeVoice, where we share all the interesting things in this world with you.
Speaker 1: Generating long-form, multi-speaker conversational audio like podcasts poses significant challenges for traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking. This report presents VibeVoice, a novel model designed to synthesize long-form speech with multiple speakers by employing the next-token diffusion framework, a unified method for modeling continuous data by autoregressively generating latent vectors via diffusion.
Speaker 1: A core component of our approach is a continuous speech tokenizer operating at an ultra-low frame rate of 7.5 Hz. This tokenizer effectively preserves audio fidelity while significantly boosting computational efficiency for processing long sequences. This enables VibeVoice to synthesize long-form speech for up to 90 minutes (in a 64K context window length) with up to 4 speakers, capturing the authentic conversational "vibe" and surpassing all known open-source and closed-source dialogue models (for example, Gemini 2.5 Pro Preview TTS). Code and checkpoint are available now.
Speaker 1: Hello everyone, and welcome to the VibeVoice podcast. I’m your host, Linda, and today we're getting into one of the biggest debates in all of sports: who's the greatest basketball player of all time? I'm so excited to have Thomas here to talk about it with me.
Speaker 2: Thanks so much for having me, Linda. You're absolutely right—this question always brings out some seriously strong feelings.
Speaker 1: Okay, so let's get right into it. For me, it has to be Michael Jordan. Six trips to the Finals, six championships. That kind of perfection is just incredible.
Speaker 2: Oh man, the first thing that always pops into my head is that shot against the Cleveland Cavaliers back in '89. Jordan just rises, hangs in the air forever, and just… sinks it. I remember jumping off my couch and yelling, "Oh man, is that true? That's unbelievable!"
Speaker 1: Right?! That moment showed just how cold-blooded he was. And let's not forget the "flu game." He was so sick he could barely stand, but he still found a way to win.
Speaker 2: Yeah, that game was pure willpower. He just made winning feel so inevitable, like no matter how bad the situation looked, you just knew he'd figure it out.
Speaker 1: But then you have to talk about LeBron James. What always gets me is his longevity. I mean, twenty years and he's still playing at the highest level! It's insane.
Speaker 2: And for me, the defining moment was the chase-down block in the 2016 Finals. He did it for Cleveland, ending their 52-year championship drought. You know, he's basically the basketball equivalent of a Swiss Army knife, which is a big reason why he's the unquestionable vice-GOAT.
Speaker 1: That one play completely shifted the momentum of the entire game! It’s the kind of highlight people are going to be talking about forever.
Speaker 2: And that's the thing with LeBron—he's not just a scorer. He’s a passer, a rebounder, a leader. He influences the game in every single way.
Speaker 1: That’s so true. Jordan brought fear to his opponents, but LeBron brings this sense of trust. His teammates just know he's going to make the right play.
Speaker 2: What a great way to put it! They're two totally different kinds of greatness, but both are so incredibly effective.
Speaker 1: And then, of course, you have to talk about Kobe Bryant. To me, he was the one who carried Jordan's spirit into a new generation.
Speaker 2: Absolutely. Kobe was all about obsession. His Mamba Mentality was so intense, I bet he practiced free throws in his sleep.
Speaker 1: What I’ll always remember is his final game. Sixty points! What a way to go out. That was pure Kobe—competitive right up until the very last second.
Speaker 2: It felt like a farewell masterpiece. He gave everything he had to the game, and that night, he gave it one last time.
Speaker 1: And twenty years with a single team! That kind of loyalty is just so rare these days.
Speaker 2: It really is. That's what separates him. Jordan defined dominance, LeBron defined versatility, but Kobe brought both that fire and that incredible loyalty.
Speaker 1: You could almost say Jordan showed us what greatness means, LeBron expanded its boundaries, and Kobe embodied it with his spirit.
Speaker 2: Yes, exactly! Three different paths, but all with that same single-minded obsession with victory.
Speaker 1: And that's why this conversation is so much fun. Greatness doesn't have just one face—it comes in all different forms.
Speaker 2: It sure does. And we were lucky enough to witness all three.
Speaker 1: Hey, remember "See You Again"?
Speaker 2: Yeah… from Furious 7, right? That song always hits deep.
Speaker 1: Let me try to sing a part of it for you.
Speaker 1: "It's been a long day… without you, my friend. And I'll tell you all about it when I see you again…"
Speaker 2: Wow… that line. Every time.
Speaker 1: Yeah, and then this part always makes me think of the people I've lost.
Speaker 1: "We've come a long way… from where we began. Oh, I'll tell you all about it when I see you again…"
Speaker 2: It's beautiful, really. It's not just sad—it's like… hopeful.
Speaker 1: Right? Like no matter how far apart we are, there's still that promise.
Speaker 2: I think that's what made it the perfect farewell for Paul Walker.
Speaker 1: Yeah. And the rap verse? It hits differently too.
Speaker 1: "How can we not talk about family, when family's all that we got?"
Speaker 2: That line's deep. Makes you realize what really matters.
Speaker 1: Exactly. It's more than a song—it's a tribute.