# Initially taken from Github's Python gitignore file
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# tests and logs
tests/fixtures/cached_*_text.txt
logs/
lightning_logs/
lang_code_data/
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# vscode
.vs
.vscode
# Pycharm
.idea
# TF code
tensorflow_code
# Models
proc_data
# examples
runs
/runs_old
/wandb
/examples/runs
/examples/**/*.args
/examples/rag/sweep
# data
/data
serialization_dir
# emacs
*.*~
debug.env
# vim
.*.swp
#ctags
tags
# pre-commit
.pre-commit*
# .lock
*.lock
# DS_Store (MacOS)
.DS_Store
# ruff
.ruff_cache
# our proj
/output/
/outputs/
/checkpoint/
/checkpoints/
exp
.gradio/
MIT License
Copyright (c) 2025 Microsoft
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# VibeVoice
## Paper
`VibeVoice Technical Report`
- https://arxiv.org/abs/2508.19205
## Model Architecture
Traditional text-to-speech systems face several long-standing challenges: limited length of generated speech, insufficient multi-speaker support, and timbre drift and semantic breaks in long audio. VibeVoice-1.5B effectively addresses these pain points.
- Ultra-long speech generation: most earlier TTS models can only synthesize up to 60 minutes of speech, and audio quality typically degrades after 30 minutes. VibeVoice-1.5B can generate 90 minutes of high-quality speech in a single pass, opening new possibilities for long-form content such as audiobooks and podcasts.
- Multi-speaker support: the model can simulate natural turn-taking among up to 4 distinct speakers, well beyond the 2-speaker limit of earlier open-source models (e.g., SesameAILabs-CSM, HiggsAudio-V2).
- Exceptional compression efficiency: the model achieves a cumulative 3200x compression of 24 kHz raw audio, about 80x the compression efficiency of the mainstream Encodec model, while still preserving high-fidelity speech.
<div align=center>
<img src="./Figures/arch.png"/>
</div>
## Algorithm
VibeVoice-1.5B's capabilities come from combining several state-of-the-art techniques:
- Dual tokenizers working in tandem: the model introduces a dual Acoustic/Semantic tokenizer architecture.
  - The acoustic tokenizer uses a σ-VAE structure; it preserves voice characteristics while achieving extreme compression, reducing 24 kHz raw audio by a factor of 3200.
  - The semantic tokenizer is trained with a speech-recognition proxy task, preserving the semantics of the dialogue and resolving the traditional mismatch between timbre and semantics.
- Strong base model: the model is built on the 1.5B-parameter Qwen2.5 language model, enabling it to understand and process complex textual context.
- Diffusion decoder: on the decoding side, the model uses a 123M-parameter diffusion decoder that combines classifier-free guidance with the DPM-Solver algorithm, significantly improving audio quality and detail.
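To make the diffusion-decoder description concrete, here is a minimal, illustrative sketch of a classifier-free-guidance (CFG) denoising step. The names (`cfg_denoise_step`, `diffusion_head`, `toy_head`) are hypothetical and not part of the VibeVoice codebase, and the actual latent update would additionally go through a DPM-Solver step:
```python
def cfg_denoise_step(diffusion_head, latent, t, cond, guidance_scale=1.3):
    """One denoising step with classifier-free guidance (illustrative only).

    `diffusion_head(latent, t, cond)` is assumed to return a noise prediction;
    passing cond=None stands in for the unconditional branch.
    """
    eps_cond = diffusion_head(latent, t, cond)    # conditioned on the LLM hidden state
    eps_uncond = diffusion_head(latent, t, None)  # unconditional prediction
    # Classifier-free guidance: push the prediction away from the unconditional branch.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-in head so the sketch runs end to end; a real head is a neural network.
def toy_head(latent, t, cond):
    return latent * (0.9 if cond is not None else 1.1)

print(cfg_denoise_step(toy_head, latent=1.0, t=0, cond="llm_hidden_state"))
```
The `guidance_scale` here plays the same role as the default `--cfg_scale` of 1.3 in `demo/inference_from_file.py`.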
## Environment Setup
### Hardware Requirements
DCU model: K100_AI; nodes: 1; cards per node: 1.
Adjust `-v <path>`, `docker_name`, and `imageID` below according to your actual environment.
### Docker (Option 1)
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.4.1-ubuntu22.04-dtk25.04.1-py3.10
docker run -it --shm-size 200g --network=host --name {docker_name} --privileged --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro {imageID} bash
cd /your_code_path/VibeVoice_pytorch
pip install -e .
pip install peft==0.17.0
apt update && apt install ffmpeg -y
```
### Dockerfile (Option 2)
Usage of the provided Dockerfile:
```
docker build --no-cache -t vibevoice:latest .
docker run -it --shm-size 200g --network=host --name {docker_name} --privileged --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro {imageID} bash
cd /your_code_path/VibeVoice_pytorch
pip install -e .
pip install peft==0.17.0
apt update && apt install ffmpeg -y
```
### Anaconda (Option 3)
Detailed steps for a local setup and build are provided here, for example:
The DCU-specific deep learning libraries required by this project can be downloaded and installed from the [光合](https://developer.sourcefind.cn/tool/) developer community.
```
DTK driver: dtk25.04.1
python:python3.10
torch: 2.4.1+das.opt1.dtk25041
```
`Tip: the DTK driver, Python, and torch versions above are DCU-specific and must match each other exactly.`
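A quick sanity check of the environment (a minimal sketch; the exact version string is whatever your DCU torch build reports, e.g. the `das.opt1.dtk25041` suffix listed above):
```python
import torch

# On a correctly configured DCU environment the torch build should carry the DAS/DTK suffix
# shown above, and the accelerator should be visible through the torch.cuda interface.
print(torch.__version__)
print(torch.cuda.is_available(), torch.cuda.device_count())
```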
Install the remaining non-deep-learning dependencies as follows:
```
cd /your_code_path/VibeVoice_pytorch
pip install -e .
pip install numpy accelerate peft==0.17.0
apt update && apt install ffmpeg -y
```
## Dataset
None for now.
## Training
None for now.
## Inference
- Usage 1: Launch Gradio demo
```
# If you cannot reach the public internet, set an HF mirror first: export HF_ENDPOINT=https://hf-mirror.com
# For 1.5B model
python demo/gradio_demo.py --model_path microsoft/VibeVoice-1.5B --share
# For Large model
python demo/gradio_demo.py --model_path microsoft/VibeVoice-Large --share
```
- Usage 2: Inference from files directly
```
# We provide some LLM generated example scripts under demo/text_examples/ for demo
# 1 speaker
python demo/inference_from_file.py --model_path microsoft/VibeVoice-Large --txt_path demo/text_examples/1p_abs.txt --speaker_names Alice
# or more speakers
python demo/inference_from_file.py --model_path microsoft/VibeVoice-Large --txt_path demo/text_examples/2p_music.txt --speaker_names Alice Frank
```
## Results
- Gradio demo
<div align=center>
<img src="./Figures/results.png"/>
</div>
- `--txt_path demo/text_examples/1p_abs.txt`
![1p_abs_generated.wav](./Figures/1p_abs_generated.wav)
### Accuracy
Accuracy on DCU is consistent with GPU; inference framework: PyTorch.
## Application Scenarios
### Algorithm Category
`Speech synthesis`
### Key Application Industries
`Broadcast media, film & TV, animation, healthcare, smart home, education`
## Pretrained Weights
| Model | Context Length | Generation Length | Weight |
|-------|----------------|----------|----------|
| VibeVoice-1.5B | 64K | ~90 min | [HF link](https://huggingface.co/microsoft/VibeVoice-1.5B) |
| VibeVoice-Large| 32K | ~45 min | [HF link](https://huggingface.co/microsoft/VibeVoice-Large) |
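If you prefer to pre-download the weights rather than letting the demo scripts fetch them on first run, here is a minimal sketch using `huggingface_hub` (the local directory is only an example; combine with the `HF_ENDPOINT` mirror mentioned above if huggingface.co is unreachable):
```python
from huggingface_hub import snapshot_download

# Download the 1.5B checkpoint to a local directory (example path),
# then pass that directory to --model_path instead of the hub ID.
local_dir = snapshot_download(
    repo_id="microsoft/VibeVoice-1.5B",
    local_dir="./models/VibeVoice-1.5B",
)
print(local_dir)
```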
## Source Repository and Issue Feedback
- https://developer.sourcefind.cn/codes/modelzoo/vibevoice_pytorch
## References
- https://github.com/microsoft/VibeVoice
<div align="center">
## 🎙️ VibeVoice: A Frontier Long Conversational Text-to-Speech Model
[![Project Page](https://img.shields.io/badge/Project-Page-blue?logo=microsoft)](https://microsoft.github.io/VibeVoice)
[![Hugging Face](https://img.shields.io/badge/HuggingFace-Collection-orange?logo=huggingface)](https://huggingface.co/collections/microsoft/vibevoice-68a2ef24a875c44be47b034f)
[![Technical Report](https://img.shields.io/badge/Technical-Report-red?logo=adobeacrobatreader)](https://arxiv.org/pdf/2508.19205)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb)
[![Live Playground](https://img.shields.io/badge/Live-Playground-green?logo=gradio)](https://aka.ms/VibeVoice-Demo)
[![Colab](https://img.shields.io/badge/Run-Colab-orange?logo=googlecolab)](https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb)
</div>
<!-- <div align="center">
<img src="Figures/log.png" alt="VibeVoice Logo" width="200">
</div> -->
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="Figures/VibeVoice_logo_white.png">
<img src="Figures/VibeVoice_logo.png" alt="VibeVoice Logo" width="300">
</picture>
</div>
VibeVoice is a novel framework designed for generating **expressive**, **long-form**, **multi-speaker** conversational audio, such as podcasts, from text. It addresses significant challenges in traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking.
A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a [next-token diffusion](https://arxiv.org/abs/2412.08635) framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.
The model can synthesize speech up to **90 minutes** long with up to **4 distinct speakers**, surpassing the typical 1-2 speaker limits of many prior models.
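As a rough back-of-the-envelope check (our own arithmetic, not a figure from the report), the 7.5 Hz frame rate is what makes the 90-minute, 64K-context combination plausible; the exact token accounting (acoustic plus semantic streams and transcript tokens) is more involved:
```python
# Rough estimate of the speech-token budget for a 90-minute generation.
frame_rate_hz = 7.5                      # continuous speech tokens per second
frames_90_min = frame_rate_hz * 90 * 60  # = 40500 acoustic frames
print(frames_90_min)
# ~40.5K speech-frame positions, leaving room for the transcript tokens
# inside a 64K-token context window.
```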
<p align="left">
<img src="Figures/MOS-preference.png" alt="MOS Preference Results" height="260px">
<img src="Figures/VibeVoice.jpg" alt="VibeVoice Overview" height="250px" style="margin-right: 10px;">
</p>
### 🔥 News
- **[2025-08-26] 🎉 We Open Source the [VibeVoice-Large](https://huggingface.co/microsoft/VibeVoice-Large) model weights!**
- **[2025-08-28] 🎉 We provide a [Colab](https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb) script for easy access to our model. Due to GPU memory limitations, only VibeVoice-1.5B is supported.**
### 📋 TODO
- [ ] Merge models into official Hugging Face repository ([PR](https://github.com/huggingface/transformers/pull/40546))
- [ ] Release example training code and documentation
- [ ] VibePod: End-to-end solution that creates podcasts from documents, webpages, or even a simple topic.
### 🎵 Demo Examples
**Video Demo**
We produced this video with [Wan2.2](https://github.com/Wan-Video/Wan2.2). We sincerely appreciate the Wan-Video team for their great work.
**English**
<div align="center">
https://github.com/user-attachments/assets/0967027c-141e-4909-bec8-091558b1b784
</div>
**Chinese**
<div align="center">
https://github.com/user-attachments/assets/322280b7-3093-4c67-86e3-10be4746c88f
</div>
**Cross-Lingual**
<div align="center">
https://github.com/user-attachments/assets/838d8ad9-a201-4dde-bb45-8cd3f59ce722
</div>
**Spontaneous Singing**
<div align="center">
https://github.com/user-attachments/assets/6f27a8a5-0c60-4f57-87f3-7dea2e11c730
</div>
**Long Conversation with 4 people**
<div align="center">
https://github.com/user-attachments/assets/a357c4b6-9768-495c-a576-1618f6275727
</div>
For more examples, see the [Project Page](https://microsoft.github.io/VibeVoice).
Try it on [Colab](https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb) or [Demo](https://aka.ms/VibeVoice-Demo).
## Models
| Model | Context Length | Generation Length | Weight |
|-------|----------------|----------|----------|
| VibeVoice-0.5B-Streaming | - | - | On the way |
| VibeVoice-1.5B | 64K | ~90 min | [HF link](https://huggingface.co/microsoft/VibeVoice-1.5B) |
| VibeVoice-Large| 32K | ~45 min | [HF link](https://huggingface.co/microsoft/VibeVoice-Large) |
## Installation
We recommend using the NVIDIA Deep Learning Container to manage the CUDA environment.
1. Launch docker
```bash
# NVIDIA PyTorch Container 24.07 / 24.10 / 24.12 verified.
# Later versions are also compatible.
sudo docker run --privileged --net=host --ipc=host --ulimit memlock=-1:-1 --ulimit stack=-1:-1 --gpus all --rm -it nvcr.io/nvidia/pytorch:24.07-py3
## If flash attention is not included in your docker environment, you need to install it manually
## Refer to https://github.com/Dao-AILab/flash-attention for installation instructions
# pip install flash-attn --no-build-isolation
```
2. Install from GitHub
```bash
git clone https://github.com/microsoft/VibeVoice.git
cd VibeVoice/
pip install -e .
```
## Usage
### 🚨 Tips
We have observed that users may encounter occasional instability when synthesizing Chinese speech. We recommend:
- Using English punctuation even for Chinese text, preferably only commas and periods.
- Using the Large model variant, which is considerably more stable.
- If the generated voice speaks too fast, try splitting your text into multiple speaker turns that reuse the same speaker label (see the sketch below).
We'd like to thank [PsiPi](https://huggingface.co/PsiPi) for sharing an interesting approach to emotion control. Details can be found in [this discussion](https://huggingface.co/microsoft/VibeVoice-1.5B/discussions/12).
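The two text-side workarounds above can be scripted. Below is a minimal sketch (the helper names and the input file are ours, not part of VibeVoice) that swaps common Chinese punctuation for English punctuation and splits one long passage into several turns that reuse the same speaker label:
```python
import re

CN_TO_EN_PUNCT = {"，": ", ", "。": ". ", "！": "! ", "？": "? ", "：": ": ", "；": "; "}

def normalize_punctuation(text: str) -> str:
    """Replace common Chinese punctuation with English punctuation."""
    for cn, en in CN_TO_EN_PUNCT.items():
        text = text.replace(cn, en)
    return text

def chunk_into_turns(text: str, speaker: str = "Speaker 1", sentences_per_turn: int = 3) -> str:
    """Split a long passage into shorter turns with the same speaker label to slow the pacing."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    turns = [
        f"{speaker}: " + " ".join(sentences[i:i + sentences_per_turn])
        for i in range(0, len(sentences), sentences_per_turn)
    ]
    return "\n".join(turns)

with open("my_long_text.txt", encoding="utf-8") as f:   # any plain-text script of your own
    script = chunk_into_turns(normalize_punctuation(f.read()))
print(script)  # save this as the --txt_path input for demo/inference_from_file.py
```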
### Usage 1: Launch Gradio demo
```bash
apt update && apt install ffmpeg -y # for demo
# For 1.5B model
python demo/gradio_demo.py --model_path microsoft/VibeVoice-1.5B --share
# For Large model
python demo/gradio_demo.py --model_path microsoft/VibeVoice-Large --share
```
### Usage 2: Inference from files directly
```bash
# We provide some LLM generated example scripts under demo/text_examples/ for demo
# 1 speaker
python demo/inference_from_file.py --model_path microsoft/VibeVoice-Large --txt_path demo/text_examples/1p_abs.txt --speaker_names Alice
# or more speakers
python demo/inference_from_file.py --model_path microsoft/VibeVoice-Large --txt_path demo/text_examples/2p_music.txt --speaker_names Alice Frank
```
## FAQ
#### Q1: Is this a pretrained model?
**A:** Yes, it's a pretrained model without any post-training or benchmark-specific optimizations. In a way, this makes VibeVoice very versatile and fun to use.
#### Q2: Randomly triggered sounds / music / BGM.
**A:** As you can see from our demo page, the background music or sounds are spontaneous. This means we can't directly control whether they are generated or not. The model is content-aware, and these sounds are triggered based on the input text and the chosen voice prompt.
Here are a few things we've noticed:
* If the voice prompt you use contains background music, the generated speech is more likely to have it as well. (The Large model is quite stable and effective at this—give it a try on the demo!)
* If the voice prompt is clean (no BGM), but the input text includes introductory words or phrases like "Welcome to," "Hello," or "However," background music might still appear.
* The choice of speaker voice matters: the "Alice" preset used to trigger random BGM more often than other voices (this has since been fixed).
* In other scenarios, the Large model is more stable and has a lower probability of generating unexpected background music.
In fact, we intentionally decided not to denoise our training data because we think it's an interesting feature for BGM to show up at just the right moment. You can think of it as a little easter egg we left for you.
#### Q3: Text normalization?
**A:** We don't perform any text normalization during training or inference. Our philosophy is that a large language model should be able to handle complex user inputs on its own. However, due to the nature of the training data, you might still run into some corner cases.
#### Q4: Singing Capability.
**A:** Our training data **doesn't contain any music data**. The ability to sing is an emergent capability of the model (which is why it might sound off-key, even on a famous song like 'See You Again'). (The Large model is more likely to exhibit this than the 1.5B).
#### Q5: Some Chinese pronunciation errors.
**A:** The volume of Chinese data in our training set is significantly smaller than the English data. Additionally, certain special characters (e.g., Chinese quotation marks) may occasionally cause pronunciation issues.
#### Q6: Instability of cross-lingual transfer.
**A:** The model does exhibit strong cross-lingual transfer capabilities, including the preservation of accents, but its performance can be unstable. This is an emergent ability of the model that we have not specifically optimized. It's possible that a satisfactory result can be achieved through repeated sampling.
## Risks and limitations
While efforts have been made to optimize it through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model (specifically, Qwen2.5 1.5b in this release).
Potential for Deepfakes and Disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.
English and Chinese only: Transcripts in languages other than English or Chinese may result in unexpected audio outputs.
Non-Speech Audio: The model focuses solely on speech synthesis and does not handle background noise, music, or other sound effects.
Overlapping Speech: The current model does not explicitly model or generate overlapping speech segments in conversations.
We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.
<!-- BEGIN MICROSOFT SECURITY.MD V1.0.0 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which
includes all source code repositories in our GitHub organizations.
**Please do not report security vulnerabilities through public GitHub issues.**
For security reporting information, locations, contact information, and policies,
please review the latest guidance for Microsoft repositories at
[https://aka.ms/SECURITY.md](https://aka.ms/SECURITY.md).
<!-- END MICROSOFT SECURITY.MD BLOCK -->
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# VibeVoice Colab — T4 Quickstart (1.5B)\n",
"\n",
"This notebook provides a quickstart guide to run VibeVoice on Colab with T4. The T4 GPU can only support the 1.5B model due to memory limitations. Please note that T4 can only use SDPA instead of flash_attention_2, which may result in unstable and lower audio quality. For the best TTS experience, we recommend trying the 7B model on a more powerful GPU.\n",
"\n",
"## Risks and Limitations\n",
"\n",
"While efforts have been made to optimize it through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model (specifically, Qwen2.5 1.5b in this release). Potential for Deepfakes and Disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content."
],
"metadata": {
"id": "WvIaUJD2y0yU"
},
"id": "WvIaUJD2y0yU"
},
{
"cell_type": "markdown",
"source": [
"## Step 1: Setup Environment"
],
"metadata": {
"id": "e8fTKYGx7DZk"
},
"id": "e8fTKYGx7DZk"
},
{
"cell_type": "code",
"source": [
"# Check for T4 GPU\n",
"import torch\n",
"if torch.cuda.is_available() and \"T4\" in torch.cuda.get_device_name(0):\n",
" print(\"✅ T4 GPU detected\")\n",
"else:\n",
" print(\"\"\"\n",
" ⚠️ WARNING: T4 GPU not detected\n",
"\n",
" The recommended runtime for this Colab notebook is \"T4 GPU\".\n",
"\n",
" To change the runtime type:\n",
"\n",
" 1. Click on \"Runtime\" in the top navigation menu\n",
" 2. Click on \"Change runtime type\"\n",
" 3. Select \"T4 GPU\"\n",
" 4. Click \"OK\" if a \"Disconnect and delete runtime\" window appears\n",
" 5. Click on \"Save\"\n",
"\n",
" \"\"\")\n",
"\n",
"# Clone the VibeVoice repository\n",
"![ -d /content/VibeVoice ] || git clone --quiet --branch main --depth 1 https://github.com/microsoft/VibeVoice.git /content/VibeVoice\n",
"print(\"✅ Cloned VibeVoice repository\")\n",
"\n",
"# Install project dependencies\n",
"!uv pip --quiet install --system -e /content/VibeVoice\n",
"print(\"✅ Installed dependencies\")\n",
"\n",
"# Download model (~3 minutes)\n",
"!HF_XET_HIGH_PERFORMANCE=1 hf download microsoft/VibeVoice-1.5B --quiet --local-dir /content/models/VibeVoice-1.5B > /dev/null\n",
"print(\"✅ Downloaded model: microsoft/VibeVoice-1.5B\")\n"
],
"metadata": {
"id": "4wxJ6QHM-ZOb"
},
"id": "4wxJ6QHM-ZOb",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Step 2: Create Transcript"
],
"metadata": {
"id": "pgKlV7153Ifi"
},
"id": "pgKlV7153Ifi"
},
{
"cell_type": "code",
"source": [
"%%writefile /content/my_transcript.txt\n",
"Speaker 1: Can I try VibeVoice with my own example?\n",
"Speaker 2: Of course! VibeVoice is open-source, built to benefit everyone - you're welcome to try it out.\n"
],
"metadata": {
"id": "Yc1N9EHswFxA"
},
"id": "Yc1N9EHswFxA",
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Step 3: Generate Audio"
],
"metadata": {
"id": "MBCC6s-F6_hP"
},
"id": "MBCC6s-F6_hP"
},
{
"cell_type": "code",
"source": [
"# Run Python script to generate audio from transcript\n",
"!python /content/VibeVoice/demo/inference_from_file.py \\\n",
" --model_path /content/models/VibeVoice-1.5B \\\n",
" --txt_path /content/my_transcript.txt \\\n",
" --speaker_names Alice Frank\n",
"\n",
"# Display audio controls\n",
"from IPython.display import Audio\n",
"Audio(\"/content/outputs/my_transcript_generated.wav\")\n"
],
"metadata": {
"id": "dYWsLJ-n0Npm"
},
"id": "dYWsLJ-n0Npm",
"execution_count": null,
"outputs": []
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": [],
"machine_shape": "hm",
"name": "VibeVoice_Colab.ipynb",
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
import argparse
import os
import re
import traceback
from typing import List, Tuple, Union, Dict, Any
import time
import torch
from vibevoice.modular.modeling_vibevoice_inference import VibeVoiceForConditionalGenerationInference
from vibevoice.processor.vibevoice_processor import VibeVoiceProcessor
from transformers.utils import logging
logging.set_verbosity_info()
logger = logging.get_logger(__name__)
class VoiceMapper:
    """Maps speaker names to voice file paths"""

    def __init__(self):
        self.setup_voice_presets()

        # change name according to our preset wav file
        new_dict = {}
        for name, path in self.voice_presets.items():
            if '_' in name:
                name = name.split('_')[0]
            if '-' in name:
                name = name.split('-')[-1]
            new_dict[name] = path
        self.voice_presets.update(new_dict)
        # print(list(self.voice_presets.keys()))

    def setup_voice_presets(self):
        """Setup voice presets by scanning the voices directory."""
        voices_dir = os.path.join(os.path.dirname(__file__), "voices")

        # Check if voices directory exists
        if not os.path.exists(voices_dir):
            print(f"Warning: Voices directory not found at {voices_dir}")
            self.voice_presets = {}
            self.available_voices = {}
            return

        # Scan for all WAV files in the voices directory
        self.voice_presets = {}

        # Get all .wav files in the voices directory
        wav_files = [f for f in os.listdir(voices_dir)
                     if f.lower().endswith('.wav') and os.path.isfile(os.path.join(voices_dir, f))]

        # Create dictionary with filename (without extension) as key
        for wav_file in wav_files:
            # Remove .wav extension to get the name
            name = os.path.splitext(wav_file)[0]
            # Create full path
            full_path = os.path.join(voices_dir, wav_file)
            self.voice_presets[name] = full_path

        # Sort the voice presets alphabetically by name for better UI
        self.voice_presets = dict(sorted(self.voice_presets.items()))

        # Filter out voices that don't exist (this is now redundant but kept for safety)
        self.available_voices = {
            name: path for name, path in self.voice_presets.items()
            if os.path.exists(path)
        }

        print(f"Found {len(self.available_voices)} voice files in {voices_dir}")
        print(f"Available voices: {', '.join(self.available_voices.keys())}")

    def get_voice_path(self, speaker_name: str) -> str:
        """Get voice file path for a given speaker name"""
        # First try exact match
        if speaker_name in self.voice_presets:
            return self.voice_presets[speaker_name]

        # Try partial matching (case insensitive)
        speaker_lower = speaker_name.lower()
        for preset_name, path in self.voice_presets.items():
            if preset_name.lower() in speaker_lower or speaker_lower in preset_name.lower():
                return path

        # Default to first voice if no match found
        default_voice = list(self.voice_presets.values())[0]
        print(f"Warning: No voice preset found for '{speaker_name}', using default voice: {default_voice}")
        return default_voice
def parse_txt_script(txt_content: str) -> Tuple[List[str], List[str]]:
    """
    Parse txt script content and extract speakers and their text
    Fixed pattern: Speaker 1, Speaker 2, Speaker 3, Speaker 4
    Returns: (scripts, speaker_numbers)
    """
    lines = txt_content.strip().split('\n')
    scripts = []
    speaker_numbers = []

    # Pattern to match "Speaker X:" format where X is a number
    speaker_pattern = r'^Speaker\s+(\d+):\s*(.*)$'

    current_speaker = None
    current_text = ""

    for line in lines:
        line = line.strip()
        if not line:
            continue

        match = re.match(speaker_pattern, line, re.IGNORECASE)
        if match:
            # If we have accumulated text from previous speaker, save it
            if current_speaker and current_text:
                scripts.append(f"Speaker {current_speaker}: {current_text.strip()}")
                speaker_numbers.append(current_speaker)

            # Start new speaker
            current_speaker = match.group(1).strip()
            current_text = match.group(2).strip()
        else:
            # Continue text for current speaker
            if current_text:
                current_text += " " + line
            else:
                current_text = line

    # Don't forget the last speaker
    if current_speaker and current_text:
        scripts.append(f"Speaker {current_speaker}: {current_text.strip()}")
        speaker_numbers.append(current_speaker)

    return scripts, speaker_numbers
def parse_args():
    parser = argparse.ArgumentParser(description="VibeVoice Processor TXT Input Test")
    parser.add_argument(
        "--model_path",
        type=str,
        default="microsoft/VibeVoice-1.5b",
        help="Path to the HuggingFace model directory",
    )
    parser.add_argument(
        "--txt_path",
        type=str,
        default="demo/text_examples/1p_abs.txt",
        help="Path to the txt file containing the script",
    )
    parser.add_argument(
        "--speaker_names",
        type=str,
        nargs='+',
        default='Andrew',
        help="Speaker names in order (e.g., --speaker_names Andrew Ava 'Bill Gates')",
    )
    parser.add_argument(
        "--output_dir",
        type=str,
        default="./outputs",
        help="Directory to save output audio files",
    )
    parser.add_argument(
        "--device",
        type=str,
        default="cuda" if torch.cuda.is_available() else "cpu",
        help="Device for tensor tests",
    )
    parser.add_argument(
        "--cfg_scale",
        type=float,
        default=1.3,
        help="CFG (Classifier-Free Guidance) scale for generation (default: 1.3)",
    )
    return parser.parse_args()
def main():
    args = parse_args()

    # Initialize voice mapper
    voice_mapper = VoiceMapper()

    # Check if txt file exists
    if not os.path.exists(args.txt_path):
        print(f"Error: txt file not found: {args.txt_path}")
        return

    # Read and parse txt file
    print(f"Reading script from: {args.txt_path}")
    with open(args.txt_path, 'r', encoding='utf-8') as f:
        txt_content = f.read()

    # Parse the txt content to get speaker numbers
    scripts, speaker_numbers = parse_txt_script(txt_content)
    if not scripts:
        print("Error: No valid speaker scripts found in the txt file")
        return

    print(f"Found {len(scripts)} speaker segments:")
    for i, (script, speaker_num) in enumerate(zip(scripts, speaker_numbers)):
        print(f"  {i+1}. Speaker {speaker_num}")
        print(f"     Text preview: {script[:100]}...")

    # Map speaker numbers to provided speaker names
    speaker_name_mapping = {}
    speaker_names_list = args.speaker_names if isinstance(args.speaker_names, list) else [args.speaker_names]
    for i, name in enumerate(speaker_names_list, 1):
        speaker_name_mapping[str(i)] = name

    print(f"\nSpeaker mapping:")
    for speaker_num in set(speaker_numbers):
        mapped_name = speaker_name_mapping.get(speaker_num, f"Speaker {speaker_num}")
        print(f"  Speaker {speaker_num} -> {mapped_name}")

    # Map speakers to voice files using the provided speaker names
    voice_samples = []
    actual_speakers = []

    # Get unique speaker numbers in order of first appearance
    unique_speaker_numbers = []
    seen = set()
    for speaker_num in speaker_numbers:
        if speaker_num not in seen:
            unique_speaker_numbers.append(speaker_num)
            seen.add(speaker_num)

    for speaker_num in unique_speaker_numbers:
        speaker_name = speaker_name_mapping.get(speaker_num, f"Speaker {speaker_num}")
        voice_path = voice_mapper.get_voice_path(speaker_name)
        voice_samples.append(voice_path)
        actual_speakers.append(speaker_name)
        print(f"Speaker {speaker_num} ('{speaker_name}') -> Voice: {os.path.basename(voice_path)}")

    # Prepare data for model
    full_script = '\n'.join(scripts)
    full_script = full_script.replace("’", "'")

    # Load processor
    print(f"Loading processor & model from {args.model_path}")
    processor = VibeVoiceProcessor.from_pretrained(args.model_path)

    # Load model
    try:
        model = VibeVoiceForConditionalGenerationInference.from_pretrained(
            args.model_path,
            torch_dtype=torch.bfloat16,
            device_map='cuda',
            attn_implementation='flash_attention_2'  # flash_attention_2 is recommended
        )
    except Exception as e:
        print(f"[ERROR] : {type(e).__name__}: {e}")
        print(traceback.format_exc())
        print("Error loading the model. Trying to use SDPA. However, note that only flash_attention_2 has been fully tested, and using SDPA may result in lower audio quality.")
        model = VibeVoiceForConditionalGenerationInference.from_pretrained(
            args.model_path,
            torch_dtype=torch.bfloat16,
            device_map='cuda',
            attn_implementation='sdpa'
        )
    model.eval()
    model.set_ddpm_inference_steps(num_steps=10)

    if hasattr(model.model, 'language_model'):
        print(f"Language model attention: {model.model.language_model.config._attn_implementation}")

    # Prepare inputs for the model
    inputs = processor(
        text=[full_script],  # Wrap in list for batch processing
        voice_samples=[voice_samples],  # Wrap in list for batch processing
        padding=True,
        return_tensors="pt",
        return_attention_mask=True,
    )
    print(f"Starting generation with cfg_scale: {args.cfg_scale}")

    # Generate audio
    start_time = time.time()
    outputs = model.generate(
        **inputs,
        max_new_tokens=None,
        cfg_scale=args.cfg_scale,
        tokenizer=processor.tokenizer,
        # generation_config={'do_sample': False, 'temperature': 0.95, 'top_p': 0.95, 'top_k': 0},
        generation_config={'do_sample': False},
        verbose=True,
    )
    generation_time = time.time() - start_time
    print(f"Generation time: {generation_time:.2f} seconds")

    # Calculate audio duration and additional metrics
    if outputs.speech_outputs and outputs.speech_outputs[0] is not None:
        # Assuming 24kHz sample rate (common for speech synthesis)
        sample_rate = 24000
        audio_samples = outputs.speech_outputs[0].shape[-1] if len(outputs.speech_outputs[0].shape) > 0 else len(outputs.speech_outputs[0])
        audio_duration = audio_samples / sample_rate
        rtf = generation_time / audio_duration if audio_duration > 0 else float('inf')

        print(f"Generated audio duration: {audio_duration:.2f} seconds")
        print(f"RTF (Real Time Factor): {rtf:.2f}x")
    else:
        print("No audio output generated")
        # Without audio there is nothing to save or summarize, so stop here.
        return

    # Calculate token metrics
    input_tokens = inputs['input_ids'].shape[1]  # Number of input tokens
    output_tokens = outputs.sequences.shape[1]  # Total tokens (input + generated)
    generated_tokens = output_tokens - input_tokens
    print(f"Prefilling tokens: {input_tokens}")
    print(f"Generated tokens: {generated_tokens}")
    print(f"Total tokens: {output_tokens}")

    # Save output
    txt_filename = os.path.splitext(os.path.basename(args.txt_path))[0]
    output_path = os.path.join(args.output_dir, f"{txt_filename}_generated.wav")
    os.makedirs(args.output_dir, exist_ok=True)

    processor.save_audio(
        outputs.speech_outputs[0],  # First (and only) batch item
        output_path=output_path,
    )
    print(f"Saved output to {output_path}")

    # Print summary
    print("\n" + "="*50)
    print("GENERATION SUMMARY")
    print("="*50)
    print(f"Input file: {args.txt_path}")
    print(f"Output file: {output_path}")
    print(f"Speaker names: {args.speaker_names}")
    print(f"Number of unique speakers: {len(set(speaker_numbers))}")
    print(f"Number of segments: {len(scripts)}")
    print(f"Prefilling tokens: {input_tokens}")
    print(f"Generated tokens: {generated_tokens}")
    print(f"Total tokens: {output_tokens}")
    print(f"Generation time: {generation_time:.2f} seconds")
    print(f"Audio duration: {audio_duration:.2f} seconds")
    print(f"RTF (Real Time Factor): {rtf:.2f}x")
    print("="*50)


if __name__ == "__main__":
    main()
Speaker 1: Hello everyone, and welcome to the VibeVoice podcast channel. I'm your host, Linda, and today I want to share some very interesting and authentic Chinese expressions with you.
Speaker 1: In Chinese, when you want to say something is super easy, just a simple task, you can use the phrase "小菜一碟". It literally means "a small dish of food", but it means "a piece of cake". For example, if you want to say, "Adding and subtracting three-digit numbers is a piece of cake for me", you can say.
Speaker 1: 三位数的加减法对我来说小菜一碟.
Speaker 1: The next phrase we’re going to learn is “你开玩笑吧”. It's a very common way to express disbelief, like "Are you kidding me?" or "You must be joking". For instance, when you hear an unbelievable piece of news such as your friend bought a T-shirt for 5,000 dollars, you can say,
Speaker 1: 你开玩笑吧, 你花五千块钱买了一件衣服.
Speaker 1: Next, let's learn a phrase for when you suddenly understand something, like a "lightbulb moment". In Chinese, you can say "恍然大悟". It means you suddenly "see the light". For example, when you finally grasp a difficult math concept that has confused you for days, you can say.
Speaker 1: 我困惑这个公式好几天了, 但现在我恍然大悟, 终于明白了.
Speaker 1: For our last one, when you want to say something is super easy, you can use a very vivid phrase: "闭着眼睛都能做". It literally means "can do it with one's eyes closed". For example, if you want to say, "He can use this software with his eyes closed", you can say.
Speaker 1: 这个软件他闭着眼都能用.
Speaker 1: Well, that’s all the time we have for today. Thank you for listening. Please subscribe to VibeVoice, where we share all the interesting things in this world with you.
Speaker 1: Generating long-form, multi-speaker conversational audio like podcasts poses significant challenges for traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking. This report presents VibeVoice, a novel model designed to synthesize long-form speech with multiple speakers by employing the next-token diffusion framework, a unified method for modeling continuous data by autoregressively generating latent vectors via diffusion.
Speaker 1: A core component of our approach is a continuous speech tokenizer operating at an ultra-low frame rate of 7.5 Hz. This tokenizer effectively preserves audio fidelity while significantly boosting computational efficiency for processing long sequences. This enables VibeVoice to synthesize long-form speech for up to 90 minutes (in a 64K context window length) with up to 4 speakers, capturing the authentic conversational "vibe" and surpassing all known open-source and closed-source dialogue models (for example, Gemini 2.5 Pro Preview TTS). Code and checkpoint are available now.
Speaker 1: Hello everyone, and welcome to the VibeVoice podcast. I’m your host, Linda, and today we're getting into one of the biggest debates in all of sports: who's the greatest basketball player of all time? I'm so excited to have Thomas here to talk about it with me.
Speaker 2: Thanks so much for having me, Linda. You're absolutely right—this question always brings out some seriously strong feelings.
Speaker 1: Okay, so let's get right into it. For me, it has to be Michael Jordan. Six trips to the Finals, six championships. That kind of perfection is just incredible.
Speaker 2: Oh man, the first thing that always pops into my head is that shot against the Cleveland Cavaliers back in '89. Jordan just rises, hangs in the air forever, and just… sinks it. I remember jumping off my couch and yelling, "Oh man, is that true? That's Unbelievable!"
Speaker 1: Right?! That moment showed just how cold-blooded he was. And let's not forget the "flu game." He was so sick he could barely stand, but he still found a way to win.
Speaker 2: Yeah, that game was pure willpower. He just made winning feel so inevitable, like no matter how bad the situation looked, you just knew he'd figure it out.
Speaker 1: But then you have to talk about LeBron James. What always gets me is his longevity. I mean, twenty years and he's still playing at the highest level! It's insane.
Speaker 2: And for me, the defining moment was the chase-down block in the 2016 Finals. He did it for Cleveland, ending their 52-year championship drought. You know, he's basically the basketball equivalent of a Swiss Army knife, which is a big reason why he's the unquestionable vice goat.
Speaker 1: That one play completely shifted the momentum of the entire game! It’s the kind of highlight people are going to be talking about forever.
Speaker 2: And that's the thing with LeBron—he's not just a scorer. He’s a passer, a rebounder, a leader. He influences the game in every single way.
Speaker 1: That’s so true. Jordan brought fear to his opponents, but LeBron brings this sense of trust. His teammates just know he's going to make the right play.
Speaker 2: What a great way to put it! They're two totally different kinds of greatness, but both are so incredibly effective.
Speaker 1: And then, of course, you have to talk about Kobe Bryant. To me, he was the one who carried Jordan's spirit into a new generation.
Speaker 2: Absolutely. Kobe was all about obsession. His Mamba Mentality was so intense, I bet he practiced free throws in his sleep.
Speaker 1: What I’ll always remember is his final game. Sixty points! What a way to go out. That was pure Kobe—competitive right up until the very last second.
Speaker 2: It felt like a farewell masterpiece. He gave everything he had to the game, and that night, he gave it one last time.
Speaker 1: And twenty years with a single team! That kind of loyalty is just so rare these days.
Speaker 2: It really is. That's what separates him. Jordan defined dominance, LeBron defined versatility, but Kobe brought both that fire and that incredible loyalty.
Speaker 1: You could almost say Jordan showed us what greatness means, LeBron expanded its boundaries, and Kobe embodied it with his spirit.
Speaker 2: Yes, exactly! Three different paths, but all with that same single-minded obsession with victory.
Speaker 1: And that's why this conversation is so much fun. Greatness doesn't have just one face—it comes in all different forms.
Speaker 2: It sure does. And we were lucky enough to witness all three.
Speaker 1: Hey, remember "See You Again"?
Speaker 2: Yeah… from Furious 7, right? That song always hits deep.
Speaker 1: Let me try to sing a part of it for you.
Speaker 1: "It's been a long day… without you, my friend. And I'll tell you all about it when I see you again…"
Speaker 2: Wow… that line. Every time.
Speaker 1: Yeah, and then this part always makes me think of the people I've lost.
Speaker 1: "We've come a long way… from where we began. Oh, I'll tell you all about it when I see you again…"
Speaker 2: It's beautiful, really. It's not just sad—it's like… hopeful.
Speaker 1: Right? Like no matter how far apart we are, there's still that promise.
Speaker 2: I think that's what made it the perfect farewell for Paul Walker.
Speaker 1: Yeah. And the rap verse? It hits differently too.
Speaker 1: "How can we not talk about family, when family's all that we got?"
Speaker 2: That line's deep. Makes you realize what really matters.
Speaker 1: Exactly. It's more than a song—it's a tribute.