Unverified Commit b4a1034a authored by Yang Yong (雍洋), committed by GitHub

Update Readme (#488)

parent 77ed54f9
@@ -20,7 +20,7 @@
## :fire: Latest News
- - **November 21, 2025:** 🚀 We support the HunyuanVideo1.5 video generation model from Day 0. With the same number of GPUs, LightX2V can deliver a speedup of more than 2x and supports deployment on lower-memory GPUs (such as the 24GB RTX 4090). It also supports CFG/Ulysses parallelism, efficient offloading, TeaCache/MagCache, and more. We will soon update our models on our [HuggingFace page](https://huggingface.co/lightx2v), including quantization, step distillation, VAE distillation, and other related models.
+ - **November 21, 2025:** 🚀 We support the HunyuanVideo1.5 video generation model from Day 0. With the same number of GPUs, LightX2V can achieve a speedup of more than 2x and supports deployment on lower-memory GPUs (such as the 24GB RTX 4090). It also supports CFG/Ulysses parallelism, efficient offloading, TeaCache/MagCache, and more. We will soon update our models on our [HuggingFace page](https://huggingface.co/lightx2v), including quantization, step distillation, VAE distillation, and other related models. Refer to [this page](https://github.com/ModelTC/LightX2V/tree/main/scripts/hunyuan_video_15) for a usage tutorial.
## 💡 Quick Start
@@ -20,7 +20,7 @@
## :fire: Latest News
- - **November 21, 2025:** 🚀 We support the HunyuanVideo1.5 video generation model from Day 0. With the same number of GPUs, LightX2V can deliver a speedup of more than 2x and supports deployment on lower-memory GPUs (such as the 24GB RTX 4090). It supports CFG/Ulysses parallelism, efficient offloading, TeaCache/MagCache, and other techniques, as well as deployment on domestic chips such as MetaX and Cambricon. We will soon update quantization, step distillation, VAE distillation, and other related models on our [HuggingFace page](https://huggingface.co/lightx2v).
+ - **November 21, 2025:** 🚀 We support the HunyuanVideo1.5 video generation model from Day 0. With the same number of GPUs, LightX2V can deliver a speedup of more than 2x and supports deployment on lower-memory GPUs (such as the 24GB RTX 4090). It supports CFG/Ulysses parallelism, efficient offloading, TeaCache/MagCache, and other techniques, as well as deployment on domestic chips such as MetaX and Cambricon. We will soon update quantization, step distillation, VAE distillation, and other related models on our [HuggingFace page](https://huggingface.co/lightx2v). Refer to [here](https://github.com/ModelTC/LightX2V/tree/main/scripts/hunyuan_video_15) for a usage tutorial.
## 💡 Quick Start
# HunyuanVideo1.5
## Quick Start
1. Prepare the Docker environment:
```bash
docker pull lightx2v/lightx2v:25111101-cu128
```
2. Run the container:
```bash
docker run --gpus all -itd --ipc=host --name [container_name] -v [mount_settings] --entrypoint /bin/bash [image_id]
```
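For reference, a concrete invocation might look like the sketch below; the container name and mount path are illustrative placeholders, and the image tag is the one pulled in step 1:
```bash
# Illustrative example; substitute your own container name and mount path
docker run --gpus all -itd --ipc=host \
  --name lightx2v_hy15 \
  -v /data/models:/workspace/models \
  --entrypoint /bin/bash \
  lightx2v/lightx2v:25111101-cu128
```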
3. Prepare the models
Please follow the instructions in the [HunyuanVideo1.5 GitHub repository](https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5/blob/main/checkpoints-download.md) to download and place the model files.
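If the checkpoints are hosted on Hugging Face, they can be fetched with `huggingface-cli`; the repo id and target directory below are assumptions, so confirm them against the linked download instructions:
```bash
# Assumed repo id and local path; verify against checkpoints-download.md
huggingface-cli download tencent/HunyuanVideo-1.5 \
  --local-dir ./ckpts/hunyuanvideo-1.5
```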
4. Run the script
```bash
# enter the docker container, e.g. docker exec -it [container_name] /bin/bash
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V/scripts/hunyuan_video_15
# edit run_hy15_t2v_480p.sh to set the LightX2V path and the model path
bash run_hy15_t2v_480p.sh
```
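The comment in the snippet above refers to variables near the top of the run script. As a hedged sketch (the variable names are assumptions; check the actual `run_hy15_t2v_480p.sh` for the real ones), the edit typically looks like:
```bash
# Hypothetical excerpt of run_hy15_t2v_480p.sh; variable names may differ
lightx2v_path=/path/to/LightX2V          # repo root cloned above
model_path=/path/to/HunyuanVideo-1.5     # checkpoints prepared in step 3
```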
5. Check results
You can find the generated video files in the `save_results` folder.
6. Modify detailed configurations
Refer to the config file referenced by `--config_json` in the script and adjust its parameters as needed.
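To adjust parameters without hand-editing, a small snippet like the one below can patch the JSON in place; the key names shown are hypothetical and must be checked against the actual config file:
```bash
# Patch the config passed via --config_json (key names are assumptions)
python - <<'EOF'
import json

path = "path/to/config.json"   # replace with the file from --config_json
with open(path) as f:
    cfg = json.load(f)

cfg["infer_steps"] = 30        # hypothetical key: number of denoising steps
cfg["seed"] = 42               # hypothetical key: fix the sampling seed

with open(path, "w") as f:
    json.dump(cfg, f, indent=2, ensure_ascii=False)
EOF
```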