- **November 21, 2025:** 🚀 We have supported the HunyuanVideo1.5 video generation model since Day 0. With the same number of GPUs, LightX2V delivers a speedup of more than 2x and can be deployed on GPUs with less memory (such as the 24GB RTX 4090). It also supports CFG/Ulysses parallelism, efficient offloading, TeaCache/MagCache, and more. We will soon publish related models on our [HuggingFace page](https://huggingface.co/lightx2v), including quantization, step distillation, and VAE distillation variants. Refer to [this](https://github.com/ModelTC/LightX2V/tree/main/scripts/hunyuan_video_15) for usage tutorials.
docker run --gpus all -itd --ipc=host --name [container_name] -v [mount_settings] --entrypoint /bin/bash [image_id]
```
3. Prepare the models
Please follow the instructions in the [HunyuanVideo1.5 GitHub repository](https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5/blob/main/checkpoints-download.md) to download and place the model files; a command-line download sketch is given below.
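For reference, a minimal download sketch using `huggingface-cli` (the repository id `tencent/HunyuanVideo-1.5` and the target directory are assumptions; confirm the actual repo id and folder layout in the linked instructions):
```bash
# Install the Hugging Face CLI and fetch the checkpoints.
# NOTE: the repo id and --local-dir below are assumed; check checkpoints-download.md for the real ones.
pip install -U "huggingface_hub[cli]"
huggingface-cli download tencent/HunyuanVideo-1.5 --local-dir ./ckpts/HunyuanVideo-1.5
```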
4. Run the script
```bash
# enter the docker container
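docker exec -it [container_name] /bin/bash  # [container_name] is the name chosen in the docker run step above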
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V/scripts/hunyuan_video_15
# set LightX2V path and model path in the script
bash run_hy15_t2v_480p.sh
```
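The run script expects the repository and checkpoint locations to be filled in near its top. A sketch of what that typically looks like (the variable names `lightx2v_path` and `model_path` are assumptions; open `run_hy15_t2v_480p.sh` to see the actual names):
```bash
# Hypothetical variable names; confirm them in run_hy15_t2v_480p.sh.
lightx2v_path=/path/to/LightX2V          # root of the cloned repository
model_path=/path/to/HunyuanVideo-1.5     # checkpoint directory prepared in step 3
```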
5. Check results
You can find the generated video files in the `save_results` folder.
6. Modify detailed configurations
You can refer to the config file pointed to by `--config_json` in the script and modify its parameters as needed.
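If you want to script such changes, one option is `jq` (the key name `seed` here is purely illustrative; use the keys that actually appear in the config file referenced by `--config_json`):
```bash
# Illustrative only: "seed" stands in for whatever parameter you want to change;
# replace config.json with the path passed via --config_json in the script.
jq '.seed = 42' config.json > config.tmp && mv config.tmp config.json
```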