**LightX2V** is a lightweight video generation inference framework engineered to deliver efficient, high-performance video synthesis. This unified platform integrates multiple state-of-the-art video generation inference techniques and supports diverse generation tasks, including text-to-video (T2V) and image-to-video (I2V), across different models. **X2V denotes the transformation of different input modalities (X, such as text or images) into video output (V).**
## 💡 How to Start

Please refer to our documentation: **[English Docs](https://lightx2v-en.readthedocs.io/en/latest/) | [中文文档](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest/)**.

## 🚀 Core Features
### 🎯 **Ultimate Performance Optimization**
- **🔥 SOTA Inference Speed**: Achieve **~15x** acceleration through step distillation and operator optimization
- **⚡️ Revolutionary 4-Step Distillation**: Compress the original 40-50 inference steps to just 4, with no CFG required
- **🛠️ Advanced Operator Support**: Integrated with cutting-edge operators including [Sage Attention](https://github.com/thu-ml/SageAttention), [Flash Attention](https://github.com/Dao-AILab/flash-attention), [Radial Attention](https://github.com/mit-han-lab/radial-attention), [q8-kernel](https://github.com/KONAKONA666/q8_kernels), [sgl-kernel](https://github.com/sgl-project/sglang/tree/main/sgl-kernel), and [vllm](https://github.com/vllm-project/vllm)
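If you want to verify which of these accelerated backends are importable in your environment before running inference, a quick check like the sketch below can help. This is purely illustrative: the package import names are assumptions based on the upstream projects and may differ depending on how each kernel library was installed; it does not reflect how LightX2V selects backends internally.

```python
# Illustrative check for optional accelerated-attention / kernel backends.
# Import names are assumptions from the upstream projects, not LightX2V's
# internal backend discovery.
import importlib.util

for pkg in ("flash_attn", "sageattention", "vllm", "sgl_kernel", "q8_kernels"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg:12s}: {'available' if found else 'not installed'}")
```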
### 💾 **Resource-Efficient Deployment**
- **💡 Breaking Hardware Barriers**: Run 14B models for 480P/720P video generation with only **8GB VRAM + 16GB RAM**
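As a rough pre-flight check against the 8GB VRAM guideline above, you can query your GPU with PyTorch. This is only a sketch; actual memory requirements depend on the model, resolution, and the offloading or quantization options you enable.

```python
# Rough pre-flight check against the ~8GB VRAM guideline (illustrative only).
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB VRAM")
    if total_gb < 8:
        print("Below the 8GB guideline; consider offloading and quantization options.")
else:
    print("No CUDA device detected.")
```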
For detailed performance metrics and comparisons, please refer to our [benchmark documentation](https://github.com/ModelTC/LightX2V/blob/main/docs/EN/source/getting_started/benchmark_source.md).
[Detailed Service Deployment Guide →](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/deploy_service.html)
## 📚 Technical Documentation
### 📖 **Method Tutorials**
- [Model Quantization](https://lightx2v-en.readthedocs.io/en/latest/method_tutorials/quantization.html) - Comprehensive guide to quantization strategies (a generic int8 sketch follows this list for background)
- [Gradio Deployment](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/deploy_gradio.html) - Web interface setup
- [Service Deployment](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/deploy_service.html) - Production API service deployment
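As background for the quantization tutorial linked above, the sketch below shows plain per-tensor symmetric int8 weight quantization. It is a generic illustration of the basic idea only, not LightX2V's actual quantization implementation; see the linked guide for the strategies the framework really uses.

```python
# Generic per-tensor symmetric int8 weight quantization (illustrative only;
# not LightX2V's actual implementation).
import torch

def quantize_int8(w: torch.Tensor):
    # Approximate w ≈ scale * q, with q stored as int8.
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(1024, 1024)
q, scale = quantize_int8(w)
print("max abs error:", (w - dequantize_int8(q, scale)).abs().max().item())
```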
## 🧾 Contributing Guidelines
We maintain code quality through a pre-commit hook that enforces consistent formatting across the project.
> [!TIP]
> **Setup Instructions:**
>
> 1. Install the required dependencies:
>
>    ```shell
>    pip install ruff pre-commit
>    ```
>
> 2. Run the following command before committing:
>
>    ```shell
>    pre-commit run --all-files
>    ```
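You can also run `pre-commit install` once to register the hook in your local clone, so the checks run automatically on every commit.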
We appreciate your contributions to making LightX2V better!
## 🤝 Acknowledgments
We extend our gratitude to the open-source repositories of all the models mentioned above, whose code and research inspired and contributed to the development of LightX2V. This framework builds upon the collective efforts of the open-source community.
## 🌟 Star History
[Star History Chart](https://star-history.com/#ModelTC/lightx2v&Timeline)
## ✏️ Citation
If you find LightX2V useful in your research, please consider citing our work:
```bibtex
@misc{lightx2v,
  author = {LightX2V Contributors},
  title = {LightX2V: Lightweight Video Generation Inference Framework},