Lightx2v is a lightweight video inference and generation engine that provides a web interface based on Gradio, supporting both Image-to-Video and Text-to-Video generation modes.
This project contains two main demo files:
- `gradio_demo.py` - English interface version
- `gradio_demo_zh.py` - Chinese interface version
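Both demos are launched from the command line. The only flag confirmed by the argument parser excerpt at the end of this document is `--task`; treat anything else (model paths, server options) as configuration you may still need to supply:

```bash
# English UI, image-to-video mode
python gradio_demo.py --task i2v

# Chinese UI, text-to-video mode
python gradio_demo_zh.py --task t2v
```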
## 🚀 Quick Start
### System Requirements
- Python 3.10+ (recommended)
- CUDA 12.4+ (recommended)
- At least 8GB GPU VRAM
- At least 16GB system memory
- At least 128GB of SSD storage (**💾 Strongly recommended: store model files on an SSD! With "lazy loading" enabled at startup, this significantly improves model loading speed and inference performance**)
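A quick sanity check of the requirements above, using standard tools (nothing here is specific to Lightx2v):

```bash
# GPU name and total VRAM (should be at least 8GB)
nvidia-smi --query-gpu=name,memory.total --format=csv

# System memory (should be at least 16GB)
free -h

# ROTA = 0 means the disk is non-rotational, i.e. an SSD
lsblk -d -o NAME,ROTA,SIZE
```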
### Supported Models

#### 🖼️ Image-to-Video Models

| Model Name | Resolution | Parameters | Features | Recommended Use |
|------------|------------|------------|----------|-----------------|
| ✅ [Wan2.1-I2V-14B-720P-Lightx2v-StepDistill-CfgDistill](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill) | 720p | 14B | HD distilled version | High quality + fast inference |
#### 📝 Text-to-Video Models
| Model Name | Parameters | Features | Recommended Use |
|------------|------------|----------|-----------------|
| ✅ [Wan2.1-T2V-1.3B-Lightx2v](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill) | 1.3B | Lightweight | Fast prototyping and testing |
| ✅ [Wan2.1-T2V-14B-Lightx2v](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill) | 14B | Standard version | Balance speed and quality |
| ✅ [Wan2.1-T2V-14B-Lightx2v-StepDistill-CfgDistill](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill) | 14B | Distilled optimized version | High quality + fast inference |
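If you follow the SSD recommendation above, one way to fetch a model directly onto SSD storage is `huggingface-cli` from the `huggingface_hub` package. The target directory below is an example path, not a project requirement:

```bash
# Download the distilled T2V model to a local SSD-backed directory
huggingface-cli download lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill \
  --local-dir /mnt/nvme/models/Wan2.1-T2V-14B-StepDistill-CfgDistill
```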
**💡 Tip**: With "Auto-configure Inference Options" enabled, the system automatically tunes parameters to your hardware configuration, so performance problems are uncommon. If you do run into one, try the solutions below (a couple of shell-level mitigations follow the list):
1. **Insufficient CUDA memory**
   - Enable CPU offloading
   - Reduce the resolution
   - Enable quantization options
2. **Insufficient system memory**
   - Enable CPU offloading
   - Enable the lazy loading option
   - Enable quantization options
3. **Slow generation speed**
   - Reduce the number of inference steps
   - Enable auto-configuration
   - Use a lightweight model
   - Enable Tea Cache
   - Use quantized operators
   - 💾 **Check that models are stored on an SSD**
4. **Slow model loading**
   - 💾 **Migrate models to SSD storage**
   - Enable the lazy loading option
   - Check disk I/O performance
   - Consider using an NVMe SSD
5. **Poor video quality**
   - Increase the number of inference steps
   - Increase the CFG scale factor
   - Use the 14B models
   - Optimize your prompts
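Two generic mitigations for the memory and disk issues above; these are standard PyTorch and Linux facilities rather than Lightx2v settings:

```bash
# Let PyTorch's CUDA allocator grow memory segments on demand,
# which often reduces fragmentation-related out-of-memory errors
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# Measure sequential read throughput from the model disk; replace the
# path with one of your model files (slow reads here usually mean
# slow model loading)
dd if=/path/to/models/model.safetensors of=/dev/null bs=1M
```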
### Log Viewing
```bash
# View inference logs
tail -f inference_logs.log
# View GPU usage
nvidia-smi
# View system resources
htop
```
**Note**: Please comply with relevant laws and regulations when using videos generated by this tool, and do not use them for illegal purposes.
info="Automatically optimize GPU settings to match the current resolution. After changing the resolution, please re-check this option to prevent potential performance degradation or runtime errors.",
)
withgr.Column(scale=9):
withgr.Column(scale=9):
seed=gr.Slider(
seed=gr.Slider(
label="Random Seed",
label="Random Seed",
...
@@ -836,14 +832,6 @@ def main():
...
@@ -836,14 +832,6 @@ def main():
withgr.Tab("⚙️ Advanced Options",id=2):
withgr.Tab("⚙️ Advanced Options",id=2):
withgr.Group(elem_classes="advanced-options"):
withgr.Group(elem_classes="advanced-options"):
gr.Markdown("### Auto configuration")
withgr.Row():
enable_auto_config=gr.Checkbox(
label="Auto configuration",
value=False,
info="Auto-tune optimization settings for your GPU",
parser.add_argument("--task",type=str,required=True,choices=["i2v","t2v"],help="Specify the task type. 'i2v' for image-to-video translation, 't2v' for text-to-video generation.")