<div align="center" style="font-family: charter;">
  <h1>⚡️ LightX2V:<br> Light Video Generation Inference Framework</h1>

<img alt="logo" src="assets/img_lightx2v.png" width="75%">

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ModelTC/lightx2v)
[![Doc](https://img.shields.io/badge/docs-English-99cc2)](https://lightx2v-en.readthedocs.io/en/latest)
[![Doc](https://img.shields.io/badge/文档-中文-99cc2)](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest)
[![Papers](https://img.shields.io/badge/论文集-中文-99cc2)](https://lightx2v-papers-zhcn.readthedocs.io/zh-cn/latest)
[![Docker](https://badgen.net/badge/icon/docker?icon=docker&label)](https://hub.docker.com/r/lightx2v/lightx2v/tags)

**\[ English | [中文](README_zh.md) \]**

</div>

--------------------------------------------------------------------------------

**LightX2V** is a lightweight video generation inference framework that integrates multiple advanced inference acceleration techniques. As a unified inference platform, it supports generation tasks such as text-to-video (T2V) and image-to-video (I2V) across a range of models. **X2V means transforming different input modalities (such as text or images) into video output.**


## 💡 How to Start

Please refer to our documentation: **[English Docs](https://lightx2v-en.readthedocs.io/en/latest/) | [中文文档](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest/)**.


## 🤖 Supported Model List

- [HunyuanVideo-T2V](https://huggingface.co/tencent/HunyuanVideo)
- [HunyuanVideo-I2V](https://huggingface.co/tencent/HunyuanVideo-I2V)
- [Wan2.1-T2V](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B)
- [Wan2.1-I2V](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P)
- [Wan2.1-T2V-StepDistill-CfgDistill](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill) (recommended 🚀🚀🚀)
- [Wan2.1-T2V-CausVid](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid)
- [SkyReels-V2-DF](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P)
- [CogVideoX1.5-5B-T2V](https://huggingface.co/THUDM/CogVideoX1.5-5B)


## 🧾 Contributing Guidelines

We have prepared a pre-commit hook to enforce consistent code formatting across the project.

> [!TIP]
> - Install the required dependencies:
>
> ```shell
> pip install ruff pre-commit
> ```
>
> - Then, run the following command before committing:
>
> ```shell
> pre-commit run --all-files
> ```
Thank you for your contributions!


## 🤝 Acknowledgments

The code in this repository was built with reference to the repositories of all the models listed above.


## 🌟 Star History

[![Star History Chart](https://api.star-history.com/svg?repos=ModelTC/lightx2v&type=Timeline)](https://star-history.com/#ModelTC/lightx2v&Timeline)


## ✏️ Citation

If you find our framework useful for your research, please cite our work:

```
@misc{lightx2v,
 author = {lightx2v contributors},
 title = {LightX2V: Light Video Generation Inference Framework},
 year = {2025},
 publisher = {GitHub},
 journal = {GitHub repository},
 howpublished = {\url{https://github.com/ModelTC/lightx2v}},
}
```