<div align="center" style="font-family: charter;">
  <h1>⚡️ LightX2V:<br> Light Video Generation Inference Framework</h1>
<img alt="logo" src="assets/img_lightx2v.png" width=75%></img>

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ModelTC/lightx2v)
[![GitHub Stars](https://img.shields.io/github/stars/ModelTC/lightx2v.svg?style=social&label=Star&maxAge=60)](https://github.com/ModelTC/lightx2v)
![visitors](https://komarev.com/ghpvc/?username=lightx2v&label=visitors)
[![Doc](https://img.shields.io/badge/docs-English-99cc2)](https://lightx2v-en.readthedocs.io/en/latest)
[![Doc](https://img.shields.io/badge/文档-中文-99cc2)](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest)
helloyongyang's avatar
helloyongyang committed
12
[![Docker](https://badgen.net/badge/icon/docker?icon=docker&label)](https://hub.docker.com/r/lightx2v/lightx2v/tags)

**\[ English | [中文](README_zh.md) | [日本語](README_ja.md) \]**

</div>

--------------------------------------------------------------------------------

**LightX2V** is a lightweight video generation inference framework that brings together multiple advanced inference techniques in a single tool. As a unified platform, it supports generation tasks such as text-to-video (T2V) and image-to-video (I2V) across different models. **X2V means transforming different input modalities (X, such as text or images) into video output (V).**


## 💡 How to Start

Please refer to our documentation: **[English Docs](https://lightx2v-en.readthedocs.io/en/latest/) | [中文文档](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest/)**.


## 🤖 Supported Model List

- [HunyuanVideo-T2V](https://huggingface.co/tencent/HunyuanVideo)
- [HunyuanVideo-I2V](https://huggingface.co/tencent/HunyuanVideo-I2V)
- [Wan2.1-T2V](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B)
- [Wan2.1-I2V](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P)
- [Wan2.1-T2V-StepDistill-CfgDistill](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill)
- [Wan2.1-T2V-CausVid](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid)
- [SkyReels-V2-DF](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P)
- [CogVideoX1.5-5B-T2V](https://huggingface.co/THUDM/CogVideoX1.5-5B)


## 🧾 Contributing Guidelines

We provide pre-commit hooks to enforce consistent code formatting across the project.

> [!TIP]
> - Install the required dependencies:
>
> ```shell
> pip install ruff pre-commit
> ```
>
> - Then, run the following command before committing:
>
> ```shell
> pre-commit run --all-files
> ```

Thank you for your contributions!


## 🤝 Acknowledgments

We built this repository by referencing the code repositories of the models listed above.


## 🌟 Star History

[![Star History Chart](https://api.star-history.com/svg?repos=ModelTC/lightx2v&type=Timeline)](https://star-history.com/#ModelTC/lightx2v&Timeline)


## ✏️ Citation

If you find our framework useful for your research, please cite our work:

```bibtex
@misc{lightx2v,
  author = {lightx2v contributors},
  title = {LightX2V: Light Video Generation Inference Framework},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/ModelTC/lightx2v}},
}
```