# LightX2V: Light Video Generation Inference Framework

<div align="center" id="lightx2v">
<img alt="logo" src="assets/img_lightx2v.png" width="75%">

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ModelTC/lightx2v)
[![Doc](https://img.shields.io/badge/docs-English-99cc2)](https://lightx2v-en.readthedocs.io/en/latest)
[![Doc](https://img.shields.io/badge/文档-中文-99cc2)](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest)
[![Docker](https://badgen.net/badge/icon/docker?icon=docker&label)](https://hub.docker.com/r/lightx2v/lightx2v/tags)

</div>

--------------------------------------------------------------------------------

**LightX2V** is a lightweight video generation inference framework that brings together multiple advanced inference techniques in a single tool. As a unified inference platform, it supports generation tasks such as text-to-video (T2V) and image-to-video (I2V) across different models. The name X2V means transforming different input modalities (X, such as text or images) into video output (V).


## How to Start

Please refer to our documentation:

[English Docs](https://lightx2v-en.readthedocs.io/en/latest/) | [中文文档](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest/)
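
If you just want to try LightX2V locally, a minimal sketch of the setup follows. The repository URL comes from the project badges above; the `requirements.txt` path is an assumption, so treat the linked documentation as the authoritative setup guide.

```shell
# Clone the repository (URL taken from the DeepWiki badge above)
git clone https://github.com/ModelTC/lightx2v.git
cd lightx2v

# Install Python dependencies (assumes a requirements.txt at the repo root;
# see the documentation above for the authoritative instructions)
pip install -r requirements.txt
```

Prebuilt images are also published on Docker Hub (see the Docker badge above), so `docker pull lightx2v/lightx2v` is an alternative to a local install.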


## Supported Model List

[HunyuanVideo-T2V](https://huggingface.co/tencent/HunyuanVideo)

[HunyuanVideo-I2V](https://huggingface.co/tencent/HunyuanVideo-I2V)

[Wan2.1-T2V](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B)

[Wan2.1-I2V](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P)

[Wan2.1-T2V-StepDistill-CfgDistill](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill)

[Wan2.1-T2V-CausVid](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid)

[SkyReels-V2-DF](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P)

[CogVideoX1.5-5B-T2V](https://huggingface.co/THUDM/CogVideoX1.5-5B)


## Contributing Guidelines

We have prepared a `pre-commit` hook to enforce consistent code formatting across the project.

1. Install the required dependencies:

```shell
pip install ruff pre-commit
```

2. Then, run the following command before each commit:

```shell
pre-commit run --all-files
```
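
Optionally, you can register `pre-commit` as a git hook so these checks run automatically on every commit:

```shell
# Installs the git hook; after this, the checks run on each `git commit`
pre-commit install
```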

Thank you for your contributions!


## Acknowledgments

The code in this repository was developed with reference to the repositories of the models listed above.