Commit c4a7a078 authored by Harahan

update README

parent f516cd7c
<div align="center" style="font-family: charter;">
<h1>⚡️ LightX2V:<br> Light Video Generation Inference Framework</h1>

<img alt="logo" src="assets/img_lightx2v.png" width=75%></img>

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ModelTC/lightx2v)
[![GitHub Stars](https://img.shields.io/github/stars/ModelTC/lightx2v.svg?style=social&label=Star&maxAge=60)](https://github.com/ModelTC/lightx2v)
![visitors](https://komarev.com/ghpvc/?username=lightx2v&label=visitors)
[![Doc](https://img.shields.io/badge/docs-English-99cc2)](https://lightx2v-en.readthedocs.io/en/latest)
[![Doc](https://img.shields.io/badge/文档-中文-99cc2)](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest)
[![Docker](https://badgen.net/badge/icon/docker?icon=docker&label)](https://hub.docker.com/r/lightx2v/lightx2v/tags)
**\[ English | [中文](README_zh.md) | [日本語](README_ja.md) \]**
</div>

--------------------------------------------------------------------------------
**LightX2V** is a lightweight video generation inference framework designed to provide an inference tool that leverages multiple advanced video generation inference techniques. As a unified inference platform, it supports various generation tasks, such as text-to-video (T2V) and image-to-video (I2V), across different models. **X2V means transforming different input modalities (such as text or images) into video output.**
## 💡 How to Start

Please refer to our documentation: **[English Docs](https://lightx2v-en.readthedocs.io/en/latest/) | [中文文档](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest/)**.
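If you prefer containers, prebuilt images are published on Docker Hub (see the Docker badge above). A minimal sketch, assuming a `latest` tag exists and the NVIDIA Container Toolkit is installed:

```shell
# Pull the prebuilt image (pick a concrete tag from the Docker Hub tags page).
docker pull lightx2v/lightx2v:latest

# Start an interactive container with GPU access.
docker run --gpus all -it --rm lightx2v/lightx2v:latest
```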
## 🤖 Supported Model List

- [HunyuanVideo-T2V](https://huggingface.co/tencent/HunyuanVideo)
- [HunyuanVideo-I2V](https://huggingface.co/tencent/HunyuanVideo-I2V)
- [Wan2.1-T2V](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B)
- [Wan2.1-I2V](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P)
- [Wan2.1-T2V-StepDistill-CfgDistill](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill)
- [Wan2.1-T2V-CausVid](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid)
- [SkyReels-V2-DF](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P)
- [CogVideoX1.5-5B-T2V](https://huggingface.co/THUDM/CogVideoX1.5-5B)
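All of the checkpoints above are hosted on Hugging Face. One way to fetch weights locally is the Hugging Face CLI; a sketch using Wan2.1-T2V as the example (the target directory is arbitrary):

```shell
pip install "huggingface_hub[cli]"

# Download a model repository into a local directory.
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir ./models/Wan2.1-T2V-1.3B
```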
## 🧾 Contributing Guidelines

We have prepared a `pre-commit` hook to enforce consistent code formatting across the project.

> [!TIP]
> - Install the required dependencies:
>
>   ```shell
>   pip install ruff pre-commit
>   ```
>
> - Then, run the following command before each commit:
>
>   ```shell
>   pre-commit run --all-files
>   ```
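Alternatively, you can register the hook once so it runs automatically on every `git commit`:

```shell
pre-commit install
```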
Thank you for your contributions!

## 🤝 Acknowledgments

We built the code in this repository by referencing the code repositories of all the models listed above.

## 🌟 Star History

[![Star History Chart](https://api.star-history.com/svg?repos=ModelTC/lightx2v&type=Timeline)](https://star-history.com/#ModelTC/lightx2v&Timeline)
## ✏️ Citation
If you find our framework useful to your research, please kindly cite our work:

```bibtex
@misc{lightx2v,
author = {lightx2v contributors},
title = {LightX2V: Light Video Generation Inference Framework},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/ModelTC/lightx2v}},
}
```
<div align="center" style="font-family: charter;">
<h1>⚡️ LightX2V:<br> Light Video Generation Inference Framework</h1>
<img alt="logo" src="assets/img_lightx2v.png" width=75%></img>
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ModelTC/lightx2v)
[![GitHub Stars](https://img.shields.io/github/stars/ModelTC/lightx2v.svg?style=social&label=Star&maxAge=60)](https://github.com/ModelTC/lightx2v)
![visitors](https://komarev.com/ghpvc/?username=lightx2v&label=visitors)
[![Doc](https://img.shields.io/badge/docs-English-99cc2)](https://lightx2v-en.readthedocs.io/en/latest)
[![Doc](https://img.shields.io/badge/ドキュメント-日本語-99cc2)](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest)
[![Docker](https://badgen.net/badge/icon/docker?icon=docker&label)](https://hub.docker.com/r/lightx2v/lightx2v/tags)
**\[ [English](README.md) | [中文](README_zh.md) | 日本語 \]**
</div>
--------------------------------------------------------------------------------
**LightX2V** is a lightweight video generation inference framework that combines multiple advanced video generation inference techniques. A single platform supports a variety of generation tasks and models, such as text-to-video (T2V) and image-to-video (I2V). **X2V means transforming different input modalities (such as text or images) into video.**
## 💡 How to Start

For detailed instructions, please see the documentation: **[English Docs](https://lightx2v-en.readthedocs.io/en/latest/)** | **[中文文档](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest/)**

## 🤖 Supported Model List
- [HunyuanVideo-T2V](https://huggingface.co/tencent/HunyuanVideo)
- [HunyuanVideo-I2V](https://huggingface.co/tencent/HunyuanVideo-I2V)
- [Wan 2.1-T2V](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B)
- [Wan 2.1-I2V](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P)
- [Wan 2.1-T2V-StepDistill-CfgDistill](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill)
- [Wan 2.1-T2V-CausVid](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid)
- [SkyReels-V2-DF](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P)
- [CogVideoX 1.5-5B-T2V](https://huggingface.co/THUDM/CogVideoX1.5-5B)
## 🧾 Contributing Guidelines

We have prepared a `pre-commit` hook to enforce consistent code formatting across the project.

> [!TIP]
> 1. Install the required dependencies:
>
>    ```bash
>    pip install ruff pre-commit
>    ```
>
> 2. Then, run the following command before each commit:
>
>    ```bash
>    pre-commit run --all-files
>    ```
Thank you for your contributions!

## 🤝 Acknowledgments

The implementation in this repository references the repositories of all the models listed above.

## 🌟 Star History

[![Star History Chart](https://api.star-history.com/svg?repos=ModelTC/lightx2v&type=Timeline)](https://star-history.com/#ModelTC/lightx2v&Timeline)

## ✏️ Citation

If you find our framework useful to your research, please cite our work:
```bibtex
@misc{lightx2v,
author = {lightx2v contributors},
title = {LightX2V: Light Video Generation Inference Framework},
year = {2024},
publisher = {GitHub},
howpublished = {\url{https://github.com/ModelTC/lightx2v}},
}
```
<div align="center" style="font-family: charter;">
<h1>⚡️ LightX2V:<br>Lightweight Video Generation Inference Framework</h1>
<img alt="logo" src="assets/img_lightx2v.png" width=75%></img>
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ModelTC/lightx2v)
[![GitHub Stars](https://img.shields.io/github/stars/ModelTC/lightx2v.svg?style=social&label=Star&maxAge=60)](https://github.com/ModelTC/lightx2v)
![visitors](https://komarev.com/ghpvc/?username=lightx2v&label=visitors)
[![Doc](https://img.shields.io/badge/docs-English-99cc2)](https://lightx2v-en.readthedocs.io/en/latest)
[![Doc](https://img.shields.io/badge/文档-中文-99cc2)](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest)
[![Docker](https://badgen.net/badge/icon/docker?icon=docker&label)](https://hub.docker.com/r/lightx2v/lightx2v/tags)
**\[ [English](README.md) | 中文 | [日本語](README_ja.md) \]**
</div>
**LightX2V** is a lightweight video generation inference framework that integrates multiple advanced video generation inference techniques, with unified support for generation tasks and models such as text-to-video (T2V) and image-to-video (I2V). **"X2V" means converting different input modalities (text, images, etc.) into video output.**

## 💡 Quick Start

Please refer to the documentation: **[English Docs](https://lightx2v-en.readthedocs.io/en/latest/)** | **[中文文档](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest/)**

## 🤖 Supported Model List
- [HunyuanVideo-T2V](https://huggingface.co/tencent/HunyuanVideo)
- [HunyuanVideo-I2V](https://huggingface.co/tencent/HunyuanVideo-I2V)
- [Wan2.1-T2V](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B)
- [Wan2.1-I2V](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P)
- [Wan2.1-T2V-StepDistill-CfgDistill](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill)
- [Wan2.1-T2V-CausVid](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid)
- [SkyReels-V2-DF](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P)
- [CogVideoX1.5-5B-T2V](https://huggingface.co/THUDM/CogVideoX1.5-5B)
## 🧾 Contributing Guidelines

We use `pre-commit` to enforce consistent code formatting.

> [!TIP]
> - Install the required dependencies:
>
>   ```shell
>   pip install ruff pre-commit
>   ```
>
> - Then, run the following command before each commit:
>
>   ```shell
>   pre-commit run --all-files
>   ```
Contributions are welcome!

## 🤝 Acknowledgments

The implementation in this repository references the repositories of all the models listed above.

## 🌟 Star History

[![Star History Chart](https://api.star-history.com/svg?repos=ModelTC/lightx2v&type=Timeline)](https://star-history.com/#ModelTC/lightx2v&Timeline)

## ✏️ Citation

If you find this framework helpful to your research, please cite:
```bibtex
@misc{lightx2v,
author = {lightx2v contributors},
title = {LightX2V: Light Video Generation Inference Framework},
year = {2024},
publisher = {GitHub},
howpublished = {\url{https://github.com/ModelTC/lightx2v}},
}
```