Commit 6300548c authored by helloyongyang

update doc

parent 64c017df
@@ -9,13 +9,26 @@ Welcome to Lightx2v!

.. raw:: html
<p style="text-align:center"> <div align="center" style="font-family: charter;">
<strong>A Light Video Generation Inference Framework
</strong>
<a href="https://opensource.org/licenses/Apache-2.0"><img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License"></a>
<a href="https://deepwiki.com/ModelTC/lightx2v"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
<a href="https://lightx2v-en.readthedocs.io/en/latest"><img src="https://img.shields.io/badge/docs-English-99cc2" alt="Doc"></a>
<a href="https://lightx2v-zhcn.readthedocs.io/zh-cn/latest"><img src="https://img.shields.io/badge/文档-中文-99cc2" alt="Doc"></a>
<a href="https://hub.docker.com/r/lightx2v/lightx2v/tags"><img src="https://badgen.net/badge/icon/docker?icon=docker&label" alt="Docker"></a>
</div>
<div align="center" style="font-family: charter;">
<strong>LightX2V: Light Video Generation Inference Framework</strong>
</div>
LightX2V is a lightweight video generation inference framework designed to provide an inference tool that leverages multiple advanced video generation inference techniques. As a unified inference platform, this framework supports various generation tasks such as text-to-video (T2V) and image-to-video (I2V) across different models. X2V means transforming different input modalities (such as text or images) to video output.
+GitHub: https://github.com/ModelTC/lightx2v
+HuggingFace: https://huggingface.co/lightx2v
Documentation
-------------
...
@@ -9,13 +9,27 @@

.. raw:: html
<p style="text-align:center"> <div align="center" style="font-family: charter;">
<strong>一个轻量级的视频生成推理框架
</strong> <a href="https://opensource.org/licenses/Apache-2.0"><img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License"></a>
<a href="https://deepwiki.com/ModelTC/lightx2v"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
<a href="https://lightx2v-en.readthedocs.io/en/latest"><img src="https://img.shields.io/badge/docs-English-99cc2" alt="Doc"></a>
<a href="https://lightx2v-zhcn.readthedocs.io/zh-cn/latest"><img src="https://img.shields.io/badge/文档-中文-99cc2" alt="Doc"></a>
<a href="https://hub.docker.com/r/lightx2v/lightx2v/tags"><img src="https://badgen.net/badge/icon/docker?icon=docker&label" alt="Docker"></a>
</div>
<div align="center" style="font-family: charter;">
<strong>LightX2V: 一个轻量级的视频生成推理框架</strong>
</div>
LightX2V 是一个轻量级的视频生成推理框架,旨在提供一个利用多种先进的视频生成推理技术的推理工具。该框架作为统一的推理平台,支持不同模型的文本到视频(T2V)和图像到视频(I2V)等生成任务。X2V 表示将不同的输入模态(X,如文本或图像)转换(to)为视频输出(V)。
+GitHub: https://github.com/ModelTC/lightx2v
+HuggingFace: https://huggingface.co/lightx2v
文档列表
-------------
...