Unverified Commit 9a116d54 authored by gushiqiao, committed by GitHub

Delete lightx2v.egg-info directory (#496)

parent 75a56623
Metadata-Version: 2.4
Name: lightx2v
Version: 0.1.0
Summary: LightX2V: Light Video Generation Inference Framework
Author: LightX2V Contributors
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/ModelTC/LightX2V
Project-URL: Documentation, https://lightx2v-en.readthedocs.io/en/latest/
Project-URL: Repository, https://github.com/ModelTC/LightX2V
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Multimedia :: Video
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: numpy
Requires-Dist: scipy
Requires-Dist: torch<=2.8.0
Requires-Dist: torchvision<=0.23.0
Requires-Dist: torchaudio<=2.8.0
Requires-Dist: diffusers
Requires-Dist: transformers
Requires-Dist: tokenizers
Requires-Dist: tqdm
Requires-Dist: accelerate
Requires-Dist: safetensors
Requires-Dist: opencv-python
Requires-Dist: imageio
Requires-Dist: imageio-ffmpeg
Requires-Dist: einops
Requires-Dist: loguru
Requires-Dist: qtorch
Requires-Dist: ftfy
Requires-Dist: gradio
Requires-Dist: aiohttp
Requires-Dist: pydantic
Requires-Dist: prometheus-client
Requires-Dist: gguf
Requires-Dist: fastapi
Requires-Dist: uvicorn
Requires-Dist: PyJWT
Requires-Dist: requests
Requires-Dist: aio-pika
Requires-Dist: asyncpg>=0.27.0
Requires-Dist: aioboto3>=12.0.0
Requires-Dist: alibabacloud_dypnsapi20170525==1.2.2
Requires-Dist: redis==6.4.0
Requires-Dist: tos
Requires-Dist: decord
Requires-Dist: av
<div align="center" style="font-family: charter;">
<h1>⚡️ LightX2V:<br> Light Video Generation Inference Framework</h1>
<img alt="logo" src="assets/img_lightx2v.png" width=75%></img>
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ModelTC/lightx2v)
[![Doc](https://img.shields.io/badge/docs-English-99cc2)](https://lightx2v-en.readthedocs.io/en/latest)
[![Doc](https://img.shields.io/badge/文档-中文-99cc2)](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest)
[![Papers](https://img.shields.io/badge/论文集-中文-99cc2)](https://lightx2v-papers-zhcn.readthedocs.io/zh-cn/latest)
[![Docker](https://img.shields.io/badge/Docker-2496ED?style=flat&logo=docker&logoColor=white)](https://hub.docker.com/r/lightx2v/lightx2v/tags)
**\[ English | [中文](README_zh.md) \]**
</div>
--------------------------------------------------------------------------------
**LightX2V** is an advanced lightweight video generation inference framework engineered to deliver efficient, high-performance video synthesis solutions. This unified platform integrates multiple state-of-the-art video generation techniques, supporting diverse generation tasks including text-to-video (T2V) and image-to-video (I2V). **X2V represents the transformation of different input modalities (X, such as text or images) into video output (V)**.
## :fire: Latest News
- **November 21, 2025:** 🚀 We have supported the [HunyuanVideo-1.5](https://huggingface.co/tencent/HunyuanVideo-1.5) video generation model since Day 0. With the same number of GPUs, LightX2V achieves a speedup of more than 2x and supports deployment on lower-memory GPUs (such as the 24GB RTX 4090). It also supports CFG/Ulysses parallelism, efficient offloading, TeaCache/MagCache, and more. We will soon publish related models on our [HuggingFace page](https://huggingface.co/lightx2v), including quantized, step-distilled, and VAE-distilled variants. Refer to [this guide](https://github.com/ModelTC/LightX2V/tree/main/scripts/hunyuan_video_15) for usage tutorials.
## 💡 Quick Start
For comprehensive usage instructions, please refer to our documentation: **[English Docs](https://lightx2v-en.readthedocs.io/en/latest/) | [中文文档](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest/)**
### Installation from Git
```bash
pip install -v git+https://github.com/ModelTC/LightX2V.git
```
### Building from Source
```bash
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V
uv pip install -v . # pip install -v .
```
### (Optional) Install Attention Operators
For attention operators installation, please refer to our documentation: **[English Docs](https://lightx2v-en.readthedocs.io/en/latest/getting_started/quickstart.html#step-4-install-attention-operators) | [中文文档](https://lightx2v-zhcn.readthedocs.io/zh-cn/latest/getting_started/quickstart.html#id9)**
### Quick Start
```python
# examples/hunyuan_video/hunyuan_t2v.py
from lightx2v import LightX2VPipeline
pipe = LightX2VPipeline(
    model_path="/path/to/ckpts/hunyuanvideo-1.5/",
    model_cls="hunyuan_video_1.5",
    transformer_model_name="720p_t2v",
    task="t2v",
)
pipe.create_generator(
    attn_mode="sage_attn2",
    infer_steps=50,
    num_frames=121,
    guidance_scale=6.0,
    sample_shift=9.0,
    aspect_ratio="16:9",
    fps=24,
)
seed = 123
prompt = "A close-up shot captures a scene on a polished, light-colored granite kitchen counter, illuminated by soft natural light from an unseen window."
negative_prompt = ""
save_result_path = "/path/to/save_results/output.mp4"
pipe.generate(
    seed=seed,
    prompt=prompt,
    negative_prompt=negative_prompt,
    save_result_path=save_result_path,
)
```
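With `model_path` and `save_result_path` pointed at real locations on disk, the snippet can be run directly as a script, e.g. `python examples/hunyuan_video/hunyuan_t2v.py` (assuming the repository layout matches the path in the comment at the top of the snippet).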
## 🤖 Supported Model Ecosystem
### Official Open-Source Models
- ✅ [HunyuanVideo-1.5](https://huggingface.co/tencent/HunyuanVideo-1.5)
- ✅ [Wan2.1 & Wan2.2](https://huggingface.co/Wan-AI/)
- ✅ [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image)
- ✅ [Qwen-Image-Edit](https://huggingface.co/spaces/Qwen/Qwen-Image-Edit)
- ✅ [Qwen-Image-Edit-2509](https://huggingface.co/Qwen/Qwen-Image-Edit-2509)
### Quantized and Distilled Models/LoRAs (**🚀 Recommended: 4-step inference**)
- ✅ [Hy1.5-Quantized-Models](https://huggingface.co/lightx2v/Hy1.5-Quantized-Models)
- ✅ [Wan2.1-Distill-Models](https://huggingface.co/lightx2v/Wan2.1-Distill-Models)
- ✅ [Wan2.2-Distill-Models](https://huggingface.co/lightx2v/Wan2.2-Distill-Models)
- ✅ [Wan2.1-Distill-Loras](https://huggingface.co/lightx2v/Wan2.1-Distill-Loras)
- ✅ [Wan2.2-Distill-Loras](https://huggingface.co/lightx2v/Wan2.2-Distill-Loras)
### Lightweight Autoencoder Models (**🚀 Recommended: fast inference & low memory usage**)
- ✅ [Autoencoders](https://huggingface.co/lightx2v/Autoencoders)
### Autoregressive Models
- ✅ [Wan2.1-T2V-CausVid](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid)
- ✅ [Self-Forcing](https://github.com/guandeh17/Self-Forcing)
- ✅ [Matrix-Game-2.0](https://huggingface.co/Skywork/Matrix-Game-2.0)
🔔 Follow our [HuggingFace page](https://huggingface.co/lightx2v) for the latest model releases from our team.
💡 Refer to the [Model Structure Documentation](https://lightx2v-en.readthedocs.io/en/latest/getting_started/model_structure.html) to quickly get started with LightX2V.
## 🚀 Frontend Interfaces
We provide multiple frontend interface deployment options:
- **🎨 Gradio Interface**: Clean and user-friendly web interface, perfect for quick experience and prototyping
- 📖 [Gradio Deployment Guide](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/deploy_gradio.html)
- **🎯 ComfyUI Interface**: Powerful node-based workflow interface, supporting complex video generation tasks
- 📖 [ComfyUI Deployment Guide](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/deploy_comfyui.html)
- **🚀 Windows One-Click Deployment**: Convenient deployment solution designed for Windows users, featuring automatic environment configuration and intelligent parameter optimization
- 📖 [Windows One-Click Deployment Guide](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/deploy_local_windows.html)
**💡 Recommended Solutions**:
- **First-time Users**: We recommend the Windows one-click deployment solution
- **Advanced Users**: We recommend the ComfyUI interface for more customization options
- **Quick Experience**: The Gradio interface provides the most intuitive operation experience
## 🚀 Core Features
### 🎯 **Ultimate Performance Optimization**
- **🔥 SOTA Inference Speed**: Achieve **~20x** acceleration via step distillation and system optimization (single GPU)
- **⚡️ Revolutionary 4-Step Distillation**: Compress the original 40-50 step inference to just 4 steps with no CFG required (quantified below)
- **🛠️ Advanced Operator Support**: Integrated with cutting-edge operators including [Sage Attention](https://github.com/thu-ml/SageAttention), [Flash Attention](https://github.com/Dao-AILab/flash-attention), [Radial Attention](https://github.com/mit-han-lab/radial-attention), [q8-kernel](https://github.com/KONAKONA666/q8_kernels), [sgl-kernel](https://github.com/sgl-project/sglang/tree/main/sgl-kernel), [vllm](https://github.com/vllm-project/vllm)
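To put the distillation gain in perspective: a 50-step schedule with CFG runs the diffusion transformer twice per step (conditional and unconditional passes), i.e. 100 forward passes per video, while 4 steps without CFG need only 4 passes, roughly a 25x reduction in DiT compute before any kernel- or memory-level optimization.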
### 💾 **Resource-Efficient Deployment**
- **💡 Breaking Hardware Barriers**: Run 14B models for 480P/720P video generation with only **8GB VRAM + 16GB RAM**
- **🔧 Intelligent Parameter Offloading**: Advanced disk-CPU-GPU three-tier offloading architecture with phase/block-level granular management
- **⚙️ Comprehensive Quantization**: Support for `w8a8-int8`, `w8a8-fp8`, `w4a4-nvfp4` and other quantization strategies
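For intuition, here is a minimal, self-contained sketch of what a `w8a8-int8` linear layer does, written in plain PyTorch with per-tensor symmetric scales. This illustrates the technique only; it is not LightX2V's actual kernels, calibration scheme, or API, and the function names are hypothetical.

```python
import torch

def quantize_sym_int8(x: torch.Tensor):
    """Symmetric per-tensor int8 quantization: int8 values plus one fp scale."""
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

def w8a8_linear(x_fp: torch.Tensor, w_fp: torch.Tensor) -> torch.Tensor:
    """w8a8 linear: both weight and activation are int8; the integer matmul is
    emulated in fp32 here (real kernels accumulate in int32), and the output is
    dequantized with the product of the two scales."""
    w_q, w_scale = quantize_sym_int8(w_fp)  # weight:     [out, in]
    x_q, x_scale = quantize_sym_int8(x_fp)  # activation: [tokens, in]
    acc = x_q.float() @ w_q.float().T       # int8 x int8 products, summed
    return acc * (x_scale * w_scale)        # dequantize back to fp

# Toy check against the fp32 reference.
x, w = torch.randn(4, 64), torch.randn(128, 64)
print((w8a8_linear(x, w) - x @ w.T).abs().max())
```

Per-channel scales, fused int32 accumulation, and the fp8/nvfp4 variants follow the same quantize-matmul-dequantize pattern with different scale granularity and datatypes.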
### 🎨 **Rich Feature Ecosystem**
- **📈 Smart Feature Caching**: Intelligent caching mechanisms to eliminate redundant computations across denoising steps (a minimal sketch follows this list)
- **🔄 Parallel Inference**: Multi-GPU parallel processing for enhanced performance
- **📱 Flexible Deployment Options**: Support for Gradio, service deployment, ComfyUI and other deployment methods
- **🎛️ Dynamic Resolution Inference**: Adaptive resolution adjustment for optimal generation quality
- **🎞️ Video Frame Interpolation**: RIFE-based frame interpolation for smooth frame rate enhancement
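The caching idea can be illustrated with a small, self-contained sketch: if a block's input has barely changed since the previous denoising step, reuse the cached residual instead of recomputing the block. The `CachedBlock` wrapper, the relative-change metric, and the threshold below are hypothetical; LightX2V's own mechanisms (e.g. TeaCache/MagCache) use their own criteria inside the `feature_caching` modules.

```python
import torch

class CachedBlock(torch.nn.Module):
    """Wrap an expensive block and skip it when its input has barely moved
    since the previous step, reusing the cached residual instead."""

    def __init__(self, block: torch.nn.Module, rel_threshold: float = 0.05):
        super().__init__()
        self.block = block
        self.rel_threshold = rel_threshold
        self.prev_input = None
        self.cached_residual = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.prev_input is not None and self.cached_residual is not None:
            rel_change = (x - self.prev_input).norm() / (self.prev_input.norm() + 1e-8)
            if rel_change < self.rel_threshold:
                return x + self.cached_residual  # cache hit: skip the block
        out = self.block(x)                      # cache miss: full compute
        self.prev_input = x.detach()
        self.cached_residual = (out - x).detach()
        return out

# Example: a toy MLP block called once per denoising step.
block = CachedBlock(torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.GELU(), torch.nn.Linear(64, 64)))
latent = torch.randn(1, 64)
for step in range(4):
    latent = block(latent + 0.001 * torch.randn_like(latent))
```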
## 🏆 Performance Benchmarks
For detailed performance metrics and comparisons, please refer to our [benchmark documentation](https://github.com/ModelTC/LightX2V/blob/main/docs/EN/source/getting_started/benchmark_source.md).
[Detailed Service Deployment Guide →](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/deploy_service.html)
## 📚 Technical Documentation
### 📖 **Method Tutorials**
- [Model Quantization](https://lightx2v-en.readthedocs.io/en/latest/method_tutorials/quantization.html) - Comprehensive guide to quantization strategies
- [Feature Caching](https://lightx2v-en.readthedocs.io/en/latest/method_tutorials/cache.html) - Intelligent caching mechanisms
- [Attention Mechanisms](https://lightx2v-en.readthedocs.io/en/latest/method_tutorials/attention.html) - State-of-the-art attention operators
- [Parameter Offloading](https://lightx2v-en.readthedocs.io/en/latest/method_tutorials/offload.html) - Three-tier storage architecture
- [Parallel Inference](https://lightx2v-en.readthedocs.io/en/latest/method_tutorials/parallel.html) - Multi-GPU acceleration strategies
- [Changing Resolution Inference](https://lightx2v-en.readthedocs.io/en/latest/method_tutorials/changing_resolution.html) - U-shaped resolution strategy
- [Step Distillation](https://lightx2v-en.readthedocs.io/en/latest/method_tutorials/step_distill.html) - 4-step inference technology
- [Video Frame Interpolation](https://lightx2v-en.readthedocs.io/en/latest/method_tutorials/video_frame_interpolation.html) - Based on RIFE
### 🛠️ **Deployment Guides**
- [Low-Resource Deployment](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/for_low_resource.html) - Optimized 8GB VRAM solutions
- [Low-Latency Deployment](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/for_low_latency.html) - Ultra-fast inference optimization
- [Gradio Deployment](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/deploy_gradio.html) - Web interface setup
- [Service Deployment](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/deploy_service.html) - Production API service deployment
- [LoRA Model Deployment](https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/lora_deploy.html) - Flexible LoRA deployment
## 🧾 Contributing Guidelines
We maintain code quality through automated pre-commit hooks to ensure consistent formatting across the project.
> [!TIP]
> **Setup Instructions:**
>
> 1. Install required dependencies:
> ```shell
> pip install ruff pre-commit
> ```
>
> 2. Run before committing:
> ```shell
> pre-commit run --all-files
> ```
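>
> 3. (Optional) Run `pre-commit install` once so the hooks are triggered automatically on every `git commit`.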
We appreciate your contributions to making LightX2V better!
## 🤝 Acknowledgments
We extend our gratitude to all the model repositories and research communities that inspired and contributed to the development of LightX2V. This framework builds upon the collective efforts of the open-source community.
## 🌟 Star History
[![Star History Chart](https://api.star-history.com/svg?repos=ModelTC/lightx2v&type=Timeline)](https://star-history.com/#ModelTC/lightx2v&Timeline)
## ✏️ Citation
If you find LightX2V useful in your research, please consider citing our work:
```bibtex
@misc{lightx2v,
author = {LightX2V Contributors},
title = {LightX2V: Light Video Generation Inference Framework},
year = {2025},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/ModelTC/lightx2v}},
}
```
## 📞 Contact & Support
For questions, suggestions, or support, please feel free to reach out through:
- 🐛 [GitHub Issues](https://github.com/ModelTC/lightx2v/issues) - Bug reports and feature requests
- 💬 [GitHub Discussions](https://github.com/ModelTC/lightx2v/discussions) - Community discussions and Q&A
---
<div align="center">
Built with ❤️ by the LightX2V team
</div>
README.md
pyproject.toml
lightx2v/__init__.py
lightx2v/infer.py
lightx2v/pipeline.py
lightx2v.egg-info/PKG-INFO
lightx2v.egg-info/SOURCES.txt
lightx2v.egg-info/dependency_links.txt
lightx2v.egg-info/requires.txt
lightx2v.egg-info/top_level.txt
lightx2v/common/__init__.py
lightx2v/common/modules/__init__.py
lightx2v/common/modules/weight_module.py
lightx2v/common/offload/manager.py
lightx2v/common/ops/__init__.py
lightx2v/common/ops/attn/__init__.py
lightx2v/common/ops/attn/flash_attn.py
lightx2v/common/ops/attn/nbhd_attn.py
lightx2v/common/ops/attn/radial_attn.py
lightx2v/common/ops/attn/ring_attn.py
lightx2v/common/ops/attn/sage_attn.py
lightx2v/common/ops/attn/spassage_attn.py
lightx2v/common/ops/attn/svg2_attn.py
lightx2v/common/ops/attn/svg2_attn_utils.py
lightx2v/common/ops/attn/svg_attn.py
lightx2v/common/ops/attn/template.py
lightx2v/common/ops/attn/torch_sdpa.py
lightx2v/common/ops/attn/ulysses_attn.py
lightx2v/common/ops/attn/utils/all2all.py
lightx2v/common/ops/attn/utils/ring_comm.py
lightx2v/common/ops/conv/__init__.py
lightx2v/common/ops/conv/conv2d.py
lightx2v/common/ops/conv/conv3d.py
lightx2v/common/ops/embedding/__init__.py
lightx2v/common/ops/embedding/embedding_weight.py
lightx2v/common/ops/mm/__init__.py
lightx2v/common/ops/mm/mm_weight.py
lightx2v/common/ops/norm/__init__.py
lightx2v/common/ops/norm/layer_norm_weight.py
lightx2v/common/ops/norm/rms_norm_weight.py
lightx2v/common/ops/norm/triton_ops.py
lightx2v/common/ops/tensor/__init__.py
lightx2v/common/ops/tensor/tensor.py
lightx2v/common/transformer_infer/transformer_infer.py
lightx2v/deploy/__init__.py
lightx2v/deploy/common/__init__.py
lightx2v/deploy/common/aliyun.py
lightx2v/deploy/common/pipeline.py
lightx2v/deploy/common/utils.py
lightx2v/deploy/common/va_reader.py
lightx2v/deploy/common/va_recorder.py
lightx2v/deploy/common/va_recorder_x264.py
lightx2v/deploy/common/volcengine_tts.py
lightx2v/deploy/data_manager/__init__.py
lightx2v/deploy/data_manager/local_data_manager.py
lightx2v/deploy/data_manager/s3_data_manager.py
lightx2v/deploy/queue_manager/__init__.py
lightx2v/deploy/queue_manager/local_queue_manager.py
lightx2v/deploy/queue_manager/rabbitmq_queue_manager.py
lightx2v/deploy/server/__init__.py
lightx2v/deploy/server/__main__.py
lightx2v/deploy/server/auth.py
lightx2v/deploy/server/metrics.py
lightx2v/deploy/server/monitor.py
lightx2v/deploy/server/redis_client.py
lightx2v/deploy/server/redis_monitor.py
lightx2v/deploy/task_manager/__init__.py
lightx2v/deploy/task_manager/local_task_manager.py
lightx2v/deploy/task_manager/sql_task_manager.py
lightx2v/deploy/worker/__init__.py
lightx2v/deploy/worker/__main__.py
lightx2v/deploy/worker/hub.py
lightx2v/models/__init__.py
lightx2v/models/input_encoders/__init__.py
lightx2v/models/input_encoders/hf/__init__.py
lightx2v/models/input_encoders/hf/q_linear.py
lightx2v/models/input_encoders/hf/animate/__init__.py
lightx2v/models/input_encoders/hf/animate/face_encoder.py
lightx2v/models/input_encoders/hf/animate/motion_encoder.py
lightx2v/models/input_encoders/hf/hunyuan15/byt5/__init__.py
lightx2v/models/input_encoders/hf/hunyuan15/byt5/format_prompt.py
lightx2v/models/input_encoders/hf/hunyuan15/byt5/model.py
lightx2v/models/input_encoders/hf/hunyuan15/qwen25/__init__.py
lightx2v/models/input_encoders/hf/hunyuan15/qwen25/model.py
lightx2v/models/input_encoders/hf/hunyuan15/siglip/__init__.py
lightx2v/models/input_encoders/hf/hunyuan15/siglip/model.py
lightx2v/models/input_encoders/hf/qwen25/qwen25_vlforconditionalgeneration.py
lightx2v/models/input_encoders/hf/seko_audio/audio_adapter.py
lightx2v/models/input_encoders/hf/seko_audio/audio_encoder.py
lightx2v/models/input_encoders/hf/vace/vace_processor.py
lightx2v/models/input_encoders/hf/wan/matrix_game2/__init__.py
lightx2v/models/input_encoders/hf/wan/matrix_game2/clip.py
lightx2v/models/input_encoders/hf/wan/matrix_game2/conditions.py
lightx2v/models/input_encoders/hf/wan/matrix_game2/tokenizers.py
lightx2v/models/input_encoders/hf/wan/t5/__init__.py
lightx2v/models/input_encoders/hf/wan/t5/model.py
lightx2v/models/input_encoders/hf/wan/t5/tokenizer.py
lightx2v/models/input_encoders/hf/wan/xlm_roberta/__init__.py
lightx2v/models/input_encoders/hf/wan/xlm_roberta/model.py
lightx2v/models/networks/__init__.py
lightx2v/models/networks/hunyuan_video/__init__.py
lightx2v/models/networks/hunyuan_video/model.py
lightx2v/models/networks/hunyuan_video/infer/attn_no_pad.py
lightx2v/models/networks/hunyuan_video/infer/module_io.py
lightx2v/models/networks/hunyuan_video/infer/post_infer.py
lightx2v/models/networks/hunyuan_video/infer/pre_infer.py
lightx2v/models/networks/hunyuan_video/infer/transformer_infer.py
lightx2v/models/networks/hunyuan_video/infer/triton_ops.py
lightx2v/models/networks/hunyuan_video/infer/feature_caching/__init__.py
lightx2v/models/networks/hunyuan_video/infer/feature_caching/transformer_infer.py
lightx2v/models/networks/hunyuan_video/infer/offload/__init__.py
lightx2v/models/networks/hunyuan_video/infer/offload/transformer_infer.py
lightx2v/models/networks/hunyuan_video/weights/post_weights.py
lightx2v/models/networks/hunyuan_video/weights/pre_weights.py
lightx2v/models/networks/hunyuan_video/weights/transformer_weights.py
lightx2v/models/networks/qwen_image/model.py
lightx2v/models/networks/qwen_image/infer/post_infer.py
lightx2v/models/networks/qwen_image/infer/pre_infer.py
lightx2v/models/networks/qwen_image/infer/transformer_infer.py
lightx2v/models/networks/qwen_image/infer/offload/__init__.py
lightx2v/models/networks/qwen_image/infer/offload/transformer_infer.py
lightx2v/models/networks/qwen_image/weights/post_weights.py
lightx2v/models/networks/qwen_image/weights/pre_weights.py
lightx2v/models/networks/qwen_image/weights/transformer_weights.py
lightx2v/models/networks/wan/animate_model.py
lightx2v/models/networks/wan/audio_model.py
lightx2v/models/networks/wan/causvid_model.py
lightx2v/models/networks/wan/distill_model.py
lightx2v/models/networks/wan/lora_adapter.py
lightx2v/models/networks/wan/matrix_game2_model.py
lightx2v/models/networks/wan/model.py
lightx2v/models/networks/wan/sf_model.py
lightx2v/models/networks/wan/vace_model.py
lightx2v/models/networks/wan/infer/module_io.py
lightx2v/models/networks/wan/infer/post_infer.py
lightx2v/models/networks/wan/infer/pre_infer.py
lightx2v/models/networks/wan/infer/transformer_infer.py
lightx2v/models/networks/wan/infer/utils.py
lightx2v/models/networks/wan/infer/animate/pre_infer.py
lightx2v/models/networks/wan/infer/animate/transformer_infer.py
lightx2v/models/networks/wan/infer/audio/post_infer.py
lightx2v/models/networks/wan/infer/audio/pre_infer.py
lightx2v/models/networks/wan/infer/audio/transformer_infer.py
lightx2v/models/networks/wan/infer/causvid/__init__.py
lightx2v/models/networks/wan/infer/causvid/transformer_infer.py
lightx2v/models/networks/wan/infer/feature_caching/__init__.py
lightx2v/models/networks/wan/infer/feature_caching/transformer_infer.py
lightx2v/models/networks/wan/infer/matrix_game2/posemb_layers.py
lightx2v/models/networks/wan/infer/matrix_game2/pre_infer.py
lightx2v/models/networks/wan/infer/matrix_game2/transformer_infer.py
lightx2v/models/networks/wan/infer/offload/__init__.py
lightx2v/models/networks/wan/infer/offload/transformer_infer.py
lightx2v/models/networks/wan/infer/self_forcing/__init__.py
lightx2v/models/networks/wan/infer/self_forcing/pre_infer.py
lightx2v/models/networks/wan/infer/self_forcing/transformer_infer.py
lightx2v/models/networks/wan/infer/vace/transformer_infer.py
lightx2v/models/networks/wan/weights/post_weights.py
lightx2v/models/networks/wan/weights/pre_weights.py
lightx2v/models/networks/wan/weights/transformer_weights.py
lightx2v/models/networks/wan/weights/animate/transformer_weights.py
lightx2v/models/networks/wan/weights/audio/transformer_weights.py
lightx2v/models/networks/wan/weights/matrix_game2/pre_weights.py
lightx2v/models/networks/wan/weights/matrix_game2/transformer_weights.py
lightx2v/models/networks/wan/weights/vace/transformer_weights.py
lightx2v/models/runners/__init__.py
lightx2v/models/runners/base_runner.py
lightx2v/models/runners/default_runner.py
lightx2v/models/runners/hunyuan_video/hunyuan_video_15_runner.py
lightx2v/models/runners/qwen_image/qwen_image_runner.py
lightx2v/models/runners/vsr/vsr_wrapper.py
lightx2v/models/runners/vsr/vsr_wrapper_hy15.py
lightx2v/models/runners/vsr/utils/TCDecoder.py
lightx2v/models/runners/vsr/utils/utils.py
lightx2v/models/runners/wan/__init__.py
lightx2v/models/runners/wan/wan_animate_runner.py
lightx2v/models/runners/wan/wan_audio_runner.py
lightx2v/models/runners/wan/wan_distill_runner.py
lightx2v/models/runners/wan/wan_matrix_game2_runner.py
lightx2v/models/runners/wan/wan_runner.py
lightx2v/models/runners/wan/wan_sf_runner.py
lightx2v/models/runners/wan/wan_vace_runner.py
lightx2v/models/schedulers/__init__.py
lightx2v/models/schedulers/scheduler.py
lightx2v/models/schedulers/hunyuan_video/__init__.py
lightx2v/models/schedulers/hunyuan_video/posemb_layers.py
lightx2v/models/schedulers/hunyuan_video/scheduler.py
lightx2v/models/schedulers/hunyuan_video/feature_caching/__init__.py
lightx2v/models/schedulers/hunyuan_video/feature_caching/scheduler.py
lightx2v/models/schedulers/qwen_image/scheduler.py
lightx2v/models/schedulers/wan/scheduler.py
lightx2v/models/schedulers/wan/audio/scheduler.py
lightx2v/models/schedulers/wan/changing_resolution/scheduler.py
lightx2v/models/schedulers/wan/feature_caching/scheduler.py
lightx2v/models/schedulers/wan/self_forcing/scheduler.py
lightx2v/models/schedulers/wan/step_distill/scheduler.py
lightx2v/models/vfi/rife/rife_comfyui_wrapper.py
lightx2v/models/vfi/rife/model/loss.py
lightx2v/models/vfi/rife/model/warplayer.py
lightx2v/models/vfi/rife/model/pytorch_msssim/__init__.py
lightx2v/models/vfi/rife/train_log/IFNet_HDv3.py
lightx2v/models/vfi/rife/train_log/RIFE_HDv3.py
lightx2v/models/vfi/rife/train_log/refine.py
lightx2v/models/video_encoders/__init__.py
lightx2v/models/video_encoders/hf/__init__.py
lightx2v/models/video_encoders/hf/tae.py
lightx2v/models/video_encoders/hf/vid_recon.py
lightx2v/models/video_encoders/hf/hunyuanvideo15/__init__.py
lightx2v/models/video_encoders/hf/hunyuanvideo15/hunyuanvideo_15_vae.py
lightx2v/models/video_encoders/hf/hunyuanvideo15/lighttae_hy15.py
lightx2v/models/video_encoders/hf/qwen_image/__init__.py
lightx2v/models/video_encoders/hf/qwen_image/vae.py
lightx2v/models/video_encoders/hf/wan/__init__.py
lightx2v/models/video_encoders/hf/wan/vae.py
lightx2v/models/video_encoders/hf/wan/vae_2_2.py
lightx2v/models/video_encoders/hf/wan/vae_sf.py
lightx2v/models/video_encoders/hf/wan/vae_tiny.py
lightx2v/server/__init__.py
lightx2v/server/__main__.py
lightx2v/server/api.py
lightx2v/server/audio_utils.py
lightx2v/server/config.py
lightx2v/server/distributed_utils.py
lightx2v/server/image_utils.py
lightx2v/server/main.py
lightx2v/server/run_server.py
lightx2v/server/schema.py
lightx2v/server/service.py
lightx2v/server/task_manager.py
lightx2v/server/metrics/__init__.py
lightx2v/server/metrics/metrics.py
lightx2v/server/metrics/monitor.py
lightx2v/utils/__init__.py
lightx2v/utils/async_io.py
lightx2v/utils/custom_compiler.py
lightx2v/utils/envs.py
lightx2v/utils/generate_task_id.py
lightx2v/utils/global_paras.py
lightx2v/utils/input_info.py
lightx2v/utils/lockable_dict.py
lightx2v/utils/memory_profiler.py
lightx2v/utils/print_atten_score.py
lightx2v/utils/profiler.py
lightx2v/utils/prompt_enhancer.py
lightx2v/utils/quant_utils.py
lightx2v/utils/registry_factory.py
lightx2v/utils/service_utils.py
lightx2v/utils/set_config.py
lightx2v/utils/utils.py
numpy
scipy
torch<=2.8.0
torchvision<=0.23.0
torchaudio<=2.8.0
diffusers
transformers
tokenizers
tqdm
accelerate
safetensors
opencv-python
imageio
imageio-ffmpeg
einops
loguru
qtorch
ftfy
gradio
aiohttp
pydantic
prometheus-client
gguf
fastapi
uvicorn
PyJWT
requests
aio-pika
asyncpg>=0.27.0
aioboto3>=12.0.0
alibabacloud_dypnsapi20170525==1.2.2
redis==6.4.0
tos
decord
av