@@ -19,22 +19,6 @@ This document provides detailed instructions for deploying LightX2V locally on W
-**CUDA**: 12.4 or higher version
-**Dependencies**: Refer to LightX2V project's requirements_win.txt
### Installation Steps
1. **Clone Project**
```cmd
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V
```
2. **Install Dependencies**
```cmd
pip install -r requirements_win.txt
```
3. **Download Models**
Refer to the [Model Download Guide](../getting_started/quickstart.md) to download the required models; a hypothetical download command is sketched below.
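The sketch below is only for illustration: it assumes you use `huggingface-cli`, and the `Wan-AI/Wan2.1-I2V-14B-480P` repository ID and `.\models` target folder are assumptions, not the guide's prescribed layout. Follow the Model Download Guide for the exact models and paths.
```cmd
REM Hypothetical example: fetch one model with huggingface-cli
REM (repo ID and target folder are assumptions; see the Model Download Guide)
pip install -U "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-I2V-14B-480P --local-dir .\models\Wan2.1-I2V-14B-480P
```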
## 🎯 Usage Methods
### Method 1: Using Batch File Inference
...
...
@@ -113,42 +97,4 @@ Double-click to run the `start_lightx2v.bat` file, the script will:
### Method 3: Using ComfyUI Inference
This guide explains how to download and use the portable LightX2V-ComfyUI environment, which lets you skip manual environment configuration. It is intended for users who want to quickly try accelerated video generation with LightX2V on Windows.
The portable environment packages all Python runtime dependencies, including the code and dependencies for both ComfyUI and LightX2V. After downloading, simply extract the archive and it is ready to use.
After extraction, the directory structure is as follows:
```
└── run_nvidia_gpu.bat    # Windows startup script (double-click to start)
```
#### Start ComfyUI
Double-click the `run_nvidia_gpu.bat` file. A Command Prompt window will open and run the program; the first startup may take a while, so please be patient. Once startup completes, your browser will open automatically and display the ComfyUI frontend interface.
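For reference, in standard ComfyUI portable builds the startup script is roughly equivalent to the commands below; this is a sketch assuming the usual `python_embeded` layout, and the script shipped in this package may differ slightly. By default the ComfyUI server listens on `http://127.0.0.1:8188`, which is the page the browser opens.
```cmd
REM Rough equivalent of run_nvidia_gpu.bat in typical ComfyUI portable builds (assumed layout)
REM The embedded Python runs ComfyUI; "pause" keeps the window open after the server exits
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
pause
```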

The plugin used by LightX2V-ComfyUI is [ComfyUI-Lightx2vWrapper](https://github.com/ModelTC/ComfyUI-Lightx2vWrapper). Example workflows can be obtained from this project.
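If you need to update the bundled plugin or pull the latest example workflows manually, one possible approach is sketched below; it assumes git is installed and that the plugin lives in the standard `ComfyUI\custom_nodes` folder of the extracted package. Workflow JSON files from the repository can then be loaded by dragging them into the ComfyUI web interface.
```cmd
REM Hypothetical manual update of the bundled wrapper plugin
REM (assumes git is available and the standard custom_nodes layout)
cd ComfyUI\custom_nodes\ComfyUI-Lightx2vWrapper
git pull
```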
#### Tested Graphics Cards (offload mode)
- Tested model: `Wan2.1-I2V-14B-480P`
| GPU Model | Task Type | VRAM Capacity | Actual Max VRAM Usage | Actual Max RAM Usage |