# LightX2V Quick Start Guide

Welcome to LightX2V! This guide will help you quickly set up the environment and start using LightX2V for video generation.

## 📋 Table of Contents

- [System Requirements](#system-requirements)
- [Linux Environment Setup](#linux-environment-setup)
  - [Docker Environment (Recommended)](#docker-environment-recommended)
  - [Conda Environment Setup](#conda-environment-setup)
- [Windows Environment Setup](#windows-environment-setup)
- [Inference Usage](#inference-usage)

## 🚀 System Requirements

- **Operating System**: Linux (Ubuntu 18.04+) or Windows 10/11
- **Python**: 3.10 or higher
- **GPU**: NVIDIA GPU with CUDA support, at least 8GB VRAM
- **Memory**: 16GB or more recommended
- **Storage**: At least 50GB available space

## 🐧 Linux Environment Setup

### 🐳 Docker Environment (Recommended)

We strongly recommend using the Docker environment, which is the simplest and fastest installation method.

#### 1. Pull Image

Visit LightX2V's [Docker Hub](https://hub.docker.com/r/lightx2v/lightx2v/tags) and select a tag with the latest date, such as `25080104`:

```bash
# Pull the latest version of LightX2V image
docker pull lightx2v/lightx2v:25080104
```

If you want to use `SageAttention`, choose an image tag with the `-SageSmXX` suffix. The suffix must match your GPU architecture:

1. A100: `-SageSm80`
2. RTX 30 series: `-SageSm86`
3. RTX 40 series: `-SageSm89`
4. H100: `-SageSm90`
5. RTX 50 series: `-SageSm120`

For example, to use `SageAttention` on an RTX 4090 or an H100, pull the corresponding image:

```bash
# For 4090
docker pull lightx2v/lightx2v:25080104-SageSm89
# For H100
docker pull lightx2v/lightx2v:25080104-SageSm90
```

#### 2. Run Container

```bash
docker run --gpus all -itd --ipc=host --name [container_name] -v [mount_settings] --entrypoint /bin/bash [image_id]
```
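
For reference, a filled-in command might look like the following; the container name, mount path, and image tag are illustrative placeholders, so substitute your own:

```bash
# Illustrative example only: adjust the container name, mount path, and image tag to your setup.
docker run --gpus all -itd --ipc=host \
    --name lightx2v \
    -v /data/models:/workspace/models \
    --entrypoint /bin/bash \
    lightx2v/lightx2v:25080104

# Attach to the running container
docker exec -it lightx2v /bin/bash
```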

#### 3. Mirror for Mainland China (Optional)

If you are in mainland China and pulling from Docker Hub is slow or unstable, you can pull the image from Aliyun instead:

```bash
# Replace [tag] with the desired image tag to download
docker pull registry.cn-hangzhou.aliyuncs.com/yongyang/lightx2v:[tag]

# For example, download 25080104
docker pull registry.cn-hangzhou.aliyuncs.com/yongyang/lightx2v:25080104

# For example, download 25080104-SageSm89
docker pull registry.cn-hangzhou.aliyuncs.com/yongyang/lightx2v:25080104-SageSm89

# For example, download 25080104-SageSm90
docker pull registry.cn-hangzhou.aliyuncs.com/yongyang/lightx2v:25080104-SageSm90
```

### 🐍 Conda Environment Setup

If you prefer to set up the environment yourself using Conda, please follow these steps:

#### Step 1: Clone Repository

```bash
# Download project code
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V
```

#### Step 2: Create Conda Virtual Environment

```bash
# Create and activate conda environment
conda create -n lightx2v python=3.12 -y
conda activate lightx2v
```

#### Step 3: Install Dependencies

```bash
# Install basic dependencies
pip install -r requirements.txt
```

> 💡 **Note**: The Hunyuan model requires `transformers` 4.45.2. If you don't plan to run the Hunyuan model, you can ignore this version restriction.

#### Step 4: Install Attention Operators

**Option A: Flash Attention 2**
```bash
git clone https://github.com/Dao-AILab/flash-attention.git --recursive
cd flash-attention && python setup.py install
```

**Option B: Flash Attention 3 (for Hopper architecture GPUs)**
```bash
# Build from the hopper subdirectory of the flash-attention repository cloned in Option A
cd flash-attention/hopper && python setup.py install
```

**Option C: SageAttention 2 (Recommended)**
```bash
git clone https://github.com/thu-ml/SageAttention.git
cd SageAttention && python setup.py install
```

## 🪟 Windows Environment Setup

On Windows, only the Conda-based setup is supported. Please follow these steps:

### 🐍 Conda Environment Setup

#### Step 1: Check CUDA Version

First, confirm your GPU driver and CUDA version:

```cmd
nvidia-smi
```

Note the **CUDA Version** shown in the output; the packages installed in later steps must match it.

#### Step 2: Create Python Environment

```cmd
# Create new environment (Python 3.12 recommended)
conda create -n lightx2v python=3.12 -y

# Activate environment
conda activate lightx2v
```

> 💡 **Note**: Python 3.10 or higher is recommended for best compatibility.

#### Step 3: Install PyTorch Framework

**Method 1: Download Official Wheel Package (Recommended)**

1. Visit the [PyTorch Official Download Page](https://download.pytorch.org/whl/torch/)
2. Select a wheel package that matches:
   - **Python version**: the version in your environment
   - **CUDA version**: your GPU driver's CUDA version
   - **Platform**: Windows (win_amd64)

**Example (Python 3.12 + PyTorch 2.6 + CUDA 12.4):**

```cmd
# Download and install PyTorch
pip install torch-2.6.0+cu124-cp312-cp312-win_amd64.whl

# Install supporting packages
pip install torchvision==0.21.0 torchaudio==2.6.0
```

**Method 2: Direct Installation via pip**

```cmd
# CUDA 12.4 version example
pip install torch==2.6.0+cu124 torchvision==0.21.0+cu124 torchaudio==2.6.0+cu124 --index-url https://download.pytorch.org/whl/cu124
```
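
After installation, you can optionally run a quick sanity check to confirm that PyTorch detects your GPU:

```cmd
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```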

#### Step 4: Install Windows Version vLLM

Download a wheel package from [vllm-windows releases](https://github.com/SystemPanic/vllm-windows/releases) that matches your:

- Python version
- PyTorch version
- CUDA version

```cmd
# Install vLLM (please adjust according to actual filename)
pip install vllm-0.9.1+cu124-cp312-cp312-win_amd64.whl
```
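
Optionally, confirm that vLLM imports correctly and reports the expected version:

```cmd
python -c "import vllm; print(vllm.__version__)"
```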

#### Step 5: Install Attention Mechanism Operators

**Option A: Flash Attention 2**

```cmd
pip install flash-attn==2.7.2.post1
```

**Option B: SageAttention 2 (Strongly Recommended)**

**Download sources (community Windows builds):**
- [woct0rdho/SageAttention releases](https://github.com/woct0rdho/SageAttention/releases)
- [sdbds/SageAttention-for-windows releases](https://github.com/sdbds/SageAttention-for-windows/releases)

```cmd
# Install SageAttention (please adjust according to actual filename)
pip install sageattention-2.1.1+cu126torch2.6.0-cp312-cp312-win_amd64.whl
```

> ⚠️ **Note**: The CUDA version in the SageAttention wheel name does not need to match exactly, but the Python and PyTorch versions must.

#### Step 6: Clone Repository

```cmd
# Clone project code
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V

# Install Windows-specific dependencies
pip install -r requirements_win.txt
```

## 🎯 Inference Usage

### 📥 Model Preparation

Before running inference, you need to download the model files. We recommend:

- **Download Source**: Download models from [LightX2V Official Hugging Face](https://huggingface.co/lightx2v/) or other open-source model repositories
- **Storage Location**: It's recommended to store models on SSD disks for better read performance
- **Available Models**: Including Wan2.1-I2V, Wan2.1-T2V, and other models supporting different resolutions and functionalities
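
For example, one of the models listed above can be fetched with `huggingface-cli`; the repository name and target directory below are illustrative, so pick the actual model you need from the LightX2V Hugging Face page:

```bash
# Illustrative example: replace the repo id and local directory with your own choices.
pip install -U "huggingface_hub[cli]"
huggingface-cli download lightx2v/Wan2.1-T2V-14B-Lightning --local-dir /path/to/models/Wan2.1-T2V-14B-Lightning
```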

### 📁 Configuration Files and Scripts

The configuration files used for inference are available [here](https://github.com/ModelTC/LightX2V/tree/main/configs), and scripts are available [here](https://github.com/ModelTC/LightX2V/tree/main/scripts).

Set the path of the downloaded model in the run script. Besides the arguments passed on the command line, the configuration file specified by `--config_json` contains additional required parameters; adjust them as needed.
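
As a rough sketch, the edit in the run script typically amounts to pointing a couple of path variables at your local checkout and model directory; the variable names below are hypothetical and may differ from the actual script, so check the script you run:

```bash
# Hypothetical illustration; the real variable names in scripts/wan/run_wan_t2v.sh may differ.
lightx2v_path=/path/to/LightX2V
model_path=/path/to/models/Wan2.1-T2V-14B
```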

### 🚀 Start Inference

#### Linux Environment

```bash
# Run after modifying the path in the script
bash scripts/wan/run_wan_t2v.sh
```

#### Windows Environment

```cmd
# Use Windows batch script
scripts\win\run_wan_t2v.bat
```

## 📞 Get Help

If you encounter problems during installation or usage, please:

1. Search for related issues in [GitHub Issues](https://github.com/ModelTC/LightX2V/issues)
2. Submit a new Issue describing your problem

---

🎉 **Congratulations!** You have successfully set up the LightX2V environment and can now start enjoying video generation!