# Model Formats and Loading Guide

## 📖 Overview

LightX2V is a flexible video generation inference framework that supports multiple model sources and formats, giving users a wide range of options:

- **Wan official models**: directly compatible with the complete models officially released for Wan2.1 and Wan2.2
- **Single-file models**: supports single-file models released by LightX2V (including quantized versions)
- **LoRA models**: supports loading the distillation LoRAs released by LightX2V

This document explains how to use each model format, along with the relevant configuration parameters and best practices.

---

## 🗂️ Format 1: Wan Official Models

### Model Repositories
- [Wan2.1 Collection](https://huggingface.co/collections/Wan-AI/wan21-68ac4ba85372ae5a8e282a1b)
- [Wan2.2 Collection](https://huggingface.co/collections/Wan-AI/wan22-68ac4ae80a8b477e79636fc8)

### Model Characteristics
- **Officially released**: complete models published by Wan-AI, offering the highest quality
- **Complete components**: include every required component (DIT, T5, CLIP, VAE)
- **Original precision**: BF16/FP32 weights with no quantization loss
- **Broad compatibility**: fully compatible with the official Wan toolchain

### Wan2.1 Official Models

#### Directory Structure

Using [Wan2.1-I2V-14B-720P](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P) as an example:

```
Wan2.1-I2V-14B-720P/
├── diffusion_pytorch_model-00001-of-00007.safetensors   # DIT model shard 1
├── diffusion_pytorch_model-00002-of-00007.safetensors   # DIT model shard 2
├── diffusion_pytorch_model-00003-of-00007.safetensors   # DIT model shard 3
├── diffusion_pytorch_model-00004-of-00007.safetensors   # DIT model shard 4
├── diffusion_pytorch_model-00005-of-00007.safetensors   # DIT model shard 5
├── diffusion_pytorch_model-00006-of-00007.safetensors   # DIT model shard 6
├── diffusion_pytorch_model-00007-of-00007.safetensors   # DIT model shard 7
├── diffusion_pytorch_model.safetensors.index.json       # shard index file
├── models_t5_umt5-xxl-enc-bf16.pth                      # T5 text encoder
├── models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth  # CLIP encoder
├── Wan2.1_VAE.pth                                       # VAE encoder/decoder
├── config.json                                          # model configuration
├── xlm-roberta-large/                                   # CLIP tokenizer
├── google/                                              # T5 tokenizer
├── assets/
└── examples/
```

#### Usage

```bash
# Download the model
huggingface-cli download Wan-AI/Wan2.1-I2V-14B-720P \
    --local-dir ./models/Wan2.1-I2V-14B-720P

# Configure the launch script
model_path=./models/Wan2.1-I2V-14B-720P
lightx2v_path=/path/to/LightX2V

# Run inference
cd LightX2V/scripts
bash wan/run_wan_i2v.sh
```
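
Since the 14B DIT weights are split across seven shards, an interrupted download can leave some of them missing. Below is a hedged sanity-check sketch that assumes the standard Hugging Face shard-index layout (a `weight_map` entry in the index JSON):

```python
import json
from pathlib import Path

# Verify that every shard referenced by the index file is actually on disk.
model_dir = Path("./models/Wan2.1-I2V-14B-720P")
index = json.loads((model_dir / "diffusion_pytorch_model.safetensors.index.json").read_text())
expected_shards = sorted(set(index["weight_map"].values()))
missing = [s for s in expected_shards if not (model_dir / s).exists()]
print(f"{len(expected_shards)} shards expected, missing: {missing or 'none'}")
```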

### Wan2.2 Official Models

#### Directory Structure

Using [Wan2.2-I2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B) as an example:

```
Wan2.2-I2V-A14B/
├── high_noise_model/                                    # high-noise model directory
│   ├── diffusion_pytorch_model-00001-of-00009.safetensors
│   ├── diffusion_pytorch_model-00002-of-00009.safetensors
│   ├── ...
│   ├── diffusion_pytorch_model-00009-of-00009.safetensors
│   └── diffusion_pytorch_model.safetensors.index.json
├── low_noise_model/                                     # low-noise model directory
│   ├── diffusion_pytorch_model-00001-of-00009.safetensors
│   ├── diffusion_pytorch_model-00002-of-00009.safetensors
│   ├── ...
│   ├── diffusion_pytorch_model-00009-of-00009.safetensors
│   └── diffusion_pytorch_model.safetensors.index.json
├── models_t5_umt5-xxl-enc-bf16.pth                      # T5 text encoder
├── Wan2.1_VAE.pth                                       # VAE encoder/decoder
├── configuration.json                                   # model configuration
├── google/                                              # T5 tokenizer
├── assets/                                              # example assets (optional)
└── examples/                                            # example files (optional)
```

#### Usage

```bash
# Download the model
huggingface-cli download Wan-AI/Wan2.2-I2V-A14B \
    --local-dir ./models/Wan2.2-I2V-A14B

# Configure the launch script
model_path=./models/Wan2.2-I2V-A14B
lightx2v_path=/path/to/LightX2V

# Run inference
cd LightX2V/scripts
bash wan22/run_wan22_moe_i2v.sh
```

### Available Model List

#### Wan2.1 Official Model List

| Model Name | Download Link |
|---------|----------|
| Wan2.1-I2V-14B-720P | [Link](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P) |
| Wan2.1-I2V-14B-480P | [Link](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P) |
| Wan2.1-T2V-14B | [Link](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) |
| Wan2.1-T2V-1.3B | [Link](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B) |
| Wan2.1-FLF2V-14B-720P | [Link](https://huggingface.co/Wan-AI/Wan2.1-FLF2V-14B-720P) |
| Wan2.1-VACE-14B | [Link](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) |
| Wan2.1-VACE-1.3B | [Link](https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B) |

#### Wan2.2 Official Model List

| Model Name | Download Link |
|---------|----------|
| Wan2.2-I2V-A14B | [Link](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B) |
| Wan2.2-T2V-A14B | [Link](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B) |
| Wan2.2-TI2V-5B | [Link](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B) |
| Wan2.2-Animate-14B | [Link](https://huggingface.co/Wan-AI/Wan2.2-Animate-14B) |

### Usage Tips

> 💡 **Quantized models**: to use quantized models, convert them yourself with the [model conversion scripts](https://github.com/ModelTC/LightX2V/blob/main/tools/convert/readme_zh.md), or directly use the pre-converted quantized models from Format 2 below.
>
> 💡 **VRAM optimization**: for an RTX 4090 (24 GB) or devices with less VRAM, combine quantization with CPU offloading:
> - Quantization: see the [quantization documentation](../method_tutorials/quantization.md)
> - CPU offloading: see the [parameter offloading documentation](../method_tutorials/offload.md)
> - Wan2.1 configs: see the [offload config files](https://github.com/ModelTC/LightX2V/tree/main/configs/offload)
> - Wan2.2 configs: see the configs ending in `4090` in the [wan22 config directory](https://github.com/ModelTC/LightX2V/tree/main/configs/wan22)

---

## 🗂️ Format 2: LightX2V Single-File Models (Recommended)

### Model Repositories
- [Wan2.1-LightX2V](https://huggingface.co/lightx2v/Wan2.1-Distill-Models)
- [Wan2.2-LightX2V](https://huggingface.co/lightx2v/Wan2.2-Distill-Models)

### Model Characteristics
- **Single-file management**: one safetensors file, easy to manage and deploy
- **Multiple precisions**: available in original precision, FP8, and INT8 versions
- **Distillation acceleration**: supports fast 4-step inference
- **Tool compatibility**: works with ComfyUI and other tools

**Examples**
- `wan2.1_i2v_720p_lightx2v_4step.safetensors` - 720P image-to-video, original precision
- `wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors` - 720P image-to-video, FP8 quantized
- `wan2.1_i2v_480p_int8_lightx2v_4step.safetensors` - 480P image-to-video, INT8 quantized
- ...

### Wan2.1 Single-File Models

#### Scenario A: Download a Single Model File

**Step 1: Choose and download the model**

```bash
# Create the model directory
mkdir -p ./models/wan2.1_i2v_720p

# Download the 720P image-to-video model (original precision)
huggingface-cli download lightx2v/Wan2.1-Distill-Models \
    --local-dir ./models/wan2.1_i2v_720p \
    --include "wan2.1_i2v_720p_lightx2v_4step.safetensors"
```

**Step 2: Configure the launch script**

```bash
# Set in the launch script (point to the directory containing the model file)
model_path=./models/wan2.1_i2v_720p
lightx2v_path=/path/to/LightX2V

# Run the script
cd LightX2V/scripts
bash wan/run_wan_i2v_distill_4step_cfg.sh
```

> 💡 **Tip**: when the directory contains only one model file, LightX2V loads it automatically.
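
To confirm the downloaded file is intact and see what LightX2V will pick up, you can list its contents with the `safetensors` library without loading any weights into memory. A minimal sketch using the path from the example above:

```python
from safetensors import safe_open

# List a few tensor names and shapes without reading the weight data itself.
path = "./models/wan2.1_i2v_720p/wan2.1_i2v_720p_lightx2v_4step.safetensors"
with safe_open(path, framework="pt", device="cpu") as f:
    keys = list(f.keys())
    print(f"{len(keys)} tensors found")
    for name in keys[:5]:
        print(name, f.get_slice(name).get_shape())
```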

#### Scenario B: Download Multiple Model Files

When you download several models of different precisions into the same directory, you must explicitly specify which one to use in the config file.

**Step 1: Download multiple models**

```bash
# Create the model directory
mkdir -p ./models/wan2.1_i2v_720p_multi

# Download the original-precision model
huggingface-cli download lightx2v/Wan2.1-Distill-Models \
    --local-dir ./models/wan2.1_i2v_720p_multi \
    --include "wan2.1_i2v_720p_lightx2v_4step.safetensors"

# Download the FP8 quantized model
huggingface-cli download lightx2v/Wan2.1-Distill-Models \
    --local-dir ./models/wan2.1_i2v_720p_multi \
    --include "wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors"

# Download the INT8 quantized model
huggingface-cli download lightx2v/Wan2.1-Distill-Models \
    --local-dir ./models/wan2.1_i2v_720p_multi \
    --include "wan2.1_i2v_720p_int8_lightx2v_4step.safetensors"
```

**Directory structure**

```
wan2.1_i2v_720p_multi/
├── wan2.1_i2v_720p_lightx2v_4step.safetensors                    # original precision
├── wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors    # FP8 quantized
├── wan2.1_i2v_720p_int8_lightx2v_4step.safetensors               # INT8 quantized
└── t5/clip/vae/config.json/xlm-roberta-large/google and other components   # must be organized manually
```

**Step 2: Specify the model in the config file**

Edit the config file (e.g. `configs/distill/wan_i2v_distill_4step_cfg.json`):

```json
{
    // Use the original-precision model
    "dit_original_ckpt": "./models/wan2.1_i2v_720p_multi/wan2.1_i2v_720p_lightx2v_4step.safetensors",

    // Or use the FP8 quantized model
    // "dit_quantized_ckpt": "./models/wan2.1_i2v_720p_multi/wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors",
    // "dit_quantized": true,
    // "dit_quant_scheme": "fp8-vllm",

    // Or use the INT8 quantized model
    // "dit_quantized_ckpt": "./models/wan2.1_i2v_720p_multi/wan2.1_i2v_720p_int8_lightx2v_4step.safetensors",
    // "dit_quantized": true,
    // "dit_quant_scheme": "int8-vllm",

    // other settings...
}
```
**Step 3: Run inference**

```bash
cd LightX2V/scripts
bash wan/run_wan_i2v_distill_4step_cfg.sh
```

### Usage Tips

> 💡 **Configuration parameters**:
> - **dit_original_ckpt**: path to an original-precision model (BF16/FP32/FP16)
> - **dit_quantized_ckpt**: path to a quantized model (FP8/INT8); must be used together with the `dit_quantized` and `dit_quant_scheme` parameters
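
If you are unsure which precision a given single-file checkpoint holds (and therefore whether `dit_quantized` needs to be set), you can read just the safetensors header, which records every tensor's dtype without touching the weight data. This is a hedged sketch based on the published safetensors file format (an 8-byte little-endian header length followed by a JSON header); the exact dtype mix in LightX2V's files may vary:

```python
import json
import struct
from collections import Counter

def header_dtype_summary(path: str) -> Counter:
    """Count tensor dtypes by reading only the safetensors JSON header."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # little-endian u64 header length
        header = json.loads(f.read(header_len))
    return Counter(v["dtype"] for k, v in header.items() if k != "__metadata__")

# Example: an FP8 checkpoint is typically dominated by F8_E4M3 entries, an INT8 one
# by I8 (plus higher-precision scale tensors), and the original file by BF16.
print(header_dtype_summary(
    "./models/wan2.1_i2v_720p_multi/wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors"))
```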

### Wan2.2 Single-File Models

#### Required Directory Structure

When using Wan2.2 single-file models, you need to create the following directory layout manually:

```
wan2.2_models/
├── high_noise_model/                                     # high-noise model directory (required)
│   └── wan2.2_i2v_A14b_high_noise_lightx2v_4step.safetensors   # high-noise model file
├── low_noise_model/                                      # low-noise model directory (required)
│   └── wan2.2_i2v_A14b_low_noise_lightx2v_4step.safetensors    # low-noise model file
└── t5/vae/config.json/xlm-roberta-large/google and other components   # must be organized manually
```

#### Scenario A: One Model File per Directory

```bash
# Create the required subdirectories
mkdir -p ./models/wan2.2_models/high_noise_model
mkdir -p ./models/wan2.2_models/low_noise_model

# Download the high-noise model into its directory
huggingface-cli download lightx2v/Wan2.2-Distill-Models \
    --local-dir ./models/wan2.2_models/high_noise_model \
    --include "wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors"

# Download the low-noise model into its directory
huggingface-cli download lightx2v/Wan2.2-Distill-Models \
    --local-dir ./models/wan2.2_models/low_noise_model \
    --include "wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors"

# Configure the launch script (point to the parent directory)
model_path=./models/wan2.2_models
lightx2v_path=/path/to/LightX2V

# Run the script
cd LightX2V/scripts
bash wan22/run_wan22_moe_i2v_distill.sh
```

> 💡 **Tip**: when each subdirectory contains only one model file, LightX2V loads it automatically.
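
Before launching, it can help to confirm the layout matches what auto-loading expects, i.e. exactly one `.safetensors` file under each noise branch. An illustrative check (not part of LightX2V itself):

```python
from pathlib import Path

# Sanity check for Scenario A: each noise branch should contain exactly one
# .safetensors file so that auto-loading is unambiguous.
root = Path("./models/wan2.2_models")
for sub in ("high_noise_model", "low_noise_model"):
    files = sorted((root / sub).glob("*.safetensors"))
    print(f"{sub}: {[f.name for f in files]}")
    if len(files) != 1:
        print(f"  -> expected exactly one model file, found {len(files)}; "
              "use the Scenario B config keys instead")
```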

#### Scenario B: Multiple Model Files per Directory

When you place several models of different precisions in the `high_noise_model/` and `low_noise_model/` directories, you must specify them explicitly in the config file.

```bash
# Create the directories
mkdir -p ./models/wan2.2_models_multi/high_noise_model
mkdir -p ./models/wan2.2_models_multi/low_noise_model

# Download multiple versions of the high-noise model
huggingface-cli download lightx2v/Wan2.2-Distill-Models \
    --local-dir ./models/wan2.2_models_multi/high_noise_model \
    --include "wan2.2_i2v_A14b_high_noise_*.safetensors"

# Download multiple versions of the low-noise model
huggingface-cli download lightx2v/Wan2.2-Distill-Models \
    --local-dir ./models/wan2.2_models_multi/low_noise_model \
    --include "wan2.2_i2v_A14b_low_noise_*.safetensors"
```

**Directory structure**

```
wan2.2_models_multi/
├── high_noise_model/
│   ├── wan2.2_i2v_A14b_high_noise_lightx2v_4step.safetensors             # original precision
│   ├── wan2.2_i2v_A14b_high_noise_fp8_e4m3_lightx2v_4step.safetensors    # FP8 quantized
│   └── wan2.2_i2v_A14b_high_noise_int8_lightx2v_4step.safetensors        # INT8 quantized
└── low_noise_model/
    ├── wan2.2_i2v_A14b_low_noise_lightx2v_4step.safetensors              # original precision
    ├── wan2.2_i2v_A14b_low_noise_fp8_e4m3_lightx2v_4step.safetensors     # FP8 quantized
    └── wan2.2_i2v_A14b_low_noise_int8_lightx2v_4step.safetensors         # INT8 quantized
```

**Config file settings**

```json
{
    // Use the original-precision models
    "high_noise_original_ckpt": "./models/wan2.2_models_multi/high_noise_model/wan2.2_i2v_A14b_high_noise_lightx2v_4step.safetensors",
    "low_noise_original_ckpt": "./models/wan2.2_models_multi/low_noise_model/wan2.2_i2v_A14b_low_noise_lightx2v_4step.safetensors",

    // Or use the FP8 quantized models
    // "high_noise_quantized_ckpt": "./models/wan2.2_models_multi/high_noise_model/wan2.2_i2v_A14b_high_noise_fp8_e4m3_lightx2v_4step.safetensors",
    // "low_noise_quantized_ckpt": "./models/wan2.2_models_multi/low_noise_model/wan2.2_i2v_A14b_low_noise_fp8_e4m3_lightx2v_4step.safetensors",
    // "dit_quantized": true,
    // "dit_quant_scheme": "fp8-vllm"

    // Or use the INT8 quantized models
    // "high_noise_quantized_ckpt": "./models/wan2.2_models_multi/high_noise_model/wan2.2_i2v_A14b_high_noise_int8_lightx2v_4step.safetensors",
    // "low_noise_quantized_ckpt": "./models/wan2.2_models_multi/low_noise_model/wan2.2_i2v_A14b_low_noise_int8_lightx2v_4step.safetensors",
    // "dit_quantized": true,
    // "dit_quant_scheme": "int8-vllm"
}
```

### Usage Tips

> 💡 **Configuration parameters**:
> - **high_noise_original_ckpt** / **low_noise_original_ckpt**: paths to the original-precision models (BF16/FP32/FP16)
> - **high_noise_quantized_ckpt** / **low_noise_quantized_ckpt**: paths to the quantized models (FP8/INT8); must be used together with the `dit_quantized` and `dit_quant_scheme` parameters


### Available Model List

#### Wan2.1 Single-File Model List

**Image-to-Video Models (I2V)**

| File Name | Precision | Description |
|--------|------|------|
| `wan2.1_i2v_480p_lightx2v_4step.safetensors` | BF16 | 4-step model, original precision |
| `wan2.1_i2v_480p_scaled_fp8_e4m3_lightx2v_4step.safetensors` | FP8 | 4-step model, FP8 quantized |
| `wan2.1_i2v_480p_int8_lightx2v_4step.safetensors` | INT8 | 4-step model, INT8 quantized |
| `wan2.1_i2v_480p_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors` | FP8 | 4-step model, ComfyUI format |
| `wan2.1_i2v_720p_lightx2v_4step.safetensors` | BF16 | 4-step model, original precision |
| `wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors` | FP8 | 4-step model, FP8 quantized |
| `wan2.1_i2v_720p_int8_lightx2v_4step.safetensors` | INT8 | 4-step model, INT8 quantized |
| `wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors` | FP8 | 4-step model, ComfyUI format |

**Text-to-Video Models (T2V)**

| File Name | Precision | Description |
|--------|------|------|
| `wan2.1_t2v_14b_lightx2v_4step.safetensors` | BF16 | 4-step model, original precision |
| `wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step.safetensors` | FP8 | 4-step model, FP8 quantized |
| `wan2.1_t2v_14b_int8_lightx2v_4step.safetensors` | INT8 | 4-step model, INT8 quantized |
| `wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors` | FP8 | 4-step model, ComfyUI format |

#### Wan2.2 Single-File Model List

**Image-to-Video Models (I2V) - A14B Series**

| File Name | Precision | Description |
|--------|------|------|
| `wan2.2_i2v_A14b_high_noise_lightx2v_4step.safetensors` | BF16 | high-noise model, 4-step, original precision |
| `wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors` | FP8 | high-noise model, 4-step, FP8 quantized |
| `wan2.2_i2v_A14b_high_noise_int8_lightx2v_4step.safetensors` | INT8 | high-noise model, 4-step, INT8 quantized |
| `wan2.2_i2v_A14b_low_noise_lightx2v_4step.safetensors` | BF16 | low-noise model, 4-step, original precision |
| `wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors` | FP8 | low-noise model, 4-step, FP8 quantized |
| `wan2.2_i2v_A14b_low_noise_int8_lightx2v_4step.safetensors` | INT8 | low-noise model, 4-step, INT8 quantized |

> 💡 **Usage tips**:
> - Wan2.2 models use a dual-noise architecture: download both the high-noise (high_noise) and the low-noise (low_noise) model
> - See the "Wan2.2 Single-File Models" section above for how to organize the directories

---

## 🗂️ Format 3: LightX2V LoRA Models

LoRA (Low-Rank Adaptation) models provide a lightweight fine-tuning option: specific effects can be customized without modifying the base model.

### Model Repositories

- **Wan2.1 LoRA models**: [lightx2v/Wan2.1-Distill-Loras](https://huggingface.co/lightx2v/Wan2.1-Distill-Loras)
- **Wan2.2 LoRA models**: [lightx2v/Wan2.2-Distill-Loras](https://huggingface.co/lightx2v/Wan2.2-Distill-Loras)

### Usage

#### Option 1: Offline Merge

Merge the LoRA weights into the base model offline, producing a new, complete model file.

**Steps**

Follow the [model conversion documentation](https://github.com/ModelTC/lightx2v/tree/main/tools/convert/readme_zh.md) to perform the offline merge (a rough sketch of what the merge does is shown after the list below).

**Pros**
- ✅ No extra LoRA loading at inference time
- ✅ Better performance

**Cons**
- ❌ Requires extra storage space
- ❌ Switching to a different LoRA requires re-merging
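
For intuition only, the sketch below shows roughly what an offline merge does: for each LoRA pair it adds `strength * (alpha / rank) * up @ down` to the matching base weight and saves a new file. The key-name conventions used here (`.lora_up.weight`, `.lora_down.weight`, `.alpha`, and the mapping to the base `.weight` names) are assumptions for illustration; check the tensor names in your LoRA file and prefer the official conversion tool linked above for real use.

```python
import torch
from safetensors.torch import load_file, save_file

def merge_lora(base_path: str, lora_path: str, out_path: str, strength: float = 1.0) -> None:
    """Hedged sketch of an offline LoRA merge; key naming below is assumed, not guaranteed."""
    base = load_file(base_path)
    lora = load_file(lora_path)
    for key in lora:
        if not key.endswith(".lora_down.weight"):
            continue
        prefix = key[: -len(".lora_down.weight")]
        down = lora[key].float()                          # (rank, in_features)
        up = lora[prefix + ".lora_up.weight"].float()     # (out_features, rank)
        alpha = lora.get(prefix + ".alpha")
        scale = (alpha.item() / down.shape[0]) if alpha is not None else 1.0
        target = prefix + ".weight"                       # assumed base weight name
        if target in base:
            w = base[target]
            base[target] = (w.float() + strength * scale * (up @ down)).to(w.dtype)
    save_file(base, out_path)
```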

#### Option 2: Online Loading

Load LoRA weights dynamically at inference time, without modifying the base model.

**How LoRA is applied**

```python
# LoRA weight application formula:
#   W' = W + (alpha / rank) * B @ A
# where: B = up_proj   (out_features, rank)
#        A = down_proj (rank, in_features)
# `lora_down.shape[0]` is the LoRA rank; `alpha` comes from the config,
# while `weights_dict["alpha"]` is the value stored inside the LoRA file.

if weights_dict["alpha"] is not None:
    lora_alpha = weights_dict["alpha"] / lora_down.shape[0]
elif alpha is not None:
    lora_alpha = alpha / lora_down.shape[0]
else:
    lora_alpha = 1.0
```
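
Combining this with the `strength` setting from the config (see the parameter table below), the per-weight update looks roughly as follows; this is a self-contained, hedged sketch rather than LightX2V's actual loader:

```python
import torch

def apply_lora_weight(weight: torch.Tensor,      # base weight W, (out_features, in_features)
                      lora_up: torch.Tensor,     # B = up_proj, (out_features, rank)
                      lora_down: torch.Tensor,   # A = down_proj, (rank, in_features)
                      alpha: float | None = None,
                      strength: float = 1.0) -> torch.Tensor:
    """Return W' = W + strength * (alpha / rank) * B @ A (illustrative)."""
    rank = lora_down.shape[0]
    scale = (alpha / rank) if alpha is not None else 1.0
    delta = (lora_up.float() @ lora_down.float()) * scale * strength
    return (weight.float() + delta).to(weight.dtype)

# Toy example: a rank-8 LoRA applied to a 128x64 weight
w = torch.randn(128, 64, dtype=torch.bfloat16)
b, a = torch.randn(128, 8), torch.randn(8, 64)
w_prime = apply_lora_weight(w, b, a, alpha=8.0, strength=1.0)
print(w_prime.shape, w_prime.dtype)
```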
**Configuration**

**Wan2.1 LoRA configuration**

```json
{
  "lora_configs": [
    {
      "path": "wan2.1_i2v_lora_rank64_lightx2v_4step.safetensors",
      "strength": 1.0,
      "alpha": null
    }
  ]
}
```

**Wan2.2 LoRA configuration**

Because Wan2.2 uses a dual-model architecture (high noise / low noise), a LoRA must be configured for each of the two models:

```json
{
  "lora_configs": [
    {
      "name": "low_noise_model",
      "path": "wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step.safetensors",
      "strength": 1.0,
      "alpha": null
    },
    {
      "name": "high_noise_model",
      "path": "wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step.safetensors",
      "strength": 1.0,
      "alpha": null
    }
  ]
}
```
**Parameters**

| Parameter | Description | Default |
|------|------|--------|
| `path` | Path to the LoRA model file | required |
| `strength` | LoRA strength factor, range [0.0, 1.0] | 1.0 |
| `alpha` | LoRA scaling factor; `null` uses the value stored in the model file, or 1 if none is stored | null |
| `name` | (Wan2.2 only) which model the LoRA is applied to | required |

**Pros**
- ✅ Switch between different LoRAs flexibly
- ✅ Saves storage space
- ✅ LoRA strength can be adjusted dynamically

**Cons**
- ❌ Extra loading time at inference
- ❌ Slightly higher VRAM usage

---
## 📚 Related Resources

### Official Repositories
- [LightX2V GitHub](https://github.com/ModelTC/LightX2V)
- [LightX2V single-file model repository](https://huggingface.co/lightx2v/Wan2.1-Distill-Models)
- [Wan-AI official model repository](https://huggingface.co/Wan-AI)

### Model Download Links

**Wan2.1 series**
- [Wan2.1 Collection](https://huggingface.co/collections/Wan-AI/wan21-68ac4ba85372ae5a8e282a1b)

**Wan2.2 series**
- [Wan2.2 Collection](https://huggingface.co/collections/Wan-AI/wan22-68ac4ae80a8b477e79636fc8)

**LightX2V single-file models**
- [Wan2.1-Distill-Models](https://huggingface.co/lightx2v/Wan2.1-Distill-Models)
- [Wan2.2-Distill-Models](https://huggingface.co/lightx2v/Wan2.2-Distill-Models)

### Documentation Links
- [Quantization documentation](../method_tutorials/quantization.md)
- [Parameter offloading documentation](../method_tutorials/offload.md)
- [Example config files](https://github.com/ModelTC/LightX2V/tree/main/configs)

---

With this guide you should be able to:

✅ Understand all model formats supported by LightX2V
✅ Choose the right model and precision for your needs
✅ Download and organize model files correctly
✅ Configure the launch parameters and run inference successfully
✅ Troubleshoot common model-loading issues

If you have further questions, feel free to ask in [GitHub Issues](https://github.com/ModelTC/LightX2V/issues).