Commit 407b8508 authored by gushiqiao's avatar gushiqiao

Fix docs bug

parent 783b3a72
@@ -12,10 +12,10 @@ View all available models: [LightX2V Official Model Repository](https://huggingf
 ### Standard Directory Structure
-Using `Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V` as an example, the standard file structure is as follows:
+Using `Wan2.1-I2V-14B-480P-LightX2V` as an example, the standard file structure is as follows:
 ```
-Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V/
+Wan2.1-I2V-14B-480P-LightX2V/
 ├── fp8/ # FP8 quantized version (DIT/T5/CLIP)
 │   ├── block_xx.safetensors # DIT model FP8 quantized version
 │   ├── models_t5_umt5-xxl-enc-fp8.pth # T5 encoder FP8 quantized version
@@ -31,6 +31,33 @@ Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V/
 │   ├── taew2_1.pth # Lightweight VAE (optional)
 │   └── config.json # Model configuration file
 ├── original/ # Original precision version (DIT/T5/CLIP)
+│   ├── xx.safetensors # DIT model original precision version
+│   ├── models_t5_umt5-xxl-enc-bf16.pth # T5 encoder original precision version
+│   ├── models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth # CLIP encoder original precision version
+│   ├── Wan2.1_VAE.pth # VAE variational autoencoder
+│   ├── taew2_1.pth # Lightweight VAE (optional)
+│   └── config.json # Model configuration file
+```
+Using `Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V` as an example, the standard file structure is as follows:
+```
+Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V/
+├── distill_fp8/ # FP8 quantized version (DIT/T5/CLIP)
+│   ├── block_xx.safetensors # DIT model FP8 quantized version
+│   ├── models_t5_umt5-xxl-enc-fp8.pth # T5 encoder FP8 quantized version
+│   ├── clip-fp8.pth # CLIP encoder FP8 quantized version
+│   ├── Wan2.1_VAE.pth # VAE variational autoencoder
+│   ├── taew2_1.pth # Lightweight VAE (optional)
+│   └── config.json # Model configuration file
+├── distill_int8/ # INT8 quantized version (DIT/T5/CLIP)
+│   ├── block_xx.safetensors # DIT model INT8 quantized version
+│   ├── models_t5_umt5-xxl-enc-int8.pth # T5 encoder INT8 quantized version
+│   ├── clip-int8.pth # CLIP encoder INT8 quantized version
+│   ├── Wan2.1_VAE.pth # VAE variational autoencoder
+│   ├── taew2_1.pth # Lightweight VAE (optional)
+│   └── config.json # Model configuration file
+├── distill_models/ # Original precision version (DIT/T5/CLIP)
 │   ├── distill_model.safetensors # DIT model original precision version
 │   ├── models_t5_umt5-xxl-enc-bf16.pth # T5 encoder original precision version
 │   ├── models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth # CLIP encoder original precision version
@@ -148,24 +175,24 @@ python gradio_demo.py \
 # Use Hugging Face CLI to selectively download non-quantized version
 huggingface-cli download lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
   --local-dir ./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
-  --include "original/*"
+  --include "distill_models/*"
 ```
 ```bash
 # Use Hugging Face CLI to selectively download FP8 quantized version
 huggingface-cli download lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
   --local-dir ./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
-  --include "fp8/*"
+  --include "distill_fp8/*"
 ```
 ```bash
 # Use Hugging Face CLI to selectively download INT8 quantized version
 huggingface-cli download lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
   --local-dir ./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
-  --include "int8/*"
+  --include "distill_int8/*"
 ```
-> **Important Note**: When starting inference scripts or Gradio, the `model_path` parameter still needs to be specified as the complete path without the `--include` parameter. For example: `model_path=./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V`, not `./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V/int8`.
+> **Important Note**: When starting inference scripts or Gradio, the `model_path` parameter still needs to be specified as the complete path without the `--include` parameter. For example: `model_path=./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V`, not `./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V/distill_int8`.
 #### 2. Start Inference
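The `--include` filters in the hunks above map directly onto the `allow_patterns` argument of `huggingface_hub.snapshot_download`, so the same selective download can be scripted instead of shelled out. A minimal sketch, assuming the `huggingface_hub` package is installed; the helper names (`allow_patterns_for`, `fetch_variant`) are our illustration, not part of LightX2V:

```python
# Selective download of one precision variant of the StepDistill model,
# mirroring `huggingface-cli download ... --include "<variant>/*"`.
REPO_ID = "lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V"
VARIANTS = ("distill_models", "distill_fp8", "distill_int8")

def allow_patterns_for(variant: str) -> list[str]:
    """Map a variant subdirectory name to the glob used as allow_patterns."""
    if variant not in VARIANTS:
        raise ValueError(f"unknown variant {variant!r}; expected one of {VARIANTS}")
    return [f"{variant}/*"]

def fetch_variant(variant: str,
                  local_dir: str = "./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V") -> str:
    # Deferred import: the pattern helper above stays usable without the package.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=REPO_ID,
                             local_dir=local_dir,
                             allow_patterns=allow_patterns_for(variant))
```

As the Important Note above stresses, the `local_dir` root (not the variant subdirectory) is what should later be passed as `model_path`.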
@@ -12,10 +12,10 @@
 ### Standard Directory Structure
-Using `Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V` as an example, the standard file structure is as follows:
+Using `Wan2.1-I2V-14B-480P-LightX2V` as an example, the standard file structure is as follows:
 ```
-Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V/
+Wan2.1-I2V-14B-480P-LightX2V/
 ├── fp8/ # FP8 quantized version (DIT/T5/CLIP)
 │   ├── block_xx.safetensors # DIT model FP8 quantized version
 │   ├── models_t5_umt5-xxl-enc-fp8.pth # T5 encoder FP8 quantized version
@@ -31,6 +31,34 @@ Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V/
 │   ├── taew2_1.pth # Lightweight VAE (optional)
 │   └── config.json # Model configuration file
 ├── original/ # Original precision version (DIT/T5/CLIP)
+│   ├── xx.safetensors # DIT model original precision version
+│   ├── models_t5_umt5-xxl-enc-bf16.pth # T5 encoder original precision version
+│   ├── models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth # CLIP encoder original precision version
+│   ├── Wan2.1_VAE.pth # VAE variational autoencoder
+│   ├── taew2_1.pth # Lightweight VAE (optional)
+│   └── config.json # Model configuration file
+```
+Using `Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V` as an example, the standard file structure is as follows:
+```
+Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V/
+├── distill_fp8/ # FP8 quantized version (DIT/T5/CLIP)
+│   ├── block_xx.safetensors # DIT model FP8 quantized version
+│   ├── models_t5_umt5-xxl-enc-fp8.pth # T5 encoder FP8 quantized version
+│   ├── clip-fp8.pth # CLIP encoder FP8 quantized version
+│   ├── Wan2.1_VAE.pth # VAE variational autoencoder
+│   ├── taew2_1.pth # Lightweight VAE (optional)
+│   └── config.json # Model configuration file
+├── distill_int8/ # INT8 quantized version (DIT/T5/CLIP)
+│   ├── block_xx.safetensors # DIT model INT8 quantized version
+│   ├── models_t5_umt5-xxl-enc-int8.pth # T5 encoder INT8 quantized version
+│   ├── clip-int8.pth # CLIP encoder INT8 quantized version
+│   ├── Wan2.1_VAE.pth # VAE variational autoencoder
+│   ├── taew2_1.pth # Lightweight VAE (optional)
+│   └── config.json # Model configuration file
+├── distill_models/ # Original precision version (DIT/T5/CLIP)
 │   ├── distill_model.safetensors # DIT model original precision version
 │   ├── models_t5_umt5-xxl-enc-bf16.pth # T5 encoder original precision version
 │   ├── models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth # CLIP encoder original precision version
@@ -148,24 +176,24 @@ python gradio_demo_zh.py \
 # Use Hugging Face CLI to selectively download the non-quantized version
 huggingface-cli download lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
   --local-dir ./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
-  --include "original/*"
+  --include "distill_models/*"
 ```
 ```bash
 # Use Hugging Face CLI to selectively download the FP8 quantized version
 huggingface-cli download lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
   --local-dir ./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
-  --include "fp8/*"
+  --include "distill_fp8/*"
 ```
 ```bash
 # Use Hugging Face CLI to selectively download the INT8 quantized version
 huggingface-cli download lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
   --local-dir ./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V \
-  --include "int8/*"
+  --include "distill_int8/*"
 ```
-> **Important Note**: When starting inference scripts or Gradio, the `model_path` parameter still needs to be specified as the complete path without the `--include` parameter. For example: `model_path=./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V`, not `./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V/int8`.
+> **Important Note**: When starting inference scripts or Gradio, the `model_path` parameter still needs to be specified as the complete path without the `--include` parameter. For example: `model_path=./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V`, not `./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-LightX2V/distill_int8`.
 #### 2. Start Inference
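The Important Note in both hunks warns that `model_path` must be the model root rather than the subdirectory selected by `--include`. That precondition can be verified before launching inference. A hedged sketch; the function name and the variant list are our own illustration, assembled from the directory trees above, not part of LightX2V:

```python
from pathlib import Path

# Variant subdirectories that may sit under a LightX2V model root,
# per the directory trees above (assumed list; extend as needed).
VARIANT_DIRS = {"original", "fp8", "int8",
                "distill_models", "distill_fp8", "distill_int8"}

def check_model_path(model_path: str) -> Path:
    """Reject a model_path that points at a variant subdir instead of the root."""
    root = Path(model_path)
    if root.name in VARIANT_DIRS:
        raise ValueError(
            f"model_path points at variant subdir {root.name!r}; "
            f"pass the model root {root.parent} instead"
        )
    return root
```

Calling this at script start turns the silent misconfiguration described in the note into an immediate, explanatory error.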