# Model Conversion Tool

A powerful utility for converting model weights between different formats and performing quantization tasks.

## Diffusers
Converts model weights between the Diffusers layout and the Lightx2v layout, in both directions.

### Lightx2v->Diffusers
```bash
python converter.py \
       --source /Path/To/Wan-AI/Wan2.1-I2V-14B-480P \
       --output /Path/To/Wan2.1-I2V-14B-480P-Diffusers \
       --direction forward \
       --save_by_block
```

### Diffusers->Lightx2v
```bash
python converter.py \
       --source /Path/To/Wan-AI/Wan2.1-I2V-14B-480P-Diffusers \
       --output /Path/To/Wan2.1-I2V-14B-480P \
       --direction backward \
       --save_by_block
```
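Under the hood, a format conversion like this mostly amounts to renaming state-dict keys between the two layouts. A minimal sketch of the idea (the key names below are hypothetical, not the actual mapping tables in converter.py):

```python
# Hypothetical example keys -- the real Lightx2v<->Diffusers mapping lives in converter.py.
RENAME = {
    "blocks.0.self_attn.q.weight": "transformer_blocks.0.attn1.to_q.weight",
}

def convert_keys(state_dict, mapping):
    # Rename keys found in the mapping; pass all others through unchanged.
    return {mapping.get(k, k): v for k, v in state_dict.items()}
```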


## Quantization
This tool supports converting fp32/fp16/bf16 model weights to INT8 or FP8.
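For intuition, symmetric per-tensor INT8 quantization stores each weight tensor as 8-bit integers plus one floating-point scale. A minimal numpy sketch of that idea (the converter's actual scheme, e.g. per-channel scales, may differ):

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map max |w| onto 127.
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original weights.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.0, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```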


### Wan DiT

```bash
python converter.py \
    --source /Path/To/Wan-AI/Wan2.1-I2V-14B-480P/ \
    --output /Path/To/output \
    --output_ext .safetensors \
    --output_name wan_int8 \
    --linear_dtype torch.int8 \
    --model_type wan_dit \
    --quantized \
    --save_by_block
```

```bash
python converter.py \
    --source /Path/To/Wan-AI/Wan2.1-I2V-14B-480P/ \
    --output /Path/To/output \
    --output_ext .safetensors \
    --output_name wan_fp8 \
    --linear_dtype torch.float8_e4m3fn \
    --model_type wan_dit \
    --quantized \
    --save_by_block
```

### Wan DiT + LoRA

```bash
python converter.py \
    --source /Path/To/Wan-AI/Wan2.1-T2V-14B/ \
    --output /Path/To/output \
    --output_ext .safetensors \
    --output_name wan_int8 \
    --linear_dtype torch.int8 \
    --model_type wan_dit \
    --lora_path /Path/To/LoRA1/ /Path/To/LoRA2/ \
    --lora_alpha 1.0 1.0 \
    --quantized \
    --save_by_block
```
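Conceptually, each LoRA is folded into the base weights before quantization as `W + alpha * (B @ A)`, with one `--lora_alpha` per `--lora_path`. A toy numpy sketch with hypothetical shapes (base weight `W` of shape `(out, in)`, LoRA factors `B` of shape `(out, r)` and `A` of shape `(r, in)`):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)).astype(np.float32)   # base linear weight
B = rng.standard_normal((8, 2)).astype(np.float32)   # LoRA "up" factor, rank 2
A = rng.standard_normal((2, 8)).astype(np.float32)   # LoRA "down" factor
alpha = 1.0

# Merge the low-rank update into the base weight.
W_merged = W + alpha * (B @ A)
```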

### Hunyuan DiT

```bash
python converter.py \
    --source /Path/To/hunyuan/lightx2v_format/i2v/ \
    --output /Path/To/output \
    --output_ext .safetensors \
    --output_name hunyuan_int8 \
    --linear_dtype torch.int8 \
    --model_type hunyuan_dit \
    --quantized
```

```bash
python converter.py \
    --source /Path/To/hunyuan/lightx2v_format/i2v/ \
    --output /Path/To/output \
    --output_ext .safetensors \
    --output_name hunyuan_fp8 \
    --linear_dtype torch.float8_e4m3fn \
    --model_type hunyuan_dit \
    --quantized
```


### Wan T5EncoderModel

```bash
python converter.py \
    --source /Path/To/Wan-AI/Wan2.1-I2V-14B-480P/models_t5_umt5-xxl-enc-bf16.pth \
    --output /Path/To/output \
    --output_ext .pth \
    --output_name models_t5_umt5-xxl-enc-int8 \
    --linear_dtype torch.int8 \
    --non_linear_dtype torch.bfloat16 \
    --model_type wan_t5 \
    --quantized
```

```bash
python converter.py \
    --source /Path/To/Wan-AI/Wan2.1-I2V-14B-480P/models_t5_umt5-xxl-enc-bf16.pth \
    --output /Path/To/Wan-AI/Wan2.1-I2V-14B-480P/fp8 \
    --output_ext .pth \
    --output_name models_t5_umt5-xxl-enc-fp8 \
    --linear_dtype torch.float8_e4m3fn \
    --non_linear_dtype torch.bfloat16 \
    --model_type wan_t5 \
    --quantized
```
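The `--linear_dtype` / `--non_linear_dtype` pair reflects that only linear-layer weights are quantized, while other parameters (norms, embeddings, biases) are cast to a higher-precision dtype. A hedged sketch of that dispatch (the name-based rule below is hypothetical, not converter.py's actual logic):

```python
import numpy as np

def convert_param(name, w):
    # Hypothetical rule: quantize linear-layer weights to int8 with a
    # per-tensor scale; cast everything else to float16 with no scale.
    if name.endswith("linear.weight"):
        scale = np.max(np.abs(w)) / 127.0
        return np.round(w / scale).astype(np.int8), float(scale)
    return w.astype(np.float16), None
```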


### Wan CLIPModel

```bash
python converter.py \
  --source /Path/To/Wan-AI/Wan2.1-I2V-14B-480P/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth \
  --output /Path/To/output \
  --output_ext .pth \
  --output_name clip-int8 \
  --linear_dtype torch.int8 \
  --non_linear_dtype torch.float16 \
  --model_type wan_clip \
  --quantized
```
```bash
python converter.py \
  --source /Path/To/Wan-AI/Wan2.1-I2V-14B-480P/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth \
  --output ./output \
  --output_ext .pth \
  --output_name clip-fp8 \
  --linear_dtype torch.float8_e4m3fn \
  --non_linear_dtype torch.float16 \
  --model_type wan_clip \
  --quantized
```