> - **dit_original_ckpt**: Specifies the path to an original-precision model (BF16/FP32/FP16)
> - **dit_quantized_ckpt**: Specifies the path to a quantized model (FP8/INT8); must be used together with the `dit_quantized` and `dit_quant_scheme` parameters
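
For illustration, here is a minimal config sketch for the quantized case (assuming a JSON config file; the `fp8` scheme value and the checkpoint path are placeholders, not values taken verbatim from the LightX2V docs). For an original-precision model, set `dit_original_ckpt` instead:

```json
{
    "dit_quantized": true,
    "dit_quant_scheme": "fp8",
    "dit_quantized_ckpt": "/path/to/wan_dit_fp8.safetensors"
}
```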
**Step 4: Start Inference**
```bash
cd LightX2V/scripts
bash wan/run_wan_i2v_distill_4step_cfg.sh
```
> 💡 **Tip**: Other components (T5, CLIP, VAE, tokenizer, etc.) need to be manually organized into the model directory
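
As a hedged sketch of this manual step, the commands below copy the non-DiT components from an original Wan checkpoint into the model directory; all paths are placeholders and the wildcard patterns are assumptions about the official checkpoint layout, so adjust them to your download:

```bash
# Copy the remaining components (T5, CLIP, VAE, tokenizers, config) from an
# original Wan checkpoint directory into the LightX2V model directory.
# Both directories below are placeholders.
ORIG_WAN_DIR=/path/to/original/wan_checkpoint   # official Wan download (assumption)
MODEL_DIR=/path/to/lightx2v_model_dir           # model directory used for inference (assumption)

cp    "$ORIG_WAN_DIR"/models_t5_*.pth   "$MODEL_DIR"/    # T5 text encoder
cp    "$ORIG_WAN_DIR"/models_clip_*.pth "$MODEL_DIR"/    # CLIP encoder
cp    "$ORIG_WAN_DIR"/*VAE*.pth         "$MODEL_DIR"/    # VAE
cp -r "$ORIG_WAN_DIR"/google "$ORIG_WAN_DIR"/xlm-roberta-large "$MODEL_DIR"/  # tokenizers
cp    "$ORIG_WAN_DIR"/config.json       "$MODEL_DIR"/    # model config
```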
### Wan2.2 Single-File Models
#### Directory Structure Requirements
...
...
When using Wan2.2 single-file models, you need to manually create a specific directory structure:
```
wan2.2_models/
├── high_noise_model/                                          # High-noise model directory (required)
│   └── wan2.2_i2v_A14b_high_noise_lightx2v_4step.safetensors  # High-noise model file
├── low_noise_model/                                           # Low-noise model directory (required)
│   └── wan2.2_i2v_A14b_low_noise_lightx2v_4step.safetensors   # Low-noise model file
└── t5/vae/config.json/xlm-roberta-large/google and other components  # Manually organized