# Trying T2V and I2V with Wan2.1-14B

This document contains usage examples for the Wan2.1-T2V-14B and Wan2.1-I2V-14B-480P / Wan2.1-I2V-14B-720P models.

## Prepare the environment

Please refer to [01.PrepareEnv](01.PrepareEnv.md).

## Getting started

Prepare the models:

```bash
# Download from Hugging Face
hf download Wan-AI/Wan2.1-T2V-14B --local-dir Wan-AI/Wan2.1-T2V-14B
hf download Wan-AI/Wan2.1-I2V-14B-480P --local-dir Wan-AI/Wan2.1-I2V-14B-480P
hf download Wan-AI/Wan2.1-I2V-14B-720P --local-dir Wan-AI/Wan2.1-I2V-14B-720P

# Download distillation models
hf download lightx2v/Wan2.1-Distill-Models --local-dir lightx2v/Wan2.1-Distill-Models
hf download lightx2v/Wan2.1-Distill-Loras --local-dir lightx2v/Wan2.1-Distill-Loras
```

We provide three ways to run the Wan2.1-14B models to generate videos:

1. Run the provided scripts (quick verification).
   - Single-GPU inference
   - Single-GPU offload inference
   - Multi-GPU parallel inference
2. Start a server and send requests (repeated inference / production).
   - Single-GPU inference
   - Single-GPU offload inference
   - Multi-GPU parallel inference
3. Use Python code (integration into codebases).
   - Single-GPU inference
   - Single-GPU offload inference
   - Multi-GPU parallel inference

### 1. Run scripts

```bash
git clone https://github.com/ModelTC/LightX2V.git

# Before running the scripts, replace `lightx2v_path` and `model_path` with real paths
# e.g.: lightx2v_path=/home/user/LightX2V
# e.g.: model_path=/home/user/models/Wan-AI/Wan2.1-T2V-14B
```

#### 1.1 Single-GPU inference

Wan2.1-T2V-14B model:

```bash
# model_path=Wan-AI/Wan2.1-T2V-14B
cd LightX2V/scripts/wan
bash run_wan_t2v.sh

# Distillation (LoRA)
cd LightX2V/scripts/wan/distill
bash run_wan_t2v_distill_lora_4step_cfg.sh

# Distillation (merged LoRA)
cd LightX2V/scripts/wan/distill
bash run_wan_t2v_distill_model_4step_cfg.sh

# Distillation + FP8 quantized model
cd LightX2V/scripts/wan/distill
bash run_wan_t2v_distill_fp8_4step_cfg.sh
```

Note: In the bash scripts, `model_path` points to the pre-trained original model; in config files, set `lora_configs`, `dit_original_ckpt`, and `dit_quantized_ckpt` to the distillation model paths (use absolute paths), for example `/home/user/models/lightx2v/Wan2.1-Distill-Models/wan2.1_i2v_480p_int8_lightx2v_4step.safetensors`.
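If a config still contains relative checkpoint paths, a small helper can rewrite them to absolute paths before running. This is a standalone sketch, not part of LightX2V; the config filename and checkpoint root below are placeholders for illustration:

```python
import json
from pathlib import Path

# Placeholders for illustration; point these at your actual config and model root.
config_file = Path("/home/user/LightX2V/configs/wan/distill/wan_t2v_distill_lora_4step_cfg.json")
ckpt_root = Path("/home/user/models")

cfg = json.loads(config_file.read_text())

def absolutize(p: str) -> str:
    """Leave absolute paths untouched; anchor relative ones under ckpt_root."""
    path = Path(p)
    return str(path if path.is_absolute() else ckpt_root / path)

# Keys called out in the note above; only the ones present in this config are rewritten.
for key in ("dit_original_ckpt", "dit_quantized_ckpt"):
    if key in cfg:
        cfg[key] = absolutize(cfg[key])
for lora in cfg.get("lora_configs", []):
    lora["path"] = absolutize(lora["path"])

config_file.write_text(json.dumps(cfg, indent=2))
print(f"Rewrote checkpoint paths in {config_file}")
```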
Measured on a single H100 (use `watch -n 1 nvidia-smi` to observe peak GPU memory):

- Wan2.1-T2V-14B: Total Cost 278.902019 seconds; peak 43768 MiB
- Distill (LoRA): Total Cost 31.365923 seconds; peak 44438 MiB
- Distill (merged LoRA): Total Cost 25.794410 seconds; peak 44418 MiB
- Distill + FP8: Total Cost 22.000187 seconds; peak 31032 MiB

Wan2.1-I2V-14B models:

```bash
# Switch `model_path` and `config_json` to try Wan2.1-I2V-14B-480P or Wan2.1-I2V-14B-720P
cd LightX2V/scripts/wan
bash run_wan_i2v.sh

# Distillation (LoRA)
cd LightX2V/scripts/wan/distill
bash run_wan_i2v_distill_lora_4step_cfg.sh

# Distillation (merged LoRA)
cd LightX2V/scripts/wan/distill
bash run_wan_i2v_distill_model_4step_cfg.sh

# Distillation + FP8
cd LightX2V/scripts/wan/distill
bash run_wan_i2v_distill_fp8_4step_cfg.sh
```

Measured on a single H100:

- Wan2.1-I2V-14B-480P: Total Cost 232.971375 seconds; peak 49872 MiB
- Distill (LoRA): Total Cost 277.535991 seconds; peak 49782 MiB
- Distill (merged LoRA): Total Cost 26.841140 seconds; peak 49526 MiB
- Distill + FP8: Total Cost 25.430433 seconds; peak 34218 MiB

#### 1.2 Single-GPU offload inference

Enable offload in the config:

```json
"cpu_offload": true,
"offload_granularity": "model"
```

Then run the same scripts as in 1.1:

```bash
cd LightX2V/scripts/wan
bash run_wan_t2v.sh

# Distillation (LoRA)
cd LightX2V/scripts/wan/distill
bash run_wan_t2v_distill_lora_4step_cfg.sh

# Distillation (merged LoRA)
cd LightX2V/scripts/wan/distill
bash run_wan_t2v_distill_model_4step_cfg.sh

# Distillation + FP8
cd LightX2V/scripts/wan/distill
bash run_wan_t2v_distill_fp8_4step_cfg.sh
```

Measured on a single H100:

- Wan2.1-T2V-14B: Total Cost 319.019743 seconds; peak 34932 MiB
- Distill (LoRA): Total Cost 74.180393 seconds; peak 34562 MiB
- Distill (merged LoRA): Total Cost 68.621963 seconds; peak 34562 MiB
- Distill + FP8: Total Cost 58.921504 seconds; peak 21290 MiB

Wan2.1-I2V-14B measured on a single H100:

- Wan2.1-I2V-14B-480P: Total Cost 276.509557 seconds; peak 38906 MiB
- Distill (LoRA): Total Cost 85.217124 seconds; peak 38556 MiB
- Distill (merged LoRA): Total Cost 79.389818 seconds; peak 38556 MiB
- Distill + FP8: Total Cost 68.124415 seconds; peak 23400 MiB

#### 1.3 Multi-GPU parallel inference

Before running, set `CUDA_VISIBLE_DEVICES` to the GPUs you will use and configure the `parallel` parameters so that `cfg_p_size * seq_p_size = number_of_GPUs`.
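The constraint `cfg_p_size * seq_p_size = number_of_GPUs` is easy to check up front. A minimal standalone sketch (not part of LightX2V) that reads the same config the script uses and compares it against `CUDA_VISIBLE_DEVICES`:

```python
import json
import os
from pathlib import Path

# The config used by run_wan_t2v_dist_cfg_ulysses.sh; adjust the prefix to your checkout.
config_file = Path("/home/user/LightX2V/configs/dist_infer/wan_t2v_dist_cfg_ulysses.json")
cfg = json.loads(config_file.read_text())

parallel = cfg.get("parallel", {})
cfg_p = parallel.get("cfg_p_size", 1)
seq_p = parallel.get("seq_p_size", 1)

# Assumes CUDA_VISIBLE_DEVICES is set, as in the provided scripts (e.g. "0,1,2,3,4,5,6,7").
visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
num_gpus = len([d for d in visible.split(",") if d.strip()])

assert cfg_p * seq_p == num_gpus, (
    f"cfg_p_size ({cfg_p}) * seq_p_size ({seq_p}) = {cfg_p * seq_p}, "
    f"but CUDA_VISIBLE_DEVICES exposes {num_gpus} GPU(s)"
)
print(f"OK: cfg_p_size * seq_p_size = {cfg_p * seq_p} matches {num_gpus} visible GPUs")
```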
Wan2.1-T2V-14B (example):

```bash
cd LightX2V/scripts/dist_infer
bash run_wan_t2v_dist_cfg_ulysses.sh

# Distillation (LoRA)
cd LightX2V/scripts/wan/distill
bash run_wan_t2v_distill_lora_4step_cfg_ulysses.sh

# Distillation (merged LoRA)
cd LightX2V/scripts/wan/distill
bash run_wan_t2v_distill_model_4step_cfg_ulysses.sh

# Distillation + FP8
cd LightX2V/scripts/wan/distill
bash run_wan_t2v_distill_fp8_4step_cfg_ulysses.sh
```

Measured on 8×H100 (per-GPU peaks):

- Wan2.1-T2V-14B: Total Cost 131.553567 seconds; per-GPU peak 44624 MiB
- Distill (LoRA): Total Cost 38.337339 seconds; per-GPU peak 43850 MiB
- Distill (merged LoRA): Total Cost 29.021527 seconds; per-GPU peak 43470 MiB
- Distill + FP8: Total Cost 26.409164 seconds; per-GPU peak 30162 MiB

Wan2.1-I2V-14B (example):

```bash
cd LightX2V/scripts/dist_infer
bash run_wan_i2v_dist_cfg_ulysses.sh

# Distillation (LoRA)
cd LightX2V/scripts/wan/distill
bash run_wan_i2v_distill_lora_4step_cfg_ulysses.sh

# Distillation (merged LoRA)
cd LightX2V/scripts/wan/distill
bash run_wan_i2v_distill_model_4step_cfg_ulysses.sh

# Distillation + FP8
cd LightX2V/scripts/wan/distill
bash run_wan_i2v_distill_fp8_4step_cfg_ulysses.sh
```

Measured on 8×H100:

- Wan2.1-I2V-14B-480P: Total Cost 116.455286 seconds; per-GPU peak 49668 MiB
- Distill (LoRA): Total Cost 45.899316 seconds; per-GPU peak 48854 MiB
- Distill (merged LoRA): Total Cost 33.472992 seconds; per-GPU peak 48674 MiB
- Distill + FP8: Total Cost 30.796211 seconds; per-GPU peak 33328 MiB

The example script `run_wan_t2v_dist_cfg_ulysses.sh`, explained:

```bash
#!/bin/bash

# set path firstly
lightx2v_path=
model_path=

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7

# set environment variables
source ${lightx2v_path}/scripts/base/base.sh

torchrun --nproc_per_node=8 -m lightx2v.infer \
    --model_cls wan2.1 \
    --task t2v \
    --model_path $model_path \
    --config_json ${lightx2v_path}/configs/dist_infer/wan_t2v_dist_cfg_ulysses.json \
    --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage." \
    --negative_prompt "camera shake, vivid color tones, overexposure, static, blurred details, subtitles, style marks, artwork, painting-like, still image, overall grayish, worst quality, low quality, JPEG compression artifacts, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn face, deformed, disfigured, malformed limbs, fused fingers, motionless frame, cluttered background, three legs, many people in background, walking backwards" \
    --save_result_path ${lightx2v_path}/save_results/output_lightx2v_wan_t2v.mp4
```

- `export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7` uses GPUs 0–7 (eight GPUs total).
- `source ${lightx2v_path}/scripts/base/base.sh` sets base environment variables.
- `torchrun --nproc_per_node=8 -m lightx2v.infer` runs multi-GPU inference with 8 processes.

`wan_t2v_dist_cfg_ulysses.json`:

```json
{
    "infer_steps": 50,
    "target_video_length": 81,
    "text_len": 512,
    "target_height": 480,
    "target_width": 832,
    "self_attn_1_type": "flash_attn3",
    "cross_attn_1_type": "flash_attn3",
    "cross_attn_2_type": "flash_attn3",
    "sample_guide_scale": 6,
    "sample_shift": 8,
    "enable_cfg": true,
    "cpu_offload": false,
    "parallel": {
        "seq_p_size": 4,
        "seq_p_attn_type": "ulysses",
        "cfg_p_size": 2
    }
}
```

Key fields:

- `infer_steps`: number of inference steps.
- `target_video_length`: target frame count (Wan2.1 uses fps=16, so 81 frames ≈ 5 seconds).
- `target_height` / `target_width`: frame dimensions.
- `self_attn_1_type`, `cross_attn_1_type`, `cross_attn_2_type`: attention operator types; `flash_attn3` is for Hopper GPUs (H100, H20); replace with `flash_attn2` for other GPUs.
- `enable_cfg`: if true, CFG runs both positive and negative prompts (better quality but doubles inference time). Set it to false for CFG-distilled models.
- `cpu_offload`: enable CPU offload to reduce GPU memory. If enabled, add `"offload_granularity": "model"` to offload entire model modules. Monitor with `watch -n 1 nvidia-smi`.
- `parallel`: parallel inference settings. DiT supports Ulysses and Ring attention modes as well as CFG parallelism. Parallel inference reduces runtime and per-GPU memory. The example uses CFG parallelism + Ulysses with `seq_p_size * cfg_p_size = 8` for 8 GPUs.

`wan_t2v_distill_lora_4step_cfg_ulysses.json`:

```json
{
    "infer_steps": 4,
    "target_video_length": 81,
    "text_len": 512,
    "target_height": 480,
    "target_width": 832,
    "self_attn_1_type": "flash_attn3",
    "cross_attn_1_type": "flash_attn3",
    "cross_attn_2_type": "flash_attn3",
    "sample_guide_scale": 6,
    "sample_shift": 5,
    "enable_cfg": false,
    "cpu_offload": false,
    "denoising_step_list": [1000, 750, 500, 250],
    "lora_configs": [
        {
            "path": "lightx2v/Wan2.1-Distill-Loras/wan2.1_t2v_14b_lora_rank64_lightx2v_4step.safetensors",
            "strength": 1.0
        }
    ],
    "parallel": {
        "seq_p_size": 4,
        "seq_p_attn_type": "ulysses",
        "cfg_p_size": 2
    }
}
```

- `denoising_step_list`: timesteps for the 4-step denoising schedule.
- `lora_configs`: LoRA plugin config; use absolute paths.

`wan_t2v_distill_model_4step_cfg_ulysses.json`:

```json
{
    "infer_steps": 4,
    "target_video_length": 81,
    "text_len": 512,
    "target_height": 480,
    "target_width": 832,
    "self_attn_1_type": "flash_attn3",
    "cross_attn_1_type": "flash_attn3",
    "cross_attn_2_type": "flash_attn3",
    "sample_guide_scale": 6,
    "sample_shift": 5,
    "enable_cfg": false,
    "cpu_offload": false,
    "denoising_step_list": [1000, 750, 500, 250],
    "dit_original_ckpt": "lightx2v/Wan2.1-Distill-Models/wan2.1_t2v_14b_lightx2v_4step.safetensors",
    "parallel": {
        "seq_p_size": 4,
        "seq_p_attn_type": "ulysses",
        "cfg_p_size": 2
    }
}
```

- `dit_original_ckpt`: path to the merged distillation checkpoint.

`wan_t2v_distill_fp8_4step_cfg_ulysses.json`:

```json
{
    "infer_steps": 4,
    "target_video_length": 81,
    "text_len": 512,
    "target_height": 480,
    "target_width": 832,
    "self_attn_1_type": "flash_attn3",
    "cross_attn_1_type": "flash_attn3",
    "cross_attn_2_type": "flash_attn3",
    "sample_guide_scale": 6,
    "sample_shift": 5,
    "enable_cfg": false,
    "cpu_offload": false,
    "denoising_step_list": [1000, 750, 500, 250],
    "dit_quantized": true,
    "dit_quantized_ckpt": "lightx2v/Wan2.1-Distill-Models/wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step.safetensors",
    "dit_quant_scheme": "fp8-sgl",
    "parallel": {
        "seq_p_size": 4,
        "seq_p_attn_type": "ulysses",
        "cfg_p_size": 2
    }
}
```

- `dit_quantized`: enable DiT quantization for the core model.
- `dit_quantized_ckpt`: local path to the FP8-quantized DiT weights.
- `dit_quant_scheme`: quantization scheme, e.g., `fp8-sgl`.

### 2. Start server mode

#### 2.1 Single-GPU inference

Start the server:

```bash
cd LightX2V/scripts/server

# Before running, set `lightx2v_path`, `model_path`, and `config_json` appropriately
# e.g.: lightx2v_path=/home/user/LightX2V
# e.g.: model_path=/home/user/models/Wan-AI/Wan2.1-T2V-14B
# e.g.: config_json=${lightx2v_path}/configs/wan/wan_t2v.json

bash start_server.sh
```

Send a request from a client terminal:

```bash
cd LightX2V/scripts/server

# Video endpoint:
python post.py
```

Server-side logs will show inference progress.
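The same client flow works for the I2V models once the server is started with `--task i2v` and an I2V `model_path`. Below is a hedged sketch of an I2V request; the field names follow `post.py` (shown in full after section 2.3), and the image path and prompt are placeholders:

```python
import requests
from loguru import logger

if __name__ == "__main__":
    # Same video endpoint as post.py; assumes the server was started with --task i2v
    # and an I2V model such as Wan2.1-I2V-14B-480P.
    url = "http://localhost:8000/v1/tasks/video/"

    message = {
        "prompt": "The cat slowly raises its gloves and starts shadow-boxing under the spotlight.",
        "negative_prompt": "camera shake, overexposure, blurred details, worst quality, low quality",
        "image_path": "/path/to/first_frame.jpg",  # placeholder: conditioning image for I2V
        "seed": 42,
        "save_result_path": "./cat_boxing_i2v_seed42.mp4",
    }

    logger.info(f"message: {message}")
    response = requests.post(url, json=message)
    logger.info(f"response: {response.json()}")
```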
#### 2.2 Single-GPU offload inference

Enable offload in the config (see the snippet in section 1.2) and restart the server:

```bash
cd LightX2V/scripts/server
bash start_server.sh
```

Client request:

```bash
cd LightX2V/scripts/server
python post.py
```

#### 2.3 Multi-GPU parallel inference

Start the multi-GPU server:

```bash
cd LightX2V/scripts/server
bash start_server_cfg_ulysses.sh
```

Client request:

```bash
cd LightX2V/scripts/server
python post.py
```

Measured runtimes and peak GPU memory (per GPU for the parallel case):

1. Single-GPU inference: Run DiT cost 261.699812 seconds; RUN pipeline cost 261.973479 seconds; peak 43968 MiB
2. Single-GPU offload: Run DiT cost 264.445139 seconds; RUN pipeline cost 265.565198 seconds; peak 34932 MiB
3. Multi-GPU parallel: Run DiT cost 109.518894 seconds; RUN pipeline cost 110.085543 seconds; per-GPU peak 44624 MiB

`start_server.sh` example:

```bash
#!/bin/bash

# set path firstly
lightx2v_path=
model_path=

export CUDA_VISIBLE_DEVICES=0

# set environment variables
source ${lightx2v_path}/scripts/base/base.sh

# Start API server with distributed inference service
python -m lightx2v.server \
    --model_cls wan2.1 \
    --task t2v \
    --model_path $model_path \
    --config_json ${lightx2v_path}/configs/wan/wan_t2v.json \
    --host 0.0.0.0 \
    --port 8000

echo "Service stopped"
```

- `--host 0.0.0.0` and `--port 8000` bind the service to port 8000 on all interfaces.

`post.py` example:

```python
import requests
from loguru import logger

if __name__ == "__main__":
    url = "http://localhost:8000/v1/tasks/video/"

    message = {
        "prompt": "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage.",
        "negative_prompt": "camera shake, vivid color tones, overexposure, static, blurred details, subtitles, style marks, artwork, painting-like, still image, overall grayish, worst quality, low quality, JPEG compression artifacts, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn face, deformed, disfigured, malformed limbs, fused fingers, motionless frame, cluttered background, three legs, many people in background, walking backwards",
        "image_path": "",
        "seed": 42,
        "save_result_path": "./cat_boxing_seed42.mp4",
    }

    logger.info(f"message: {message}")
    response = requests.post(url, json=message)
    logger.info(f"response: {response.json()}")
```

- `url = "http://localhost:8000/v1/tasks/video/"` posts a video generation task. For image tasks use `http://localhost:8000/v1/tasks/image/`.
- `message` fields: if `seed` is omitted, a random seed is used; if `save_result_path` is omitted, the server saves the result with the task ID as the filename.

### 3. Generate via Python code

#### 3.1 Single-GPU inference

```bash
cd LightX2V/examples/wan

# Edit `wan_t2v.py` to set `model_path`, `save_result_path`, and `config_json`
PYTHONPATH=/home/user/LightX2V python wan_t2v.py
```

Notes:

1. Prefer passing `config_json` to align hyperparameters with script/server runs (see the sketch below).
2. `PYTHONPATH` must be an absolute path.
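To follow note 1, here is a minimal variant of `wan_t2v.py` that takes its hyperparameters from a config file instead of keyword arguments. It is a sketch based on the full example in section 3.3 below, assuming `create_generator(config_json=...)` as shown in the commented alternative there:

```python
from lightx2v import LightX2VPipeline

# Same constructor arguments as the full example in section 3.3.
pipe = LightX2VPipeline(
    model_path="/path/to/Wan-AI/Wan2.1-T2V-14B",
    model_cls="wan2.1",
    task="t2v",
)

# Load hyperparameters from the same config used by the script/server runs,
# so results stay comparable across the three run modes.
pipe.create_generator(config_json="/home/user/LightX2V/configs/wan/wan_t2v.json")

pipe.generate(
    seed=42,
    prompt="Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage.",
    negative_prompt="camera shake, overexposure, blurred details, worst quality, low quality",  # shortened; reuse the full negative prompt from the examples above
    save_result_path="/path/to/save_results/output_config_json.mp4",
)
```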
#### 3.2 Single-GPU offload inference

Enable offload in the config, then:

```bash
cd LightX2V/examples/wan
PYTHONPATH=/home/user/LightX2V python wan_t2v.py
```

#### 3.3 Multi-GPU parallel inference

Edit `wan_t2v.py` to use `LightX2V/configs/dist_infer/wan_t2v_dist_cfg_ulysses.json` and run:

```bash
PROFILING_DEBUG_LEVEL=2 PYTHONPATH=/home/user/LightX2V torchrun --nproc_per_node=8 wan_t2v.py
```

Measured runtimes and peak GPU memory (per GPU for the parallel case):

- Single-GPU: Run DiT cost 262.745393 seconds; RUN pipeline cost 263.279303 seconds; peak 44792 MiB
- Single-GPU offload: Run DiT cost 263.725956 seconds; RUN pipeline cost 264.919227 seconds; peak 34936 MiB
- Multi-GPU parallel: Run DiT cost 113.736238 seconds; RUN pipeline cost 114.297859 seconds; per-GPU peak 44624 MiB

Example `wan_t2v.py`:

```python
"""
Wan2.1 text-to-video generation example.

This example demonstrates how to use LightX2V with the Wan2.1 model for T2V generation.
"""

from lightx2v import LightX2VPipeline

# Initialize pipeline for the Wan2.1 T2V task
pipe = LightX2VPipeline(
    model_path="/path/to/Wan2.1-T2V-14B",
    model_cls="wan2.1",
    task="t2v",
)

# Alternative: create generator from a config JSON file
# pipe.create_generator(config_json="../configs/wan/wan_t2v.json")

# Create generator with explicit parameters
pipe.create_generator(
    attn_mode="sage_attn2",
    infer_steps=50,
    height=480,  # can be set to 720 for higher resolution
    width=832,   # can be set to 1280 for higher resolution
    num_frames=81,
    guidance_scale=5.0,
    sample_shift=5.0,
)

seed = 42
prompt = "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
negative_prompt = "camera shake, vivid color tones, overexposure, static, blurred details, subtitles, style marks, artwork, painting-like, still image, overall grayish, worst quality, low quality, JPEG compression artifacts, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn face, deformed, disfigured, malformed limbs, fused fingers, motionless frame, cluttered background, three legs, many people in background, walking backwards"
save_result_path = "/path/to/save_results/output.mp4"

pipe.generate(
    seed=seed,
    prompt=prompt,
    negative_prompt=negative_prompt,
    save_result_path=save_result_path,
)
```

Notes:

1. Update `model_path` and `save_result_path` to actual paths.
2. Prefer passing `config_json` for parameter alignment with script/server runs.
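For repeated generation from Python (for example, sweeping seeds to pick the best take), the pipeline can be constructed once and reused across calls. Reusing a single pipeline for multiple `generate` calls is an assumption here rather than documented behavior, so treat this as a starting point:

```python
from lightx2v import LightX2VPipeline

pipe = LightX2VPipeline(
    model_path="/path/to/Wan2.1-T2V-14B",
    model_cls="wan2.1",
    task="t2v",
)
pipe.create_generator(config_json="/home/user/LightX2V/configs/wan/wan_t2v.json")

prompt = "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
negative_prompt = "camera shake, overexposure, blurred details, worst quality, low quality"  # shortened; reuse the full negative prompt from the example above

# Assumes generate() can be called repeatedly on the same pipeline without re-initialization.
for seed in (42, 123, 2024):
    pipe.generate(
        seed=seed,
        prompt=prompt,
        negative_prompt=negative_prompt,
        save_result_path=f"/path/to/save_results/output_seed{seed}.mp4",
    )
```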