ModelZoo / Wan2.1_pytorch / Issues / #1

Closed
Created Jul 30, 2025 by birdzzh@birdzzh

Error: RuntimeError: HIP error: invalid device function when running generate.py (t2v-1.3B)

root@xfy:/home# python generate.py --task t2v-1.3B --size 832480 --ckpt_dir models/Wan2.1-T2V-1.3B --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
[2025-07-30 15:31:04,466] INFO: offload_model is not specified, set to True.
[2025-07-30 15:31:04,466] INFO: Generation job args: Namespace(task='t2v-1.3B', size='832480', frame_num=81, ckpt_dir='models/Wan2.1-T2V-1.3B', offload_model=True, ulysses_size=1, ring_size=1, t5_fsdp=False, t5_cpu=False, dit_fsdp=False, save_file=None, prompt='Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage.', use_prompt_extend=False, prompt_extend_method='local_qwen', prompt_extend_model=None, prompt_extend_target_lang='ch', base_seed=8531901343496045993, image=None, sample_solver='unipc', sample_steps=50, sample_shift=5.0, sample_guide_scale=5.0)
[2025-07-30 15:31:04,466] INFO: Generation model config: {'name': 'Config: Wan T2V 1.3B', 't5_model': 'umt5_xxl', 't5_dtype': torch.bfloat16, 'text_len': 512, 'param_dtype': torch.bfloat16, 'num_train_timesteps': 1000, 'sample_fps': 16, 'sample_neg_prompt': '色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走', 't5_checkpoint': 'models_t5_umt5-xxl-enc-bf16.pth', 't5_tokenizer': 'google/umt5-xxl', 'vae_checkpoint': 'Wan2.1_VAE.pth', 'vae_stride': (4, 8, 8), 'patch_size': (1, 2, 2), 'dim': 1536, 'ffn_dim': 8960, 'freq_dim': 256, 'num_heads': 12, 'num_layers': 30, 'window_size': (-1, -1), 'qk_norm': True, 'cross_attn_norm': True, 'eps': 1e-06}
[2025-07-30 15:31:04,467] INFO: Input prompt: Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage.
[2025-07-30 15:31:04,467] INFO: Creating WanT2V pipeline.
[2025-07-30 15:32:35,603] INFO: loading models/Wan2.1-T2V-1.3B/models_t5_umt5-xxl-enc-bf16.pth
[2025-07-30 15:32:51,708] INFO: loading models/Wan2.1-T2V-1.3B/Wan2.1_VAE.pth
[2025-07-30 15:32:52,274] INFO: Creating WanModel from models/Wan2.1-T2V-1.3B
[2025-07-30 15:32:56,472] WARNING: WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.1.0 with CUDA None (you have 2.3.0)
    Python 3.10.12 (you have 3.10.12)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
[2025-07-30 15:33:29,582] INFO: Generating video ...
 0%| | 0/50 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/generate.py", line 411, in <module>
    generate(args)
  File "/home/generate.py", line 313, in generate
    video = wan_t2v.generate(
  File "/home/wan/text2video.py", line 236, in generate
    noise_pred_cond = self.model(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/wan/modules/model.py", line 564, in forward
    x = block(x, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/wan/modules/model.py", line 298, in forward
    y = self.self_attn(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/wan/modules/model.py", line 146, in forward
    x = flash_attention(
  File "/home/wan/modules/attention.py", line 113, in flash_attention
    x = flash_attn.flash_attn_varlen_func(
  File "/usr/local/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 1125, in flash_attn_varlen_func
    return FlashAttnVarlenFunc.apply(
  File "/usr/local/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/usr/local/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 621, in forward
    out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_varlen_forward(
  File "/usr/local/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 89, in _flash_attn_varlen_forward
    out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd(
RuntimeError: HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3.
Compile with TORCH_USE_HIP_DSA to enable device-side assertions.
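The failure is raised inside flash_attn's varlen kernel (flash_attn_cuda.varlen_fwd), and "HIP error: invalid device function" on a ROCm build of PyTorch typically means the installed flash-attn wheel contains no kernels compiled for this GPU architecture; the xFormers warning above points to a similar build/runtime mismatch (built for PyTorch 2.1.0, running 2.3.0). The snippet below is a minimal, hypothetical sanity check, independent of the Wan2.1 pipeline, that calls the same flash_attn_varlen_func entry point on a tiny bfloat16 input; if it raises the same error, the problem is the flash-attn installation rather than this repository's code. (The error message itself suggests rerunning with AMD_SERIALIZE_KERNEL=3 for a more precise stack trace.)

# Minimal sanity check (a sketch, not part of the original report): does the installed
# flash-attn wheel run at all on this GPU, outside the Wan2.1 pipeline?
import torch
import flash_attn

device = torch.device("cuda")  # ROCm builds of PyTorch also expose the "cuda" device
print(torch.__version__, torch.version.hip, torch.cuda.get_device_name(device))

# Tiny packed (varlen) input in the dtype the pipeline uses (bfloat16):
# q/k/v have shape (total_tokens, num_heads, head_dim).
seq_len, num_heads, head_dim = 8, 2, 64
q = torch.randn(seq_len, num_heads, head_dim, dtype=torch.bfloat16, device=device)
k = torch.randn(seq_len, num_heads, head_dim, dtype=torch.bfloat16, device=device)
v = torch.randn(seq_len, num_heads, head_dim, dtype=torch.bfloat16, device=device)
cu_seqlens = torch.tensor([0, seq_len], dtype=torch.int32, device=device)

out = flash_attn.flash_attn_varlen_func(
    q, k, v,
    cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
    max_seqlen_q=seq_len, max_seqlen_k=seq_len,
)
print("flash_attn varlen OK:", out.shape)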

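As a possible workaround while the flash-attn/xFormers installation is being fixed, attention could fall back to PyTorch's built-in scaled_dot_product_attention, which ships with the ROCm build of PyTorch 2.3.0. The function below is only an illustrative sketch, not the repository's flash_attention wrapper, and it ignores that wrapper's varlen and masking options; it assumes the (batch, seq_len, num_heads, head_dim) layout used by flash-attn's dense API.

# Hypothetical fallback sketch; this is not code from wan/modules/attention.py.
import torch
import torch.nn.functional as F

def sdpa_attention(q, k, v):
    # q, k, v: (batch, seq_len, num_heads, head_dim), as in flash-attn's dense API.
    # SDPA expects (batch, num_heads, seq_len, head_dim), so transpose in and back out.
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).contiguous()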