  1. 16 Jan, 2025 6 commits
  2. 15 Jan, 2025 6 commits
  3. 14 Jan, 2025 7 commits
  4. 13 Jan, 2025 11 commits
  5. 12 Jan, 2025 3 commits
  6. 11 Jan, 2025 3 commits
  7. 10 Jan, 2025 4 commits
    • add the xm.mark_step for the first denoising loop (#10530) · d6c030fd
      chaowenguo authored
      
      
      * Update rerender_a_video.py
      
      * Update rerender_a_video.py
      
      * Update examples/community/rerender_a_video.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update rerender_a_video.py
      
      * make style
      
      ---------
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
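For context, `torch_xla.core.xla_model.mark_step()` is the point where XLA stops tracing, compiles, and executes the accumulated graph; calling it after the first denoising iteration keeps the traced graph from growing across the whole loop. A runnable sketch with the XLA call and the UNet update stubbed out (only the placement of the call reflects the commit; everything else is illustrative):

```python
def mark_step():
    # Stand-in for torch_xla.core.xla_model.mark_step(); on a real TPU this
    # cuts the traced graph and triggers compilation/execution.
    mark_step.calls += 1
mark_step.calls = 0

def denoise(latents, timesteps):
    for i, t in enumerate(timesteps):
        latents = latents - t  # stand-in for the UNet denoising update
        if i == 0:
            mark_step()        # materialize the graph after the first step
    return latents

result = denoise(10.0, [1, 2, 3])
```

Without the early `mark_step()`, XLA would trace every loop iteration into one large graph before executing anything.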
    • [CI] Match remaining assertions from big runner (#10521) · 9f06a0d1
      Sayak Paul authored
      * print
      
      * remove print.
      
      * print
      
      * update slice.
      
      * empty
    • Add a `disable_mmap` option to the `from_single_file` loader to improve load... · 52c05bd4
      Daniel Hipke authored
      
      Add a `disable_mmap` option to the `from_single_file` loader to improve load performance on network mounts (#10305)
      
      * Add no_mmap arg.
      
      * Fix arg parsing.
      
      * Update another method to force no mmap.
      
      * logging
      
      * logging2
      
      * propagate no_mmap
      
      * logging3
      
      * propagate no_mmap
      
      * logging4
      
      * fix open call
      
      * clean up logging
      
      * cleanup
      
      * fix missing arg
      
      * update logging and comments
      
      * Rename to disable_mmap and update other references.
      
      * [Docs] Update ltx_video.md to remove generator from `from_pretrained()` (#10316)
      
      Update ltx_video.md to remove generator from `from_pretrained()`
      
      * docs: fix a mistake in docstring (#10319)
      
      Update pipeline_hunyuan_video.py
      
      docs: fix a mistake
      
      * [BUG FIX] [Stable Audio Pipeline] Resolve torch.Tensor.new_zeros() TypeError in function prepare_latents caused by audio_vae_length (#10306)
      
      [BUG FIX] [Stable Audio Pipeline] TypeError: new_zeros(): argument 'size' failed to unpack the object at pos 3 with error "type must be tuple of ints,but got float"
      
      torch.Tensor.new_zeros() takes a single argument, size (int...): a list, tuple, or torch.Size of integers defining the shape of the output tensor.
      
      in function prepare_latents:
      audio_vae_length = self.transformer.config.sample_size * self.vae.hop_length
      audio_shape = (batch_size // num_waveforms_per_prompt, audio_channels, audio_vae_length)
      ...
      audio = initial_audio_waveforms.new_zeros(audio_shape)
      
      audio_vae_length evaluates to a float because self.transformer.config.sample_size is a float.
      Co-authored-by: hlky <hlky@hlky.ac>
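The root cause is ordinary Python arithmetic: a float config value times an int is a float, and new_zeros() rejects non-integer dimensions. A torch-free sketch of the bug and the int() cast that fixes it (the numeric values are made up):

```python
# sample_size mirrors self.transformer.config.sample_size, which is stored
# as a float; hop_length mirrors self.vae.hop_length. Values are illustrative.
sample_size = 1024.0
hop_length = 320

buggy_length = sample_size * hop_length       # float * int -> float
fixed_length = int(sample_size) * hop_length  # cast first, stays int

# All dims are now ints, so new_zeros(audio_shape) no longer raises TypeError.
audio_shape = (1, 2, fixed_length)
```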
      
      * [docs] Fix quantization links (#10323)
      
      Update overview.md
      
      * [Sana]add 2K related model for Sana (#10322)
      
      add 2K related model for Sana
      
      * Update src/diffusers/loaders/single_file_model.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update src/diffusers/loaders/single_file.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * make style
      
      ---------
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Leojc <liao_junchao@outlook.com>
      Co-authored-by: Aditya Raj <syntaxticsugr@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
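For context, the flag trades lazy page-in for one up-front read, which matters on network mounts where every page fault is a round trip; per the commit title, it is presumably passed as `from_single_file(..., disable_mmap=True)`. A stdlib-only sketch of the two access patterns (the 4 KiB payload stands in for a multi-GB checkpoint):

```python
import mmap
import os
import tempfile

# Write a small stand-in "checkpoint" to disk.
payload = os.urandom(4096)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name

# Default path: memory-map the file; pages are faulted in lazily on access,
# which on a network mount means scattered I/O round trips.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    lazy = mm[:]  # touching the pages triggers the actual reads
    mm.close()

# disable_mmap path: one sequential read into RAM, no page faults later.
with open(path, "rb") as f:
    eager = f.read()

assert lazy == eager == payload
os.unlink(path)
```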
    • [LoRA] allow big CUDA tests to run properly for LoRA (and others) (#9845) · a6f043a8
      Sayak Paul authored
      
      
      * allow big lora tests to run on the CI.
      
      * print
      
      * print.
      
      * print
      
      * print
      
      * print
      
      * print
      
      * more
      
      * print
      
      * remove print.
      
      * remove print
      
      * directly place on cuda.
      
      * remove pipeline.
      
      * remove
      
      * fix
      
      * fix
      
      * spaces
      
      * quality
      
      * updates
      
      * directly place flux controlnet pipeline on cuda.
      
      * torch_device instead of cuda.
      
      * style
      
      * device placement.
      
      * fixes
      
      * add big gpu marker for mochi; rename test correctly
      
      * address feedback
      
      * fix
      
      ---------
      Co-authored-by: Aryan <aryan@huggingface.co>
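For context, the "big gpu marker" mentioned above gates heavyweight tests so they only run on capable runners; diffusers' real suite uses pytest markers for this. A hypothetical stdlib-only sketch of the same gating pattern with unittest (the env-var name and test class are made up):

```python
import os
import unittest

# Made-up environment switch standing in for the big-GPU marker: the test
# class is skipped entirely unless the runner opts in.
BIG_GPU = os.environ.get("RUN_BIG_GPU_TESTS") == "1"

@unittest.skipUnless(BIG_GPU, "needs a big GPU runner")
class LoraBigGPUTests(unittest.TestCase):
    def test_placeholder(self):
        # A real test would load a LoRA onto torch_device and check slices.
        self.assertTrue(True)
```

Using an opt-in switch (rather than auto-detection) mirrors why the commit moves device placement to `torch_device` instead of hard-coding `"cuda"`: the environment, not the test body, decides where and whether the test runs.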