- 10 Jan, 2025 2 commits
-
-
Daniel Hipke authored
Add a `disable_mmap` option to the `from_single_file` loader to improve load performance on network mounts (#10305)
* Add no_mmap arg.
* Fix arg parsing.
* Update another method to force no mmap.
* logging / logging2 / logging3 / logging4
* propagate no_mmap
* fix open call
* clean up logging
* cleanup
* fix missing arg
* update logging and comments
* Rename to disable_mmap and update other references.
* [Docs] Update ltx_video.md to remove generator from `from_pretrained()` (#10316)
* docs: fix a mistake in the pipeline_hunyuan_video.py docstring (#10319)
* [BUG FIX] [Stable Audio Pipeline] Resolve a `torch.Tensor.new_zeros()` TypeError in `prepare_latents` caused by `audio_vae_length` (#10306). `new_zeros()` takes a single `size` argument, a list, tuple, or `torch.Size` of integers defining the shape of the output tensor. In `prepare_latents`, `audio_vae_length = self.transformer.config.sample_size * self.vae.hop_length` evaluates to a float because `sample_size` is a float, so `audio = initial_audio_waveforms.new_zeros(audio_shape)`, where `audio_shape = (batch_size // num_waveforms_per_prompt, audio_channels, audio_vae_length)`, fails with "type must be tuple of ints, but got float".
* [docs] Fix quantization links (#10323): update overview.md
* [Sana] add 2K related model for Sana (#10322)
* Update src/diffusers/loaders/single_file_model.py and src/diffusers/loaders/single_file.py
* make style

Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Leojc <liao_junchao@outlook.com>
Co-authored-by: Aditya Raj <syntaxticsugr@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
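For reference, a minimal sketch of how the new option might be used; the checkpoint path is hypothetical:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hypothetical single-file checkpoint on a network mount; disable_mmap reads the
# file up front instead of memory-mapping it, avoiding slow page-fault I/O over the network.
pipe = StableDiffusionXLPipeline.from_single_file(
    "/mnt/shared/checkpoints/sdxl_base.safetensors",
    torch_dtype=torch.float16,
    disable_mmap=True,
)
```

And the Stable Audio fix amounts to making sure `new_zeros()` receives integer dimensions; a small sketch of the failure and one way to avoid it (not the exact patch):

```python
import torch

ref = torch.empty(2, 2)
sample_size, hop_length = 64.0, 32                  # config sample_size comes back as a float
# ref.new_zeros((1, 2, sample_size * hop_length))   # TypeError: size must be a tuple of ints
audio = ref.new_zeros((1, 2, int(sample_size * hop_length)))  # casting to int avoids the error
```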
-
hlky authored
* Use Pipelines without unet
* unet.config.in_channels
* default_sample_size
* is_unet_version_less_0_9_0

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 09 Jan, 2025 4 commits
-
-
Zehuan Huang authored
* Support passing kwargs to the CogVideoX custom attention processor
* Remove args in the CogVideoX attention processor
* Remove unused kwargs
-
Sayak Paul authored
* Factor out text encoder loading.
* make fix-copies
* Remove copied-from fuse_lora and unfuse_lora as needed.
* Remove unused imports.
-
Vladimir Mandic authored
* Don't assume the scheduler has optional config params
* make style, make fix-copies
* calculate_shift
* fix-copies, usage in pipelines

Co-authored-by: hlky <hlky@hlky.ac>
-
Steven Liu authored
* fix docstrings * add
-
- 08 Jan, 2025 8 commits
-
-
hlky authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Junsong Chen authored
add 4K support for Sana
-
hlky authored
-
Bagheera authored
* Fix for #7365: prevent pipelines from overriding provided prompt embeds
* fix-copies
* fix implementation
* update

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
-
Marc Sun authored
* Fix device issue in the single-GPU case
* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
AstraliteHeart authored
* Add support for loading AuraFlow models from GGUF (https://huggingface.co/city96/AuraFlow-v0.3-gguf)
* Update the AuraFlow documentation for GGUF; add GGUF tests and model detection.
* Address code review comments.
* Remove unused config.

Co-authored-by: hlky <hlky@hlky.ac>
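A minimal sketch of loading a GGUF AuraFlow checkpoint after this change, assuming the same `GGUFQuantizationConfig` path used for other GGUF-backed transformers; the exact GGUF file name is illustrative:

```python
import torch
from diffusers import AuraFlowPipeline, AuraFlowTransformer2DModel, GGUFQuantizationConfig

# Illustrative file from the repo linked above; the quantization level may differ.
ckpt = "https://huggingface.co/city96/AuraFlow-v0.3-gguf/blob/main/aura_flow_0.3-Q4_0.gguf"

transformer = AuraFlowTransformer2DModel.from_single_file(
    ckpt,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow-v0.3", transformer=transformer, torch_dtype=torch.bfloat16
)
```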
-
Junsong Chen authored
change clean_caption from True to False.
-
Aryan authored
* Set supports_gradient_checkpointing to True where necessary; add missing no_split_modules
* Fix CogVideoX tests
* update

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
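For context, gradient checkpointing on a diffusers model is toggled through the standard `ModelMixin` helper; a minimal sketch, with the CogVideoX checkpoint used purely as an example:

```python
import torch
from diffusers import CogVideoXTransformer3DModel

# Any model whose class now opts into gradient checkpointing works the same way.
model = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
)
model.enable_gradient_checkpointing()  # trades recompute for lower activation memory during training
```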
-
- 07 Jan, 2025 5 commits
-
-
hlky authored
* Use pipelines without vae
* getattr
* vqvae

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
hlky authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
hlky authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Aryan authored
* update
* fix make copies
* update
* Add relevant markers to the integration test suite.
* add copied.
* fix-copies
* Temporarily add print.
* Directly place on CUDA as the CPU isn't that big on the CI.
* Fixes to fuse_lora; Aryan was right.
* fixes

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Aryan authored
fix
-
- 06 Jan, 2025 6 commits
-
-
Ameer Azam authored
The RunwayML paths for v1.5 changed to stable-diffusion-v1-5/stable-diffusion-v1-5 and stable-diffusion-v1-5/stable-diffusion-inpainting (#10476)
* Update pipeline_controlnet.py
* Update pipeline_controlnet_img2img.py: the runwayml repos were taken down, so change references to stable-diffusion-v1-5/stable-diffusion-v1-5
* Update pipeline_controlnet_inpaint.py
* runwayml take-down: change references to sd-legacy (repeated across several files)
* Update convert_blipdiffusion_to_diffusers.py (style change)
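For reference, a minimal sketch of loading a ControlNet pipeline against the new repository id; the ControlNet checkpoint shown is only an example:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Example ControlNet; the relevant part is the base model id, now hosted under stable-diffusion-v1-5/.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # formerly runwayml/stable-diffusion-v1-5
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
```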
-
hlky authored
* Add torch_xla and from_single_file to instruct-pix2pix
* Add StableDiffusionInstructPix2PixPipelineSingleFileSlowTests

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Aryan authored
* fix
* add coauthor

Co-authored-by: Nerogar <nerogar@arcor.de>
-
Sayak Paul authored
* fix: LoRA unloading when using expanded Flux LoRAs.
* fix argument name.
* docs.

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
-
hlky authored
* LEditsPP: examples, check height/width, add tiling/slicing
* make style
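A hedged sketch of what the added tiling/slicing switches look like in use, assuming the LEdits++ pipelines expose the usual diffusers VAE helpers after this change:

```python
import torch
from diffusers import LEditsPPPipelineStableDiffusionXL

pipe = LEditsPPPipelineStableDiffusionXL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Reduce peak VAE memory when editing large images.
pipe.enable_vae_tiling()
pipe.enable_vae_slicing()
```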
-
hlky authored
`lora_bias` PEFT version check in `unet.load_attn_procs` path
-
- 05 Jan, 2025 1 commit
-
-
hlky authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 02 Jan, 2025 7 commits
-
-
hlky authored
Fix AutoPipeline `from_pipe` when the source pipeline is missing the target pipeline's optional components (#10400)
* Optional components in AutoPipeline
* missing_modules

Co-authored-by: YiYi Xu <yixu310@gmail.com>
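For context, `from_pipe` reuses the components of an already-loaded pipeline; a minimal sketch of the pattern the fix targets (the model id is only an example):

```python
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

pipe_t2i = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Reuses the loaded components; optional components missing from the source
# pipeline no longer break the conversion to the target pipeline.
pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i)
```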
-
Aryan authored
update
-
Sayak Paul authored
fix attribute adjustment for ltx.
-
Daniel Regado authored
* IP-Adapter support for `StableDiffusion3ControlNetPipeline`
* Update src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py

Co-authored-by: hlky <hlky@hlky.ac>
-
G.O.D authored
-
Junsong Chen authored
Fix PE bug for Sana

Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
maxs-kan authored
* Check for the base_layer key in the transformer state dict
* test_lora_expansion_works_for_absent_keys
* check
* Update tests/lora/test_lora_layers_flux.py
* check
* test_lora_expansion_works_for_absent_keys/test_lora_expansion_works_for_extra_keys
* absent->extra

Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 27 Dec, 2024 3 commits
-
-
hlky authored
SD3 pipelines hasattr
-
SahilCarterr authored
[Add] torch_xla support in pipeline_sana.py
-
Alan Ponnachan authored
* Add torch_xla support to pipeline_aura_flow.py
* make style

Co-authored-by: hlky <hlky@hlky.ac>
-
- 25 Dec, 2024 2 commits
-
-
Sayak Paul authored
* feat: support unload_lora_weights() for Flux Control.
* tighten test
* minor
* updates
* meta device fixes.
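A minimal sketch of the flow this enables, assuming the Flux Control LoRA setup; the repository ids are examples:

```python
import torch
from diffusers import FluxControlPipeline

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Loading a Control LoRA expands the transformer's input projection...
pipe.load_lora_weights("black-forest-labs/FLUX.1-Canny-dev-lora")

# ...and unloading now restores the original, un-expanded modules.
pipe.unload_lora_weights()
```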
-
Aryan authored
* Revert "Add support for sharded models when TorchAO quantization is enabled (#10256)" This reverts commit 41ba8c0b . * update tests * udpate * update * update * update device map tests * apply review suggestions * update * make style * fix * update docs * update tests * update workflow * update * improve tests * allclose tolerance * Update src/diffusers/models/modeling_utils.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Update tests/quantization/torchao/test_torchao.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * improve tests * fix * update correct slices --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
-
- 24 Dec, 2024 2 commits
-
-
Eliseu Silva authored
Make passing the IP Adapter mask to the attention mechanism optional if there is no need to apply it to a given IP Adapter.
-
YiYi Xu authored
* Fix bug for torch.uint1-7 not being supported in torch<2.6 (https://github.com/huggingface/diffusers/pull/10368)
* up

Co-authored-by: baymax591 <cbai@mail.nwpu.edu.cn>
-