- 16 Jan, 2025 3 commits
-
Leo Jiang authored
* NPU adaptation for RMSNorm
---------
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
-
hlky authored
* Move buffers to device
* add test
* named_buffers
-
Junyu Chen authored
* autoencoder_dc tiling
* add tiling and slicing support in SANA pipelines
* create variables for padding length because the line becomes too long
* add tiling and slicing support in pag SANA pipelines
* revert changes to tile size
* make style
* add vae tiling test
* fix SanaMultiscaleLinearAttention apply_quadratic_attention bf16
---------
Co-authored-by: Aryan <aryan@huggingface.co>
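The tiling added here splits a large latent into overlapping tiles so the VAE only decodes one tile at a time. A minimal pure-Python sketch of the index arithmetic behind that idea (not the diffusers implementation; function name and tile/overlap values are hypothetical):

```python
# Illustrative sketch of VAE tiling index computation: cover [0, length)
# with overlapping tiles so each tile can be decoded independently.
def tile_starts(length: int, tile: int, overlap: int) -> list[int]:
    """Start offsets of overlapping tiles covering [0, length)."""
    stride = tile - overlap
    starts = list(range(0, max(length - overlap, 1), stride))
    # Clamp the final tile so it ends exactly at `length`.
    if starts[-1] + tile > length:
        starts[-1] = max(length - tile, 0)
    return starts

print(tile_starts(100, 32, 8))  # e.g. a 100-px axis, 32-px tiles, 8-px overlap
```

Decoded tiles are then blended across the overlap region to hide seams; the same pattern applies per spatial axis.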
-
- 14 Jan, 2025 2 commits
-
Marc Sun authored
* load and save dduf archive
* style
* switch to zip uncompressed
* updates
* Update src/diffusers/pipelines/pipeline_utils.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* Update src/diffusers/pipelines/pipeline_utils.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* first draft
* remove print
* switch to dduf_file for consistency
* switch to huggingface hub api
* fix log
* add a basic test
* Update src/diffusers/configuration_utils.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* Update src/diffusers/pipelines/pipeline_utils.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* Update src/diffusers/pipelines/pipeline_utils.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* fix
* fix variant
* change saving logic
* DDUF - Load transformers components manually (#10171)
* update hfh version
* Load transformers components manually
* load encoder from_pretrained with state_dict
* working version with transformers and tokenizer!
* add generation_config case
* fix tests
* remove saving for now
* typing
* need next version from transformers
* Update src/diffusers/configuration_utils.py (Co-authored-by: Lucain <lucain@huggingface.co>)
* check path correctly
* Apply suggestions from code review (Co-authored-by: Lucain <lucain@huggingface.co>)
* update
* typing
* remove check for subfolder
* quality
* revert setup changes
* oops
* more readable condition
* add loading from the hub test
* add basic docs
* Apply suggestions from code review (Co-authored-by: Lucain <lucain@huggingface.co>)
* add example
* add
* make functions private
* Apply suggestions from code review (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* minor
* fixes
* fix
* change the precedence of parameterized
* error out when custom pipeline is passed with dduf_file
* updates
* fix
* updates
* fixes
* updates
* fix xfail condition
* fix xfail
* fixes
* sharded checkpoint compat
* add test for sharded checkpoint
* add suggestions
* Update src/diffusers/models/model_loading_utils.py (Co-authored-by: YiYi Xu <yixu310@gmail.com>)
* from suggestions
* add class attributes to flag dduf tests
* last one
* fix logic
* remove comment
* revert changes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Lucain <lucain@huggingface.co>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
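Per the "switch to zip uncompressed" bullet above, a DDUF archive is a plain zip container written without compression, so entries can be read in place. A stdlib-only sketch of that container idea (entry names and contents are hypothetical, not a real pipeline layout):

```python
# Sketch of an uncompressed zip container in the spirit of DDUF:
# ZIP_STORED means entries are stored byte-for-byte, not deflated.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as zf:
    zf.writestr("model_index.json", '{"_class_name": "ExamplePipeline"}')
    zf.writestr("vae/config.json", '{"sample_size": 64}')

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    # Verify every entry is stored, not compressed.
    stored = all(i.compress_type == zipfile.ZIP_STORED for i in zf.infolist())

print(names, stored)
```

Storing entries uncompressed trades disk size for load speed, since component files can be located and read directly by offset.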
-
hlky authored
-
- 13 Jan, 2025 1 commit
-
Vinh H. Pham authored
* add framewise decode
* add framewise encode, refactor tiled encode/decode
* add sanity test tiling for ltx
* run make style
* Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py (Co-authored-by: Aryan <contact.aryanvs@gmail.com>)
---------
Co-authored-by: Pham Hong Vinh <vinhph3@vng.com.vn>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
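Framewise encode/decode processes a video in fixed-size chunks of frames instead of all frames at once, bounding peak memory. A minimal pure-Python sketch of the chunking pattern (not the autoencoder_kl_ltx code; function name and chunk size are hypothetical):

```python
# Illustrative framewise processing: apply `fn` to chunks of frames and
# concatenate the results, rather than passing every frame at once.
def process_framewise(frames, chunk_size, fn):
    out = []
    for i in range(0, len(frames), chunk_size):
        out.extend(fn(frames[i:i + chunk_size]))
    return out

# Toy stand-in: "decode" 10 frames in chunks of 4 by doubling each value.
doubled = process_framewise(list(range(10)), 4, lambda chunk: [f * 2 for f in chunk])
print(doubled)
```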
-
- 11 Jan, 2025 1 commit
-
Junyu Chen authored
* autoencoder_dc tiling
* add tiling and slicing support in SANA pipelines
* create variables for padding length because the line becomes too long
* add tiling and slicing support in pag SANA pipelines
* revert changes to tile size
* make style
* add vae tiling test
---------
Co-authored-by: Aryan <aryan@huggingface.co>
-
- 10 Jan, 2025 1 commit
-
Daniel Hipke authored
Add a `disable_mmap` option to the `from_single_file` loader to improve load performance on network mounts (#10305)
* Add no_mmap arg.
* Fix arg parsing.
* Update another method to force no mmap.
* logging
* logging2
* propagate no_mmap
* logging3
* propagate no_mmap
* logging4
* fix open call
* clean up logging
* cleanup
* fix missing arg
* update logging and comments
* Rename to disable_mmap and update other references.
* [Docs] Update ltx_video.md to remove generator from `from_pretrained()` (#10316)
* docs: fix a mistake in the pipeline_hunyuan_video.py docstring (#10319)
* [BUG FIX] [Stable Audio Pipeline] Resolve torch.Tensor.new_zeros() TypeError in prepare_latents caused by audio_vae_length (#10306)
  torch.Tensor.new_zeros() takes a single argument size (int...), a list, tuple, or torch.Size of integers defining the shape of the output tensor. In prepare_latents:
      audio_vae_length = self.transformer.config.sample_size * self.vae.hop_length
      audio_shape = (batch_size // num_waveforms_per_prompt, audio_channels, audio_vae_length)
      ...
      audio = initial_audio_waveforms.new_zeros(audio_shape)
  audio_vae_length evaluates to a float because self.transformer.config.sample_size returns a float, which raises: TypeError: new_zeros(): argument 'size' failed to unpack the object at pos 3 with error "type must be tuple of ints, but got float".
  (Co-authored-by: hlky <hlky@hlky.ac>)
* [docs] Fix quantization links (#10323): Update overview.md
* [Sana] add 2K related model for Sana (#10322)
* Update src/diffusers/loaders/single_file_model.py (Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>)
* Update src/diffusers/loaders/single_file.py (Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>)
* make style
---------
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Leojc <liao_junchao@outlook.com>
Co-authored-by: Aditya Raj <syntaxticsugr@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
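The Stable Audio bug described above comes down to a shape tuple containing a float; the fix is an integer cast. A torch-free sketch with scalar stand-ins (the numeric values are hypothetical; only the `int()` cast mirrors the actual fix):

```python
# Minimal reproduction of the shape-type bug, without torch:
# a tensor shape must be a tuple of ints, but sample_size arrived as a float.
sample_size = 1024.0          # stand-in for transformer.config.sample_size (a float)
hop_length = 2048             # stand-in for vae.hop_length

audio_vae_length = sample_size * hop_length   # float * int -> float
# Buggy shape would be (batch, channels, 2097152.0), which new_zeros() rejects.
audio_shape = (1, 2, int(audio_vae_length))   # cast to int, as in the fix

print(audio_shape)
```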
-
- 09 Jan, 2025 1 commit
-
Zehuan Huang authored
* Support passing kwargs to the cogvideox custom attention processor
* remove args in cogvideox attn processor
* remove unused kwargs
-
- 08 Jan, 2025 4 commits
-
hlky authored
-
Marc Sun authored
* fix device issue in single gpu case
* Update src/diffusers/pipelines/pipeline_utils.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
AstraliteHeart authored
* Add support for loading AuraFlow models from GGUF (https://huggingface.co/city96/AuraFlow-v0.3-gguf)
* Update AuraFlow documentation for GGUF, add GGUF tests and model detection.
* Address code review comments.
* Remove unused config.
---------
Co-authored-by: hlky <hlky@hlky.ac>
-
Aryan authored
* set supports gradient checkpointing to true where necessary; add missing no split modules
* fix cogvideox tests
* update
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 07 Jan, 2025 1 commit
-
hlky authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 06 Jan, 2025 2 commits
-
Ameer Azam authored
The RunwayML paths for v1.5 changed to stable-diffusion-v1-5/[stable-diffusion-v1-5 | stable-diffusion-inpainting] (#10476)
* Update pipeline_controlnet.py
* Update pipeline_controlnet_img2img.py: after the runwayml takedown, change all references to stable-diffusion-v1-5/stable-diffusion-v1-5
* Update pipeline_controlnet_inpaint.py
* runwayml takedown: change to sd-legacy
* runwayml takedown: change to sd-legacy
* runwayml takedown: change to sd-legacy
* runwayml takedown: change to sd-legacy
* Update convert_blipdiffusion_to_diffusers.py: style change
-
Aryan authored
* fix
* add coauthor (Co-Authored-By: Nerogar <nerogar@arcor.de>)
---------
Co-authored-by: Nerogar <nerogar@arcor.de>
-
- 02 Jan, 2025 3 commits
-
Aryan authored
update
-
G.O.D authored
-
Junsong Chen authored
fix pe bug for Sana
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 25 Dec, 2024 1 commit
-
Aryan authored
* Revert "Add support for sharded models when TorchAO quantization is enabled (#10256)". This reverts commit 41ba8c0b.
* update tests
* update
* update
* update
* update device map tests
* apply review suggestions
* update
* make style
* fix
* update docs
* update tests
* update workflow
* update
* improve tests
* allclose tolerance
* Update src/diffusers/models/modeling_utils.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* Update tests/quantization/torchao/test_torchao.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* improve tests
* fix
* update correct slices
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 24 Dec, 2024 1 commit
-
Eliseu Silva authored
Make passing the IP Adapter mask to the attention mechanism optional if there is no need to apply it to a given IP Adapter.
-
- 23 Dec, 2024 7 commits
-
Aryan authored
* update
* make style
* update
* update
* update
* make style
* single file related changes
* update
* fix
* update single file urls and docs
* update
* fix
-
Aryan authored
* rename blocks and docs
* fix docs
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Aryan authored
fix
-
Thien Tran authored
Add missing `.shape`
-
Junsong Chen authored
* fix the Positional Embedding bug in the 2K model
* Change the default model to the BF16 one for more stable training and output
* make style
* subtract buffer size
* add compute_module_persistent_sizes
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
Dhruv Nair authored
* update
* Update src/diffusers/loaders/single_file_utils.py (Co-authored-by: Aryan <aryan@huggingface.co>)
---------
Co-authored-by: Aryan <aryan@huggingface.co>
-
YiYi Xu authored
add: q
-
- 21 Dec, 2024 1 commit
-
hlky authored
* Flux IP-Adapter
* test cfg
* make style
* temp remove copied from
* fix test
* fix test
* v2
* fix
* make style
* temp remove copied from
* Apply suggestions from code review (Co-authored-by: YiYi Xu <yixu310@gmail.com>)
* Move encoder_hid_proj to inside FluxTransformer2DModel
* merge
* separate encode_prompt, add copied from, image_encoder offload
* make
* fix test
* fix
* Update src/diffusers/pipelines/flux/pipeline_flux.py
* test_flux_prompt_embeds change not needed
* true_cfg -> true_cfg_scale
* fix merge conflict
* test_flux_ip_adapter_inference
* add fast test
* FluxIPAdapterMixin not test mixin
* Update pipeline_flux.py (Co-authored-by: YiYi Xu <yixu310@gmail.com>)
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
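The `true_cfg -> true_cfg_scale` rename above refers to the scale used in the classifier-free guidance combination step. A scalar sketch of the standard CFG formula (real code operates on noise-prediction tensors; the values here are hypothetical stand-ins):

```python
# Sketch of the classifier-free guidance step behind `true_cfg_scale`:
# move from the negative (unconditional) prediction toward the positive one.
def true_cfg(neg_pred: float, pos_pred: float, true_cfg_scale: float) -> float:
    return neg_pred + true_cfg_scale * (pos_pred - neg_pred)

print(true_cfg(0.2, 1.0, 3.5))
```

A scale of 1.0 returns the positive prediction unchanged; larger scales extrapolate past it, strengthening prompt adherence.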
-
- 20 Dec, 2024 4 commits
-
Aryan authored
contiguous tensors in resnet
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Aryan authored
* add sharded + device_map check
-
Daniel Regado authored
* Added support for single IPAdapter on SD3.5 pipeline
---------
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
dg845 authored
* Port UNet2DModel gradient checkpointing code from #6718.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Vincent Neemie <92559302+VincentNeemie@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
-
- 19 Dec, 2024 5 commits
-
djm authored
-
Dhruv Nair authored
update
-
Dhruv Nair authored
update
-
Shenghai Yuan authored
* 1217
* 1217
* 1217
* update
* reverse
* add test
* update test
* make style
* update
* make style
---------
Co-authored-by: Aryan <aryan@huggingface.co>
-
Aryan authored
* update
* update
* fix test
-
- 18 Dec, 2024 2 commits
-
Aryan authored
fix joint pos embedding device
-
Qin Zhou authored
* Support passing kwargs to the sd3 custom attention processor
---------
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-