- 08 Jul, 2025 1 commit
Dhruv Nair authored
* update * update
- 13 Jun, 2025 1 commit
Sayak Paul authored
* feat: parse metadata from lora state dicts. * tests * fix tests * key renaming * fix * smol update * smol updates * load metadata. * automatically save metadata in save_lora_adapter. * propagate changes. * changes * add test to models too. * tighter tests. * updates * fixes * rename tests. * sorted. * Update src/diffusers/loaders/lora_base.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * review suggestions. * removeprefix. * propagate changes. * fix-copies * sd * docs. * fixes * get review ready. * one more test to catch error. * change to a different approach. * fix-copies. * todo * sd3 * update * revert changes in get_peft_kwargs. * update * fixes * fixes * simplify _load_sft_state_dict_metadata * update * style fix * uipdate * update * update * empty commit * _pack_dict_with_prefix * update * TODO 1. * todo: 2. * todo: 3. * update * update * Apply suggestions from code review Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * reraise. * move argument. --------- Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> Co-authored-by:
Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
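
For context, the metadata this change parses and saves is carried in the safetensors header of the LoRA file. A minimal sketch of inspecting that header (the file name is a placeholder, and the exact keys diffusers writes are an assumption):

```python
# Peek at the metadata block of a saved LoRA file. The file name is a
# placeholder; the exact keys written by diffusers are an assumption.
from safetensors import safe_open

with safe_open("pytorch_lora_weights.safetensors", framework="pt") as f:
    metadata = f.metadata() or {}

for key, value in metadata.items():
    print(f"{key}: {value}")
```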
- 11 Jun, 2025 2 commits
Sayak Paul authored
* start adding compilation tests for quantization. * fixes * make common utility. * modularize. * add group offloading+compile * xfail * update * Update tests/quantization/test_torch_compile_utils.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * fixes --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com>
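
As a rough sketch of the pattern these tests exercise (a quantized model passed through torch.compile), assuming the bitsandbytes backend and an illustrative model id:

```python
# Sketch only: compile a 4-bit quantized Flux transformer. The model id and
# config values are illustrative, not taken from the test utility.
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
transformer = torch.compile(transformer, fullgraph=True)
```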
Yao Matrix authored
* enable torchao cases on XPU Signed-off-by:
Matrix YAO <matrix.yao@intel.com> * device agnostic APIs Signed-off-by:
YAO Matrix <matrix.yao@intel.com> * more Signed-off-by:
YAO Matrix <matrix.yao@intel.com> * fix style Signed-off-by:
YAO Matrix <matrix.yao@intel.com> * enable test_torch_compile_recompilation_and_graph_break on XPU Signed-off-by:
YAO Matrix <matrix.yao@intel.com> * resolve comments Signed-off-by:
YAO Matrix <matrix.yao@intel.com> --------- Signed-off-by:
Matrix YAO <matrix.yao@intel.com> Signed-off-by:
YAO Matrix <matrix.yao@intel.com>
- 26 May, 2025 1 commit
Yao Matrix authored
* enable several pipeline integration tests on xpu Signed-off-by:
Yao Matrix <matrix.yao@intel.com> * fix style Signed-off-by:
Yao Matrix <matrix.yao@intel.com> * update per comments Signed-off-by:
Matrix Yao <matrix.yao@intel.com> --------- Signed-off-by:
Yao Matrix <matrix.yao@intel.com> Signed-off-by:
Matrix Yao <matrix.yao@intel.com>
- 19 May, 2025 1 commit
Yao Matrix authored
enhance value guard Signed-off-by: Matrix Yao <matrix.yao@intel.com>
- 09 May, 2025 1 commit
Sayak Paul authored
* feat: pipeline-level quant config. Co-authored-by:
SunMarc <marc.sun@hotmail.fr> condition better. support mapping. improvements. [Quantization] Add Quanto backend (#10756) * update * updaet * update * update * update * update * update * update * update * update * update * update * Update docs/source/en/quantization/quanto.md Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * Update src/diffusers/quantizers/quanto/utils.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * update * update --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> [Single File] Add single file loading for SANA Transformer (#10947) * added support for from_single_file * added diffusers mapping script * added testcase * bug fix * updated tests * corrected code quality * corrected code quality --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> [LoRA] Improve warning messages when LoRA loading becomes a no-op (#10187) * updates * updates * updates * updates * notebooks revert * fix-copies. * seeing * fix * revert * fixes * fixes * fixes * remove print * fix * conflicts ii. * updates * fixes * better filtering of prefix. --------- Co-authored-by:
hlky <hlky@hlky.ac> [LoRA] CogView4 (#10981) * update * make fix-copies * update [Tests] improve quantization tests by additionally measuring the inference memory savings (#11021) * memory usage tests * fixes * gguf [`Research Project`] Add AnyText: Multilingual Visual Text Generation And Editing (#8998) * Add initial template * Second template * feat: Add TextEmbeddingModule to AnyTextPipeline * feat: Add AuxiliaryLatentModule template to AnyTextPipeline * Add bert tokenizer from the anytext repo for now * feat: Update AnyTextPipeline's modify_prompt method This commit adds improvements to the modify_prompt method in the AnyTextPipeline class. The method now handles special characters and replaces selected string prompts with a placeholder. Additionally, it includes a check for Chinese text and translation using the trans_pipe. * Fill in the `forward` pass of `AuxiliaryLatentModule` * `make style && make quality` * `chore: Update bert_tokenizer.py with a TODO comment suggesting the use of the transformers library` * Update error handling to raise and logging * Add `create_glyph_lines` function into `TextEmbeddingModule` * make style * Up * Up * Up * Up * Remove several comments * refactor: Remove ControlNetConditioningEmbedding and update code accordingly * Up * Up * up * refactor: Update AnyTextPipeline to include new optional parameters * up * feat: Add OCR model and its components * chore: Update `TextEmbeddingModule` to include OCR model components and dependencies * chore: Update `AuxiliaryLatentModule` to include VAE model and its dependencies for masked image in the editing task * `make style` * refactor: Update `AnyTextPipeline`'s docstring * Update `AuxiliaryLatentModule` to include info dictionary so that text processing is done once * simplify * `make style` * Converting `TextEmbeddingModule` to ordinary `encode_prompt()` function * Simplify for now * `make style` * Up * feat: Add scripts to convert AnyText controlnet to diffusers * `make style` * Fix: Move glyph rendering to `TextEmbeddingModule` from `AuxiliaryLatentModule` * make style * Up * Simplify * Up * feat: Add safetensors module for loading model file * Fix device issues * Up * Up * refactor: Simplify * refactor: Simplify code for loading models and handling data types * `make style` * refactor: Update to() method in FrozenCLIPEmbedderT3 and TextEmbeddingModule * refactor: Update dtype in embedding_manager.py to match proj.weight * Up * Add attribution and adaptation information to pipeline_anytext.py * Update usage example * Will refactor `controlnet_cond_embedding` initialization * Add `AnyTextControlNetConditioningEmbedding` template * Refactor organization * style * style * Move custom blocks from `AuxiliaryLatentModule` to `AnyTextControlNetConditioningEmbedding` * Follow one-file policy * style * [Docs] Update README and pipeline_anytext.py to use AnyTextControlNetModel * [Docs] Update import statement for AnyTextControlNetModel in pipeline_anytext.py * [Fix] Update import path for ControlNetModel, ControlNetOutput in anytext_controlnet.py * Refactor AnyTextControlNet to use configurable conditioning embedding channels * Complete control net conditioning embedding in AnyTextControlNetModel * up * [FIX] Ensure embeddings use correct device in AnyTextControlNetModel * up * up * style * [UPDATE] Revise README and example code for AnyTextPipeline integration with DiffusionPipeline * [UPDATE] Update example code in anytext.py to use correct font file and improve clarity * down * [UPDATE] Refactor BasicTokenizer 
usage to a new Checker class for text processing * update pillow * [UPDATE] Remove commented-out code and unnecessary docstring in anytext.py and anytext_controlnet.py for improved clarity * [REMOVE] Delete frozen_clip_embedder_t3.py as it is in the anytext.py file * [UPDATE] Replace edict with dict for configuration in anytext.py and RecModel.py for consistency *
🆙 * style * [UPDATE] Revise README.md for clarity, remove unused imports in anytext.py, and add author credits in anytext_controlnet.py * style * Update examples/research_projects/anytext/README.md Co-authored-by:Aryan <contact.aryanvs@gmail.com> * Remove commented-out image preparation code in AnyTextPipeline * Remove unnecessary blank line in README.md [Quantization] Allow loading TorchAO serialized Tensor objects with torch>=2.6 (#11018) * update * update * update * update * update * update * update * update * update fix: mixture tiling sdxl pipeline - adjust gerating time_ids & embeddings (#11012) small fix on generating time_ids & embeddings [LoRA] support wan i2v loras from the world. (#11025) * support wan i2v loras from the world. * remove copied from. * upates * add lora. Fix SD3 IPAdapter feature extractor (#11027) chore: fix help messages in advanced diffusion examples (#10923) Fix missing **kwargs in lora_pipeline.py (#11011) * Update lora_pipeline.py * Apply style fixes * fix-copies --------- Co-authored-by:
hlky <hlky@hlky.ac> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com> Fix for multi-GPU WAN inference (#10997) Ensure that hidden_state and shift/scale are on the same device when running with multiple GPUs Co-authored-by: Jimmy <39@
🇺🇸 .com> [Refactor] Clean up import utils boilerplate (#11026) * update * update * update Use `output_size` in `repeat_interleave` (#11030) [hybrid inference🍯 🐝 ] Add VAE encode (#11017) * [hybrid inference🍯 🐝 ] Add VAE encode * _toctree: add vae encode * Add endpoints, tests * vae_encode docs * vae encode benchmarks * api reference * changelog * Update docs/source/en/hybrid_inference/overview.md Co-authored-by:Sayak Paul <spsayakpaul@gmail.com> * update --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Wan Pipeline scaling fix, type hint warning, multi generator fix (#11007) * Wan Pipeline scaling fix, type hint warning, multi generator fix * Apply suggestions from code review [LoRA] change to warning from info when notifying the users about a LoRA no-op (#11044) * move to warning. * test related changes. Rename Lumina(2)Text2ImgPipeline -> Lumina(2)Pipeline (#10827) * Rename Lumina(2)Text2ImgPipeline -> Lumina(2)Pipeline --------- Co-authored-by:
YiYi Xu <yixu310@gmail.com> making ```formatted_images``` initialization compact (#10801) compact writing Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com> Fix aclnnRepeatInterleaveIntWithDim error on NPU for get_1d_rotary_pos_embed (#10820) * get_1d_rotary_pos_embed support npu * Update src/diffusers/models/embeddings.py --------- Co-authored-by:
Kai zheng <kaizheng@KaideMacBook-Pro.local> Co-authored-by:
hlky <hlky@hlky.ac> Co-authored-by:
YiYi Xu <yixu310@gmail.com> [Tests] restrict memory tests for quanto for certain schemes. (#11052) * restrict memory tests for quanto for certain schemes. * Apply suggestions from code review Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * fixes * style --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> [LoRA] feat: support non-diffusers wan t2v loras. (#11059) feat: support non-diffusers wan t2v loras. [examples/controlnet/train_controlnet_sd3.py] Fixes #11050 - Cast prompt_embeds and pooled_prompt_embeds to weight_dtype to prevent dtype mismatch (#11051) Fix: dtype mismatch of prompt embeddings in sd3 controlnet training Co-authored-by:
Andreas Jörg <andreasjoerg@MacBook-Pro-von-Andreas-2.fritz.box> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> reverts accidental change that removes attn_mask in attn. Improves fl… (#11065) reverts accidental change that removes attn_mask in attn. Improves flux ptxla by using flash block sizes. Moves encoding outside the for loop. Co-authored-by:
Juan Acevedo <jfacevedo@google.com> Fix deterministic issue when getting pipeline dtype and device (#10696) Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> [Tests] add requires peft decorator. (#11037) * add requires peft decorator. * install peft conditionally. * conditional deps. Co-authored-by:
DN6 <dhruv.nair@gmail.com> --------- Co-authored-by:
DN6 <dhruv.nair@gmail.com> CogView4 Control Block (#10809) * cogview4 control training --------- Co-authored-by:
OleehyO <leehy0357@gmail.com> Co-authored-by:
yiyixuxu <yixu310@gmail.com> [CI] pin transformers version for benchmarking. (#11067) pin transformers version for benchmarking. updates Fix Wan I2V Quality (#11087) * fix_wan_i2v_quality * Update src/diffusers/pipelines/wan/pipeline_wan_i2v.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * Update src/diffusers/pipelines/wan/pipeline_wan_i2v.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * Update src/diffusers/pipelines/wan/pipeline_wan_i2v.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * Update pipeline_wan_i2v.py --------- Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
hlky <hlky@hlky.ac> LTX 0.9.5 (#10968) * update --------- Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
hlky <hlky@hlky.ac> make PR GPU tests conditioned on styling. (#11099) Group offloading improvements (#11094) update Fix pipeline_flux_controlnet.py (#11095) * Fix pipeline_flux_controlnet.py * Fix style update readme instructions. (#11096) Co-authored-by:
Juan Acevedo <jfacevedo@google.com> Resolve stride mismatch in UNet's ResNet to support Torch DDP (#11098) Modify UNet's ResNet implementation to resolve stride mismatch in Torch's DDP Fix Group offloading behaviour when using streams (#11097) * update * update Quality options in `export_to_video` (#11090) * Quality options in `export_to_video` * make style improve more. add placeholders for docstrings. formatting. smol fix. solidify validation and annotation * Revert "feat: pipeline-level quant config." This reverts commit 316ff46b7648bfa24525ac02c284afcf440404aa. * feat: implement pipeline-level quantization config Co-authored-by:
SunMarc <marc@huggingface.co> * update * fixes * fix validation. * add tests and other improvements. * add tests * import quality * remove prints. * add docs. * fixes to docs. * doc fixes. * doc fixes. * add validation to the input quantization_config. * clarify recommendations. * docs * add to ci. * todo. --------- Co-authored-by:
SunMarc <marc@huggingface.co>
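
The headline feature here is a single quantization config applied at the pipeline level. A rough sketch of the intended usage, with the import path, backend name, and component list treated as assumptions to verify against the release docs:

```python
# Sketch of pipeline-level quantization; argument names and the model id are
# assumptions for illustration.
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```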
- 28 Apr, 2025 1 commit
Yao Matrix authored
* enable test_layerwise_casting_memory cases on XPU Signed-off-by:
Yao Matrix <matrix.yao@intel.com> * fix style Signed-off-by:
Yao Matrix <matrix.yao@intel.com> --------- Signed-off-by:
Yao Matrix <matrix.yao@intel.com>
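
The test being enabled covers layerwise storage casting (weights stored in a low-precision dtype and upcast per layer at compute time). A minimal sketch of the feature, with the model id and dtypes as illustrative choices:

```python
# Sketch: store weights in float8 and upcast per layer at compute time.
# Model id and dtypes are illustrative.
import torch
from diffusers import CogVideoXTransformer3DModel

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
)
transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)
```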
- 09 Apr, 2025 1 commit
Dhruv Nair authored
* update * update * update * update
- 08 Apr, 2025 1 commit
Yao Matrix authored
* add arch specific expectations support, to support different architectures' numerical characteristics Signed-off-by:
YAO Matrix <matrix.yao@intel.com> * fix typo Signed-off-by:
YAO Matrix <matrix.yao@intel.com> * Apply suggestions from code review * Apply style fixes * Update src/diffusers/utils/testing_utils.py --------- Signed-off-by:
YAO Matrix <matrix.yao@intel.com> Co-authored-by:
hlky <hlky@hlky.ac> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
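
The idea is to key expected numerical outputs by device type and architecture rather than hard-coding one set of values. This is not the diffusers helper itself, just a hypothetical sketch of the pattern:

```python
# Hypothetical sketch of arch-specific expectations; names and values are
# made up for illustration.
EXPECTED_SLICES = {
    ("cuda", 8): [0.3281, 0.4062, 0.5117],   # e.g. Ampere and newer
    ("cuda", 7): [0.3284, 0.4059, 0.5120],
    ("xpu", None): [0.3279, 0.4065, 0.5114],
}

def expected_slice(device_type, arch_major=None):
    # Prefer an arch-specific entry, fall back to a device-wide one.
    return EXPECTED_SLICES.get((device_type, arch_major), EXPECTED_SLICES.get((device_type, None)))
```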
- 04 Apr, 2025 1 commit
Kenneth Gerald Hamilton authored
* Fixed requests.get function call by adding timeout parameter. * declare DIFFUSERS_REQUEST_TIMEOUT in constants and import when needed * remove unneeded os import * Apply style fixes --------- Co-authored-by:
Sai-Suraj-27 <sai.suraj.27.729@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
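
The fix boils down to passing a shared timeout to every outbound request. An illustrative sketch (only the constant's name comes from the commit message; its value here is a guess):

```python
# Illustration of the pattern: a module-level timeout constant reused by
# every requests.get call. The value is a guess.
import requests

DIFFUSERS_REQUEST_TIMEOUT = 60  # seconds

response = requests.get("https://huggingface.co/api/models", timeout=DIFFUSERS_REQUEST_TIMEOUT)
response.raise_for_status()
```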
- 02 Apr, 2025 1 commit
Yao Matrix authored
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
- 20 Mar, 2025 1 commit
Fanli Lin authored
* enable bnb on xpu * add 2 more cases * add missing change * add missing change * add one more * enable cuda only tests on xpu * enable big gpu cases
- 19 Mar, 2025 1 commit
Fanli Lin authored
* enable bnb on xpu * add 2 more cases * add missing change * add missing change * add one more
- 14 Mar, 2025 1 commit
Sayak Paul authored
* restrict memory tests for quanto for certain schemes. * Apply suggestions from code review Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * fixes * style --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com>
- 21 Jan, 2025 1 commit
Fanli Lin authored
* initial commit * fix empty cache * fix one more * fix style * update device functions * update * update * Update src/diffusers/utils/testing_utils.py Co-authored-by:
hlky <hlky@hlky.ac> * Update src/diffusers/utils/testing_utils.py Co-authored-by:
hlky <hlky@hlky.ac> * Update src/diffusers/utils/testing_utils.py Co-authored-by:
hlky <hlky@hlky.ac> * Update tests/pipelines/controlnet/test_controlnet.py Co-authored-by:
hlky <hlky@hlky.ac> * Update src/diffusers/utils/testing_utils.py Co-authored-by:
hlky <hlky@hlky.ac> * Update src/diffusers/utils/testing_utils.py Co-authored-by:
hlky <hlky@hlky.ac> * Update tests/pipelines/controlnet/test_controlnet.py Co-authored-by:
hlky <hlky@hlky.ac> * with gc.collect * update * make style * check_torch_dependencies * add mps empty cache * bug fix * Apply suggestions from code review --------- Co-authored-by:
hlky <hlky@hlky.ac>
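
These updates route cache clearing and similar calls through device-agnostic helpers instead of hard-coded torch.cuda calls. A rough sketch of the dispatch idea (not the exact helpers in diffusers.utils.testing_utils):

```python
# Rough sketch of a device-agnostic empty_cache; the real helpers live in
# diffusers.utils.testing_utils and may differ.
import torch

def empty_backend_cache(device_type: str) -> None:
    if device_type == "cuda":
        torch.cuda.empty_cache()
    elif device_type == "xpu":
        torch.xpu.empty_cache()
    elif device_type == "mps":
        torch.mps.empty_cache()
    # CPU needs no cache management.
```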
- 14 Jan, 2025 1 commit
Marc Sun authored
* load and save dduf archive * style * switch to zip uncompressed * updates * Update src/diffusers/pipelines/pipeline_utils.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/pipelines/pipeline_utils.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * first draft * remove print * switch to dduf_file for consistency * switch to huggingface hub api * fix log * add a basic test * Update src/diffusers/configuration_utils.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/pipelines/pipeline_utils.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/pipelines/pipeline_utils.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * fix * fix variant * change saving logic * DDUF - Load transformers components manually (#10171) * update hfh version * Load transformers components manually * load encoder from_pretrained with state_dict * working version with transformers and tokenizer ! * add generation_config case * fix tests * remove saving for now * typing * need next version from transformers * Update src/diffusers/configuration_utils.py Co-authored-by:
Lucain <lucain@huggingface.co> * check path correctly * Apply suggestions from code review Co-authored-by:
Lucain <lucain@huggingface.co> * update * typing * remove check for subfolder * quality * revert setup changes * oops * more readable condition * add loading from the hub test * add basic docs. * Apply suggestions from code review Co-authored-by:
Lucain <lucain@huggingface.co> * add example * add * make functions private * Apply suggestions from code review Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * minor. * fixes * fix * change the precedence of parameterized. * error out when custom pipeline is passed with dduf_file. * updates * fix * updates * fixes * updates * fix xfail condition. * fix xfail * fixes * sharded checkpoint compat * add test for sharded checkpoint * add suggestions * Update src/diffusers/models/model_loading_utils.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * from suggestions * add class attributes to flag dduf tests * last one * fix logic * remove comment * revert changes --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Lucain <lucain@huggingface.co> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com>
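
A short sketch of loading a pipeline from a DDUF archive via the dduf_file argument introduced here (the repo id and file name are illustrative):

```python
# Sketch: load a pipeline from a DDUF archive. Repo and file names are
# illustrative.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "DDUF/FLUX.1-dev-DDUF",
    dduf_file="FLUX.1-dev.dduf",
    torch_dtype=torch.bfloat16,
)
```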
- 23 Dec, 2024 1 commit
Aryan authored
* bump min torchao version to 0.7.0 * update
- 17 Dec, 2024 1 commit
Dhruv Nair authored
* update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * Update src/diffusers/quantizers/gguf/utils.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * update * update * update * update * update * update * update * update * update * update * Update docs/source/en/quantization/gguf.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * update * update * update * update --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com>
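
A minimal sketch of what this adds: loading a GGUF-quantized transformer checkpoint from a single file (the checkpoint URL and compute dtype are illustrative):

```python
# Sketch: single-file loading of a GGUF checkpoint; the URL is illustrative.
import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

ckpt_url = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_url,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```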
- 16 Dec, 2024 1 commit
Aryan authored
* torchao quantizer --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com>
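
A minimal sketch of the torchao backend added here (the quant type string and model id are illustrative):

```python
# Sketch: int8 weight-only quantization through the torchao backend.
import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=TorchAoConfig("int8wo"),
    torch_dtype=torch.bfloat16,
)
```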
- 22 Nov, 2024 1 commit
Fanli Lin authored
* enable on xpu * add 1 more * add one more * enable more * add 1 more * add more * enable 1 * enable more cases * enable * enable * update comment * one more * enable 1 * add more cases * enable xpu * add one more caswe * add more cases * add 1 * add more * add more cases * add case * enable * add more * add more * add more * enbale more * add more * update code * update test marker * add skip back * update comment * remove single files * remove * style * add * revert * reformat * update decorator * update * update * update * Update tests/pipelines/deepfloyd_if/test_if.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * Update src/diffusers/utils/testing_utils.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * Update tests/pipelines/animatediff/test_animatediff_controlnet.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * Update tests/pipelines/animatediff/test_animatediff.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * Update tests/pipelines/animatediff/test_animatediff_controlnet.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * update float16 * no unitest.skipt * update * apply style check * reapply format --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com>
- 31 Oct, 2024 1 commit
Sayak Paul authored
* add a marker for big gpu tests * update * trigger on PRs temporarily. * onnx * fix * total memory * fixes * reduce memory threshold. * bigger gpu * empty * g6e * Apply suggestions from code review * address comments. * fix * fix * fix * fix * fix * okay * further reduce. * updates * remove * updates * updates * updates * updates * fixes * fixes * updates. * fix * workflow fixes. --------- Co-authored-by: Aryan <aryan@huggingface.co>
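
A sketch of how such a marker is typically applied to a test class; the decorator and marker names follow the commit's wording and should be treated as assumptions:

```python
# Hypothetical usage sketch of the big-GPU gate; names are assumptions.
import pytest
from diffusers.utils.testing_utils import require_big_gpu_with_torch_cuda

@require_big_gpu_with_torch_cuda
@pytest.mark.big_gpu_with_torch_cuda
class FluxPipelineIntegrationTests:
    def test_inference(self):
        ...
```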
- 23 Oct, 2024 1 commit
Fanli Lin authored
fix bug
- 21 Oct, 2024 1 commit
Sayak Paul authored
* quantization config. * fix-copies * fix * modules_to_not_convert * add bitsandbytes utilities. * make progress. * fixes * quality * up * up rotary embedding refactor 2: update comments, fix dtype for use_real=False (#9312) fix notes and dtype up up * minor * up * up * fix * provide credits where due. * make configurations work. * fixes * fix * update_missing_keys * fix * fix * make it work. * fix * provide credits to transformers. * empty commit * handle to() better. * tests * change to bnb from bitsandbytes * fix tests fix slow quality tests SD3 remark fix complete int4 tests add a readme to the test files. add model cpu offload tests warning test * better safeguard. * change merging status * courtesy to transformers. * move upper. * better * make the unused kwargs warning friendlier. * harmonize changes with https://github.com/huggingface/transformers/pull/33122 * style * trainin tests * feedback part i. * Add Flux inpainting and Flux Img2Img (#9135) --------- Co-authored-by:
yiyixuxu <yixu310@gmail.com> Update `UNet2DConditionModel`'s error messages (#9230) * refactor [CI] Update Single file Nightly Tests (#9357) * update * update feedback. improve README for flux dreambooth lora (#9290) * improve readme * improve readme * improve readme * improve readme fix one uncaught deprecation warning for accessing vae_latent_channels in VaeImagePreprocessor (#9372) deprecation warning vae_latent_channels add mixed int8 tests and more tests to nf4. [core] Freenoise memory improvements (#9262) * update * implement prompt interpolation * make style * resnet memory optimizations * more memory optimizations; todo: refactor * update * update animatediff controlnet with latest changes * refactor chunked inference changes * remove print statements * update * chunk -> split * remove changes from incorrect conflict resolution * remove changes from incorrect conflict resolution * add explanation of SplitInferenceModule * update docs * Revert "update docs" This reverts commit c55a50a271b2cefa8fe340a4f2a3ab9b9d374ec0. * update docstring for freenoise split inference * apply suggestions from review * add tests * apply suggestions from review quantization docs. docs. * Revert "Add Flux inpainting and Flux Img2Img (#9135)" This reverts commit 5799954dd4b3d753c7c1b8d722941350fe4f62ca. * tests * don * Apply suggestions from code review Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * contribution guide. * changes * empty * fix tests * harmonize with https://github.com/huggingface/transformers/pull/33546 . * numpy_cosine_distance * config_dict modification. * remove if config comment. * note for load_state_dict changes. * float8 check. * quantizer. * raise an error for non-True low_cpu_mem_usage values when using quant. * low_cpu_mem_usage shenanigans when using fp32 modules. * don't re-assign _pre_quantization_type. * make comments clear. * remove comments. * handle mixed types better when moving to cpu. * add tests to check if we're throwing warning rightly. * better check. * fix 8bit test_quality. * handle dtype more robustly. * better message when keep_in_fp32_modules. * handle dtype casting. * fix dtype checks in pipeline. * fix warning message. * Update src/diffusers/models/modeling_utils.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * mitigate the confusing cpu warning --------- Co-authored-by:
Vishnu V Jaddipal <95531133+Gothos@users.noreply.github.com> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com>
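
This is the commit that introduces the bitsandbytes quantization config in diffusers. A short sketch of the 8-bit path it tests (the model id is illustrative):

```python
# Sketch: load the SD3 transformer with 8-bit bitsandbytes weights.
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel

transformer_8bit = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.float16,
)
```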
- 09 Oct, 2024 1 commit
Sayak Paul authored
* allow loras to be loaded with low_cpu_mem_usage. * add flux support but note https://github.com/huggingface/diffusers/pull/9510\#issuecomment-2378316687 * low_cpu_mem_usage. * fix-copies * fix-copies again * tests * _LOW_CPU_MEM_USAGE_DEFAULT_LORA * _peft_version default. * version checks. * version check. * version check. * version check. * require peft 0.13.1. * explicitly specify low_cpu_mem_usage=False. * docs. * transformers version 4.45.2. * update * fix * empty * better name initialize_dummy_state_dict. * doc todos. * Apply suggestions from code review Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * style * fix-copies --------- Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com>
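
A short sketch of the opt-in flag this adds to LoRA loading (the repo ids are illustrative):

```python
# Sketch: opt into low_cpu_mem_usage when loading LoRA weights.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "some-user/flux-style-lora",  # illustrative LoRA repo
    low_cpu_mem_usage=True,
)
```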
- 02 Oct, 2024 1 commit
Darren Hsu authored
* Support bfloat16 for Upsample2D * Add test and use is_torch_version * Resolve comments and add decorator * Simplify require_torch_version_greater_equal decorator * Run make style --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com>
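
A small sketch of the behaviour the new test covers, assuming the Upsample2D import path of recent releases:

```python
# Sketch: Upsample2D now runs on bfloat16 inputs without an explicit upcast.
import torch
from diffusers.models.upsampling import Upsample2D

upsample = Upsample2D(channels=32, use_conv=True).to(torch.bfloat16)
x = torch.randn(1, 32, 16, 16, dtype=torch.bfloat16)
out = upsample(x)
print(out.dtype, out.shape)  # torch.bfloat16, (1, 32, 32, 32)
```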
- 26 Jun, 2024 1 commit
Dhruv Nair authored
* update * update --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 15 May, 2024 1 commit
Isamu Isozaki authored
* Init commit * Removed einops * Added default movq config for training * Update explanation of prompts * Fixed inheritance of discriminator and init_tracker * Fixed incompatible api between muse and here * Fixed output * Setup init training * Basic structure done * Removed attention for quick tests * Style fixes * Fixed vae/vqgan styles * Removed redefinition of wandb * Fixed log_validation and tqdm * Nothing commit * Added commit loss to lookup_from_codebook * Update src/diffusers/models/vq_model.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Adding preliminary README * Fixed one typo * Local changes * Fixed main issues * Merging * Update src/diffusers/models/vq_model.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Testing+Fixed bugs in training script * Some style fixes * Added wandb to docs * Fixed timm test * get testing suite ready. * remove return loss * remove return_loss * Remove diffs * Remove diffs * fix ruff format --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com>
- 10 Apr, 2024 1 commit
Sayak Paul authored
* get device <-> component mapping when using multiple gpus. * condition the device_map bits. * relax condition * device_map progress. * device_map enhancement * some cleaning up and debugging * Apply suggestions from code review Co-authored-by:
Marc Sun <57196510+SunMarc@users.noreply.github.com> * incorporate suggestions from PR. * remove multi-gpu condition for now. * guard check the component -> device mapping * fix: device_memory variable * dispatching transformers model to have force_hooks=True * better guarding for transformers device_map * introduce support balanced_low_memory and balanced_ultra_low_memory. * remove device_map patch. * fix: intermediate variable scoping. * fix: condition in cpu offload. * fix: flax class restrictions. * remove modifications from cpu_offload and model_offload * incorporate changes. * add a simple forward pass test * add: torch_device in get_inputs() * add: tests * remove print * safe-guard to(), model offloading and cpu offloading when balanced is used as a device_map. * style * remove . * safeguard device_map with more checks and remove invalid device_mapping strategues. * make a class attribute and adjust tests accordingly. * fix device_map check * fix test * adjust comment * fix: device_map attribute * fix: dispatching. * max_memory test for pipeline * version guard the tests * fix guard. * address review feedback. * reset_device_map method. * add: test for reset_hf_device_map * fix a couple things. * add reset_device_map() in the error message. * add tests for checking reset_device_map doesn't have unintended consequences. * fix reset_device_map and offloading tests. * create _get_final_device_map utility. * hf_device_map -> _hf_device_map * add documentation * add notes suggested by Marc. * styling. * Apply suggestions from code review Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * move updates within gpu condition. * other docs related things * note on ignore a device not specified in . * provide a suggestion if device mapping errors out. * fix: typo. * _hf_device_map -> hf_device_map * Empty-Commit * add: example hf_device_map. --------- Co-authored-by:
Marc Sun <57196510+SunMarc@users.noreply.github.com> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
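
A rough sketch of the balanced device map this adds for multi-GPU setups, together with the reset_device_map() escape hatch (the model id and memory limits are illustrative):

```python
# Sketch: shard pipeline components across GPUs with device_map="balanced".
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="balanced",
    max_memory={0: "8GB", 1: "8GB"},
)
print(pipe.hf_device_map)  # component -> device mapping
pipe.reset_device_map()    # undo before .to() or CPU offloading
```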
- 02 Apr, 2024 1 commit
Sayak Paul authored
* add: utility to format our docs too
📜 * debugging saga * fix: message * checking * should be fixed. * revert pipeline_fixture * remove empty line * make style * fix: setup.py * style.
- 29 Mar, 2024 1 commit
Sayak Paul authored
* speed up test_vae_slicing in animatediff * speed up test_karras_schedulers_shape for attend and excite. * style. * get the static slices out. * specify torch print options. * modify * test run with controlnet * specify kwarg * fix: things * not None * flatten * controlnet img2img * complete controlet sd * finish more * finish more * finish more * finish more * finish the final batch * add cpu check for expected_pipe_slice. * finish the rest * remove print * style * fix ssd1b controlnet test * checking ssd1b * disable the test. * make the test_ip_adapter_single controlnet test more robust * fix: simple inpaint * multi * disable panorama * enable again * panorama is shaky so leave it for now * remove print * raise tolerance.
- 26 Mar, 2024 1 commit
Sayak Paul authored
skip dynamo tests when python is 3.12.
- 25 Mar, 2024 2 commits
Dhruv Nair authored
update
Sayak Paul authored
* strtobool * replace Command from setuptools.
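
distutils (and its strtobool) is removed in Python 3.12, so the helper gets a local replacement. A hypothetical equivalent, not the exact function diffusers ships:

```python
# Hypothetical drop-in for distutils.util.strtobool.
def str_to_bool(value: str) -> bool:
    value = value.lower()
    if value in ("y", "yes", "t", "true", "on", "1"):
        return True
    if value in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"invalid truth value {value!r}")
```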
- 05 Feb, 2024 1 commit
Dhruv Nair authored
update
- 03 Feb, 2024 1 commit
Stephen authored
change path to pathlib as_posix Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
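
For illustration, the pattern behind this one-liner change (the paths here are made up):

```python
# Illustration: normalize a path to forward slashes via pathlib.
from pathlib import Path

config_path = Path("models", "unet", "config.json").as_posix()
print(config_path)  # models/unet/config.json
```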
- 02 Feb, 2024 1 commit
UmerHA authored
* Update testing_utils.py * Update testing_utils.py
- 18 Jan, 2024 1 commit
Dhruv Nair authored
update
- 16 Jan, 2024 1 commit
Charchit Sharma authored
* changes in utils * removed loc
- 26 Dec, 2023 1 commit
Younes Belkada authored
* add adapter_name in fuse * add test * up * fix CI * adapt from suggestion * Update src/diffusers/utils/testing_utils.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * change to `require_peft_version_greater` * change variable names in test * Update src/diffusers/loaders/lora.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * break into 2 lines * final comments --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
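
A sketch of fusing only selected adapters, which is what this change enables (adapter names and repos are illustrative; the keyword is adapter_names in current releases):

```python
# Sketch: fuse a single named adapter into the base weights.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("some-user/style-lora", adapter_name="style")    # illustrative
pipe.load_lora_weights("some-user/detail-lora", adapter_name="detail")  # illustrative
pipe.fuse_lora(adapter_names=["style"], lora_scale=0.8)
```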