- 21 Oct, 2024 4 commits
-
-
timdalxx authored
* fix the issue on flux dreambooth lora training
* update: origin main code
* docs: update pipeline_stable_diffusion docstring
* docs: update pipeline_stable_diffusion docstring
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* fix: style
* fix: style
* fix: copies
* make fix-copies
* remove extra newline

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Chenyu Li authored
Fix type in cogvideo pipeline
-
Sayak Paul authored
* quantization config.
* fix-copies
* fix
* modules_to_not_convert
* add bitsandbytes utilities.
* make progress.
* fixes
* quality
* up
* up
* rotary embedding refactor 2: update comments, fix dtype for use_real=False (#9312); fix notes and dtype
* up
* up
* minor
* up
* up
* fix
* provide credits where due.
* make configurations work.
* fixes
* fix
* update_missing_keys
* fix
* fix
* make it work.
* fix
* provide credits to transformers.
* empty commit
* handle to() better.
* tests
* change to bnb from bitsandbytes
* fix tests; fix slow quality tests; SD3 remark; fix; complete int4 tests; add a readme to the test files; add model cpu offload tests; warning test
* better safeguard.
* change merging status
* courtesy to transformers.
* move upper.
* better
* make the unused kwargs warning friendlier.
* harmonize changes with https://github.com/huggingface/transformers/pull/33122
* style
* training tests
* feedback part i.
* Add Flux inpainting and Flux Img2Img (#9135) (Co-authored-by: yiyixuxu <yixu310@gmail.com>)
* Update `UNet2DConditionModel`'s error messages (#9230)
* refactor
* [CI] Update Single file Nightly Tests (#9357)
* update
* update
* feedback.
* improve README for flux dreambooth lora (#9290): improve readme (x4)
* fix one uncaught deprecation warning for accessing vae_latent_channels in VaeImagePreprocessor (#9372): deprecation warning vae_latent_channels
* add mixed int8 tests and more tests to nf4.
* [core] FreeNoise memory improvements (#9262): implement prompt interpolation; make style; resnet memory optimizations; more memory optimizations (todo: refactor); update animatediff controlnet with latest changes; refactor chunked inference changes; remove print statements; chunk -> split; remove changes from incorrect conflict resolution (x2); add explanation of SplitInferenceModule; update docs; Revert "update docs" (reverts commit c55a50a271b2cefa8fe340a4f2a3ab9b9d374ec0); update docstring for freenoise split inference; apply suggestions from review; add tests; apply suggestions from review
* quantization docs.
* docs.
* Revert "Add Flux inpainting and Flux Img2Img (#9135)"; this reverts commit 5799954dd4b3d753c7c1b8d722941350fe4f62ca.
* tests
* don
* Apply suggestions from code review (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* contribution guide.
* changes
* empty
* fix tests
* harmonize with https://github.com/huggingface/transformers/pull/33546
* numpy_cosine_distance
* config_dict modification.
* remove if config comment.
* note for load_state_dict changes.
* float8 check.
* quantizer.
* raise an error for non-True low_cpu_mem_usage values when using quant.
* low_cpu_mem_usage shenanigans when using fp32 modules.
* don't re-assign _pre_quantization_type.
* make comments clear.
* remove comments.
* handle mixed types better when moving to cpu.
* add tests to check if we're throwing warning rightly.
* better check.
* fix 8bit test_quality.
* handle dtype more robustly.
* better message when keep_in_fp32_modules.
* handle dtype casting.
* fix dtype checks in pipeline.
* fix warning message.
* Update src/diffusers/models/modeling_utils.py (Co-authored-by: YiYi Xu <yixu310@gmail.com>)
* mitigate the confusing cpu warning

Co-authored-by: Vishnu V Jaddipal <95531133+Gothos@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
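
For orientation, a minimal sketch of the quantization API this commit introduces, as exposed in diffusers with bitsandbytes installed; the model id, NF4 settings, and prompt below are illustrative placeholders rather than values taken from the commit:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# 4-bit NF4 quantization config for the transformer (needs the bitsandbytes package)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
image = pipe("a corgi astronaut, detailed illustration", num_inference_steps=28).images[0]
```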
-
Aryan authored
* update
* dummy change to trigger CI; will revert
* no deps peft
* np deps
* todo

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 19 Oct, 2024 1 commit
-
-
bonlime authored
* Update textual_inversion.py
* add unload test
* add comment
* fix style

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
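
For context, a hedged sketch of the load/unload textual-inversion flow this commit exercises; the base model and concept repo are the usual documentation placeholders:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load a textual-inversion embedding and use its token in a prompt ...
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("a <cat-toy> sitting on a park bench").images[0]

# ... then remove the added tokens and embeddings again
pipe.unload_textual_inversion()
```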
-
- 17 Oct, 2024 2 commits
-
-
Aryan authored
* update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Linoy Tsaban authored
* add ostris trainer to README & add cache latents of vae (x2)
* style
* readme
* add test for latent caching
* add ostris noise scheduler https://github.com/ostris/ai-toolkit/blob/9ee1ef2a0a2a9a02b92d114a95f21312e5906e54/toolkit/samplers/custom_flowmatch_sampler.py#L95
* style
* fix import
* style
* fix tests
* style
* change upcasting of transformer?
* update readme according to main
* add pivotal tuning for CLIP
* fix imports, encode_prompt call, add TextualInversionLoaderMixin to FluxPipeline for inference
* TextualInversionLoaderMixin support for FluxPipeline for inference
* move changes to advanced flux script, revert canonical
* add latent caching to canonical script
* revert changes to canonical script to keep it separate from https://github.com/huggingface/diffusers/pull/9160 (x2)
* style
* remove redundant line and change code block placement to align with logic
* add initializer_token arg
* add transformer frac for range support, from pure textual inversion to the original pivotal tuning
* support pure textual inversion - wip
* adjustments to support pure textual inversion and transformer optimization in only part of the epochs
* fix logic when using initializer token
* fix pure_textual_inversion_condition
* fix ti/pivotal loading of last validation run
* remove embeddings loading for ti in final training run (to avoid adding huggingface hub dependency)
* support pivotal for t5
* adapt pivotal for T5 encoder
* adapt pivotal for T5 encoder and support in flux pipeline
* t5 pivotal support + support for pivotal for clip only or both
* fix param chaining (x2)
* README first draft
* readme (x3)
* style
* fix import
* style
* add fix from https://github.com/huggingface/diffusers/pull/9419
* add to readme, change function names
* te lr changes
* readme
* change concept tokens logic
* fix indices
* change arg name
* style
* dummy test
* revert dummy test
* reorder pivoting
* add warning in case the token abstraction is not the instance prompt
* experimental - wip - specific block training
* fix documentation and token abstraction processing
* remove transformer block specification feature (for now)
* style
* fix copies
* fix indexing issue when --initializer_concept has different amounts
* add TextualInversionLoaderMixin check to all flux pipelines
* style
* fix import
* fix imports
* address review comments: remove unnecessary prints & comments, use pin_memory=True, use free_memory utils, unify warning and prints
* style
* logger info fix
* make lora target modules configurable and change the default (x2)
* style
* make lora target modules configurable and change the default, add notes to readme
* style
* add tests
* style
* fix repo id
* add updated requirements for advanced flux
* fix indices of t5 pivotal tuning embeddings
* fix path in test
* remove `pin_memory`
* fix filename of embedding (x2)

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
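
A rough sketch of how the outputs of this advanced Flux DreamBooth script (a LoRA plus pivotal-tuning embeddings) are typically loaded at inference, modeled on the advanced SDXL script's README; the repo id, embedding filename, state-dict key, and token name are placeholders rather than values from this commit:

```python
import torch
from diffusers import FluxPipeline
from safetensors.torch import load_file

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# LoRA weights produced by the training script (placeholder repo id)
pipe.load_lora_weights("your-username/flux-dreambooth-lora")

# With pivotal tuning, the learned token embeddings are saved separately; the
# "clip_l" key and "<s0>" token mirror the advanced SDXL workflow, and the token
# list must match the number of saved embeddings.
embeddings = load_file("flux-dreambooth-lora_emb.safetensors")
pipe.load_textual_inversion(
    embeddings["clip_l"],
    token=["<s0>"],
    text_encoder=pipe.text_encoder,
    tokenizer=pipe.tokenizer,
)

image = pipe("photo of <s0>, cinematic lighting", num_inference_steps=28).images[0]
```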
-
- 16 Oct, 2024 5 commits
-
-
Aryan authored
* update
* apply suggestions from review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Aryan authored
* cogvideox-fun control
* make style
* make fix-copies
* karras schedulers
* Update src/diffusers/pipelines/cogvideo/pipeline_cogvideox_fun_control.py (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Update docs/source/en/api/pipelines/cogvideox.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* apply suggestions from review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Jongho Choi authored
Update peft_utils.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* log a warning when there are missing keys in the LoRA loading.
* handle missing keys and unexpected keys better.
* add tests
* fix-copies.
* updates
* tests
* concat warning.
* Add Differential Diffusion to Kolors (#9423): added diff diff support for the Kolors and Kolors img2img pipelines, fixed relative imports, import issues, and naming issues, added map input, removed example docstrings, updated latents and `original_with_noise`, improved code quality (Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>)
* FluxMultiControlNetModel (#9647)
* tests
* Update src/diffusers/loaders/lora_pipeline.py (Co-authored-by: YiYi Xu <yixu310@gmail.com>)
* fix

Co-authored-by: M Saqlain <118016760+saqlain2204@users.noreply.github.com>
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Charchit Sharma authored
* gatherparams bug
* calling context lib object
* fix

Co-authored-by: Aryan <aryan@huggingface.co>
-
- 15 Oct, 2024 12 commits
-
-
YiYi Xu authored
* Add support for XLabs ControlNets

Co-authored-by: Anzhella Pankratova <son0shad@gmail.com>
-
Aryan authored
* update
* update
* update
* update
* update
* add coauthor (Co-Authored-By: yuan-shenghai <963658029@qq.com>)
* add coauthor (Co-Authored-By: Shenghai Yuan <140951558+SHYuanBest@users.noreply.github.com>)
* update (Co-Authored-By: yuan-shenghai <963658029@qq.com>)
* update

Co-authored-by: yuan-shenghai <963658029@qq.com>
Co-authored-by: Shenghai Yuan <140951558+SHYuanBest@users.noreply.github.com>
-
Ahnjj_DEV authored
* Fix some documentation in ./src/diffusers/models/adapter.py
* Update src/diffusers/models/adapter.py (repeated across many review-suggestion commits, several Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* run make style
* make style & fix
* make style: ruff 0.1.5 version
* revert changes to examples

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
-
wony617 authored
* [docs] refactoring docstrings in `models/embeddings_flax.py`
* Update src/diffusers/models/embeddings_flax.py
* make style

Co-authored-by: Aryan <aryan@huggingface.co>
-
Jiwook Han authored
* refac: docstrings in training_utils.py
* fix: manual edits
* run make style
* add docstring at cast_training_params

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
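
As a small illustration of cast_training_params, whose docstring this commit adds, the usual mixed-precision LoRA setup upcasts only the trainable parameters; the model id and LoRA settings below are arbitrary examples:

```python
import torch
from peft import LoraConfig
from diffusers import UNet2DConditionModel
from diffusers.training_utils import cast_training_params

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)
unet.add_adapter(LoraConfig(r=4, lora_alpha=4, target_modules=["to_q", "to_v"]))

# keep the frozen base weights in fp16 but upcast the trainable LoRA params to fp32
cast_training_params(unet, dtype=torch.float32)
```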
-
Charchit Sharma authored
* refactor image_processor file
* changes as requested
* +1 edits
* quality fix
* indent issue

Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Robin authored
[Fix] local variable 'cached_folder' referenced before assignment when loading a pretrained model with local_files_only (#9376)
Fix the 'cached_folder' referenced-before-assignment error in hub_utils.py that occurs when `local_files_only=True` is used together with `subfolder`.

Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
hlky authored
* Slight performance improvement to Euler
* Slight performance improvement to EDMEuler
* Slight performance improvement to FlowMatchHeun
* Slight performance improvement to KDPM2Ancestral
* Update KDPM2AncestralDiscreteSchedulerTest

Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
hlky authored
Refactor SchedulerOutput and add pred_original_sample in `DPMSolverSDE`, `Heun`, `KDPM2Ancestral` and `KDPM2` (#9650)

Co-authored-by: YiYi Xu <yixu310@gmail.com>
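
A minimal sketch of the refactored output, using random tensors in place of real model predictions and assuming the step output now exposes pred_original_sample as described in the title:

```python
import torch
from diffusers import KDPM2AncestralDiscreteScheduler

scheduler = KDPM2AncestralDiscreteScheduler()
scheduler.set_timesteps(25)

sample = torch.randn(1, 4, 64, 64)        # stand-in for noisy latents
model_output = torch.randn(1, 4, 64, 64)  # stand-in for a UNet prediction

out = scheduler.step(model_output, scheduler.timesteps[0], sample)
print(out.prev_sample.shape, out.pred_original_sample.shape)
```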
-
hlky authored
Convert list/tuple of HunyuanDiT2DControlNetModel to HunyuanDiT2DMultiControlNetModel

Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
hlky authored
Convert list/tuple of SD3ControlNetModel to SD3MultiControlNetModel

Co-authored-by: YiYi Xu <yixu310@gmail.com>
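
A hedged sketch of what this enables: passing a plain list of SD3ControlNetModel instances, which the pipeline wraps into an SD3MultiControlNetModel; the ControlNet repo ids and image paths are placeholders:

```python
import torch
from diffusers import SD3ControlNetModel, StableDiffusion3ControlNetPipeline
from diffusers.utils import load_image

controlnet_canny = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16)
controlnet_pose = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Pose", torch_dtype=torch.float16)

# a plain list/tuple is converted to SD3MultiControlNetModel internally
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=[controlnet_canny, controlnet_pose],
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a photo of a living room",
    control_image=[load_image("canny.png"), load_image("pose.png")],
    controlnet_conditioning_scale=[0.6, 0.8],
).images[0]
```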
-
hlky authored
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 14 Oct, 2024 2 commits
-
-
SahilCarterr authored
* add lora
-
Yuxuan.Zhang authored
* merge 9588
* max_shard_size="5GB" for colab running
* conversion script updates; modeling test; refactor transformer
* make fix-copies
* Update convert_cogview3_to_diffusers.py
* initial pipeline draft
* make style
* fight bugs 🐛 🪳
* add example
* add tests; refactor
* make style
* make fix-copies
* add co-author YiYi Xu <yixu310@gmail.com>
* remove files
* add docs
* add co-author (Co-Authored-By: YiYi Xu <yixu310@gmail.com>)
* fight docs
* address reviews
* make style
* make model work
* remove qkv fusion
* remove qkv fusion tests
* address review comments
* fix make fix-copies error
* remove None and TODO
* for FP16 (draft)
* make style
* remove dynamic cfg
* remove pooled_projection_dim as a parameter
* fix tests

Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
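
For context, a short usage sketch of the CogView3-Plus pipeline added here, following the pattern used in the diffusers docs; the generation settings are illustrative:

```python
import torch
from diffusers import CogView3PlusPipeline

pipe = CogView3PlusPipeline.from_pretrained(
    "THUDM/CogView3-Plus-3B", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    "a detailed painting of a fox in a snowy forest",
    guidance_scale=7.0,
    num_inference_steps=50,
).images[0]
image.save("cogview3.png")
```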
-
- 11 Oct, 2024 1 commit
-
-
hlky authored
-
- 10 Oct, 2024 1 commit
-
-
Subho Ghosh authored
* implement control_guidance_start and control_guidance_end for the Flux ControlNet pipeline
* minor fix: added docstrings, made the ControlNet scale consistent between Flux and SD3
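
A hedged sketch of the new control_guidance_start / control_guidance_end arguments on the Flux ControlNet pipeline; the ControlNet repo id and image path are placeholders:

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Canny", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    "a futuristic city at night",
    control_image=load_image("canny.png"),
    controlnet_conditioning_scale=0.6,
    control_guidance_start=0.0,  # apply ControlNet guidance only during
    control_guidance_end=0.8,    # the first 80% of the denoising steps
    num_inference_steps=28,
).images[0]
```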
-
- 09 Oct, 2024 4 commits
-
-
Pakkapon Phongthawee authored
* make controlnet support interrupt
* remove white space in controlnet interrupt
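
A minimal sketch of the interrupt support, mirroring the interrupt pattern documented for other pipelines (setting pipe._interrupt from a callback_on_step_end callback); model ids and the stopping step are placeholders:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

def stop_early(pipeline, step, timestep, callback_kwargs):
    # e.g. a user cancelled the request; the remaining steps are skipped
    if step == 10:
        pipeline._interrupt = True
    return callback_kwargs

image = pipe(
    "a bird on a branch",
    image=load_image("canny.png"),
    num_inference_steps=50,
    callback_on_step_end=stop_early,
).images[0]
```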
-
SahilCarterr authored
* added PAG to the SD img2img pipeline

Co-authored-by: YiYi Xu <yixu310@gmail.com>
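
A hedged sketch of the new PAG-enabled SD img2img path, assuming the pipeline is reachable through AutoPipelineForImage2Image with enable_pag=True like the existing PAG pipelines (it can otherwise be instantiated directly); model id, input image, and scales are placeholders:

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    enable_pag=True,             # request the PAG variant of the pipeline
    pag_applied_layers=["mid"],  # which attention layers get perturbed
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a watercolor painting of a lighthouse",
    image=load_image("sketch.png"),
    strength=0.6,
    pag_scale=3.0,
).images[0]
```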
-
Sayak Paul authored
* allow loras to be loaded with low_cpu_mem_usage.
* add flux support, but note https://github.com/huggingface/diffusers/pull/9510#issuecomment-2378316687
* low_cpu_mem_usage.
* fix-copies
* fix-copies again
* tests
* _LOW_CPU_MEM_USAGE_DEFAULT_LORA
* _peft_version default.
* version checks.
* version check.
* version check.
* version check.
* require peft 0.13.1.
* explicitly specify low_cpu_mem_usage=False.
* docs.
* transformers version 4.45.2.
* update
* fix
* empty
* better name: initialize_dummy_state_dict.
* doc todos.
* Apply suggestions from code review (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* style
* fix-copies

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
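
A minimal sketch of the new low_cpu_mem_usage option on LoRA loading (per the bullets above it requires peft >= 0.13.1); the LoRA repo id is a placeholder:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# skips the random initialization of the adapter layers before the real
# weights are loaded, which speeds up loading of large LoRAs
pipe.load_lora_weights("your-username/some-flux-lora", low_cpu_mem_usage=True)
```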
-
Yijun Lee authored
-
- 08 Oct, 2024 3 commits
-
-
sanaka authored
Fix the bug where `joint_attention_kwargs` was not passed to FLUX's transformer attention processors (#9517)
* Update transformer_flux.py
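
A hedged sketch of passing joint_attention_kwargs through a Flux pipeline call, which this fix makes reach the transformer's attention processors; the LoRA repo id and the use of the "scale" entry are illustrative:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("your-username/some-flux-lora")  # placeholder repo id

# joint_attention_kwargs is forwarded to the attention processors; with a LoRA
# loaded, the "scale" entry controls the LoRA strength
image = pipe(
    "a cat playing chess",
    joint_attention_kwargs={"scale": 0.8},
    num_inference_steps=28,
).images[0]
```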
-
v2ray authored
* Fixed local variable noise_pred_text referenced before assignment when using PAG with guidance scale and guidance rescale at the same time.
* Fixed style.
* Made returning text pred noise an argument.
-
Sayak Paul authored
* handle dora.
* print test
* debug
* fix
* fix-copies
* update logits
* add warning in the test.
* make is_dora check consistent.
* fix-copies
-
- 07 Oct, 2024 3 commits
-
-
Eliseu Silva authored
* Fix for the use_safetensors parameter: allow the parameter to be used when loading submodels (#9576)
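
A minimal sketch of the use_safetensors parameter this fix addresses; per the commit title, the flag should now also be honored when the pipeline loads its submodels. The model id is a placeholder:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    use_safetensors=True,  # prefer .safetensors weights for the pipeline and its submodels
    torch_dtype=torch.float16,
)
```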
-
Yijun Lee authored
* refac: docstrings in import_utils.py
* Update import_utils.py
-
Clem authored
* fix startswith syntax in xlabs lora conversion
* Trigger CI https://github.com/huggingface/diffusers/pull/9581#issuecomment-2395530360
-
- 03 Oct, 2024 1 commit
-
-
YiYi Xu authored
* check size
* up
-
- 02 Oct, 2024 1 commit
-
-
Xiangchendong authored
Co-authored-by: Aryan <aryan@huggingface.co>
-