  1. 18 Aug, 2023 1 commit
  2. 17 Aug, 2023 4 commits
  3. 15 Aug, 2023 1 commit
  4. 11 Aug, 2023 1 commit
  5. 04 Aug, 2023 2 commits
  6. 02 Aug, 2023 2 commits
    • [Feat] add tiny Autoencoder for (almost) instant decoding (#4384) · 18fc40c1
      Sayak Paul authored
      
      
      * add: model implementation of tiny autoencoder.
      
      * add: inits.
      
      * push the latest devs.
      
      * add: conversion script and finish.
      
      * add: scaling factor args.
      
      * debugging
      
      * fix denormalization.
      
      * fix: positional argument.
      
      * handle use_torch_2_0_or_xformers.
      
      * handle post_quant_conv
      
      * handle dtype
      
      * fix: sdxl image processor for tiny ae.
      
      * fix: sdxl image processor for tiny ae.
      
      * unify upcasting logic.
      
      * copied from madness.
      
      * remove trailing whitespace.
      
      * set is_tiny_vae = False
      
      * address PR comments.
      
      * change to AutoencoderTiny
      
      * make act_fn a str throughout
      
      * fix: apply_forward_hook decorator call
      
      * get rid of the special is_tiny_vae flag.
      
      * directly scale the output.
      
      * fix dummies?
      
      * fix: act_fn.
      
      * get rid of the Clamp() layer.
      
      * bring back copied from.
      
      * movement of the blocks to appropriate modules.
      
      * add: docstrings to AutoencoderTiny
      
      * add: documentation.
      
      * changes to the conversion script.
      
      * add doc entry.
      
      * settle tests.
      
      * style
      
      * add one slow test.
      
      * fix
      
      * fix 2
      
      * fix 2
      
      * fix: 4
      
      * fix: 5
      
      * finish integration tests
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * style
      
      ---------
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
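
      A minimal usage sketch of the AutoencoderTiny class this PR adds, swapping it in for a pipeline's full VAE (checkpoint names and the prompt are illustrative, taken from the usual documented pattern):

      ```python
      import torch
      from diffusers import AutoencoderTiny, StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      )
      # replace the full VAE with the tiny autoencoder for near-instant decoding
      pipe.vae = AutoencoderTiny.from_pretrained(
          "madebyollin/taesd", torch_dtype=torch.float16
      )
      pipe = pipe.to("cuda")

      image = pipe("slice of delicious New York-style cheesecake").images[0]
      ```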
    • [LoRA] Fix SDXL text encoder LoRAs (#4371) · 816ca004
      Sayak Paul authored
      
      
      * temporarily disable text encoder loras.
      
      * debugging (×21)
      
      * modify doc.
      
      * rename tests.
      
      * print slices.
      
      * fix: assertions
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  7. 28 Jul, 2023 1 commit
    • [Feat] Support SDXL Kohya-style LoRA (#4287) · 4a4cdd6b
      Sayak Paul authored
      
      
      * sdxl lora changes.
      
      * better name replacement.
      
      * better replacement.
      
      * debugging (×5)
      
      * remove print.
      
      * print state dict keys.
      
      * print
      
      * distinguish better
      
      * debuggable.
      
      * fix: tests
      
      * fix: arg from training script.
      
      * access from class.
      
      * run style
      
      * debug
      
      * save intermediate
      
      * some simplifications for SDXL LoRA
      
      * styling
      
      * unet config is not needed in diffusers format.
      
      * fix: dynamic SGM block mapping for SDXL kohya loras (#4322)
      
      * Use lora compatible layers for linear proj_in/proj_out (#4323)
      
      * improve condition for using the sgm_diffusers mapping
      
      * informative comment.
      
      * load compatible keys and embedding layer mapping.
      
      * Get SDXL 1.0 example lora to load
      
      * simplify
      
      * specify ranks and hidden sizes.
      
      * better handling of k rank and hidden
      
      * debug (×5)
      
      * fix: alpha keys
      
      * add check for handling LoRAAttnAddedKVProcessor
      
      * sanity comment
      
      * modifications for text encoder SDXL
      
      * debugging (×27)
      
      * up (×6)
      
      * unneeded comments.
      
      * unneeded comments.
      
      * kwargs for the other attention processors.
      
      * kwargs for the other attention processors.
      
      * debugging (×4)
      
      * improve
      
      * debugging
      
      * debugging
      
      * more print
      
      * Fix alphas
      
      * debugging (×6)
      
      * clean up
      
      * clean up.
      
      * debugging
      
      * fix: text
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Batuhan Taskaya <batuhan@python.org>
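
      What this enables from the user side, sketched with placeholder paths: a Kohya-style SDXL LoRA .safetensors file goes through the same load_lora_weights entry point.

      ```python
      import torch
      from diffusers import StableDiffusionXLPipeline

      pipe = StableDiffusionXLPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
      ).to("cuda")

      # Kohya-style .safetensors file, e.g. downloaded from civitai
      pipe.load_lora_weights(
          "path/to/lora_dir", weight_name="sdxl_kohya_lora.safetensors"
      )
      image = pipe("masterpiece, best quality, mountain landscape").images[0]
      ```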
  8. 27 Jul, 2023 1 commit
  9. 25 Jul, 2023 2 commits
  10. 21 Jul, 2023 1 commit
  11. 20 Jul, 2023 1 commit
  12. 19 Jul, 2023 2 commits
  13. 14 Jul, 2023 1 commit
    • [Feat] add: utility for unloading lora. (#4034) · 692b7a90
      Sayak Paul authored
      * add: test for testing unloading lora.
      
      * add: reason to skipif.
      
      * initial implementation of lora unload().
      
      * apply styling.
      
      * add: doc.
      
      * change checkpoints.
      
      * reinit generator
      
      * finalize slow test.
      
      * add fast test for unloading lora.
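
      How the new utility is meant to be used, in sketch form (checkpoint and LoRA file names are placeholders):

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      pipe.load_lora_weights("path/to/lora_dir", weight_name="my_lora.safetensors")
      with_lora = pipe("a red pikachu").images[0]

      # revert the pipeline to its pre-LoRA state
      pipe.unload_lora_weights()
      without_lora = pipe("a red pikachu").images[0]
      ```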
  14. 09 Jul, 2023 1 commit
    • Refactor LoRA (#3778) · c2a28c34
      Will Berman authored
      
      
      * refactor to support patching LoRA into T5
      
      instantiate the lora linear layer on the same device as the regular linear layer
      
      get lora rank from state dict
      
      tests
      
      fmt
      
      can create lora layer in float32 even when rest of model is float16
      
      fix loading model hook
      
      remove load_lora_weights_ and T5 dispatching
      
      remove Unet#attn_processors_state_dict
      
      docstrings
      
      * text encoder monkeypatch class method
      
      * fix test
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
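
      A simplified sketch of the pattern this refactor centralizes: a low-rank adapter created on the same device as the wrapped layer, optionally kept in float32 while the base model runs in float16, with the rank recoverable from a state dict. Class and helper names here are illustrative, not the exact diffusers internals.

      ```python
      import torch
      import torch.nn as nn

      class LoRALinearLayer(nn.Module):
          def __init__(self, in_features, out_features, rank=4, device=None, dtype=None):
              super().__init__()
              # created directly on the target device/dtype, matching the wrapped layer
              self.down = nn.Linear(in_features, rank, bias=False, device=device, dtype=dtype)
              self.up = nn.Linear(rank, out_features, bias=False, device=device, dtype=dtype)
              nn.init.normal_(self.down.weight, std=1 / rank)
              nn.init.zeros_(self.up.weight)

          def forward(self, hidden_states):
              orig_dtype = hidden_states.dtype
              # compute the update in the LoRA layer's dtype (possibly float32)
              out = self.up(self.down(hidden_states.to(self.down.weight.dtype)))
              return out.to(orig_dtype)

      # the rank can be read back from a saved state dict via the down-projection shape
      def rank_from_state_dict(state_dict, key="down.weight"):
          return state_dict[key].shape[0]
      ```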
  15. 06 Jul, 2023 1 commit
  16. 03 Jul, 2023 1 commit
  17. 28 Jun, 2023 1 commit
  18. 22 Jun, 2023 1 commit
    • Correct bad attn naming (#3797) · 88d26946
      Patrick von Platen authored
      
      
      * relax tolerance slightly
      
      * correct incorrect naming
      
      * correct naming
      
      * correct more
      
      * Apply suggestions from code review
      
      * Fix more
      
      * Correct more
      
      * correct incorrect naming
      
      * Update src/diffusers/models/controlnet.py
      
      * Correct flax
      
      * Correct renaming
      
      * Correct blocks
      
      * Fix more
      
      * Correct more
      
      * make style (×5)
      
      * Fix flax
      
      * make style
      
      * rename
      
      * rename
      
      * rename attn head dim to attention_head_dim
      
      * correct flax
      
      * make style
      
      * improve
      
      * Correct more
      
      * make style
      
      * fix more
      
      * make style
      
      * Update src/diffusers/models/controlnet_flax.py
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      ---------
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  19. 21 Jun, 2023 1 commit
  20. 16 Jun, 2023 1 commit
  21. 15 Jun, 2023 1 commit
  22. 12 Jun, 2023 1 commit
  23. 06 Jun, 2023 3 commits
    • Add draft for lora text encoder scale (#3626) · 74fd735e
      Patrick von Platen authored
      
      
      * Add draft for lora text encoder scale
      
      * Improve naming
      
      * fix: training dreambooth lora script.
      
      * Apply suggestions from code review
      
      * Update examples/dreambooth/train_dreambooth_lora.py
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * add lora mixin when fit (×3)
      
      * fix more
      
      * fix more
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
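
      A sketch of the user-facing knob this draft wires up (paths are placeholders): the LoRA scale passed through cross_attention_kwargs now also reaches the text encoder LoRA layers.

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      pipe.load_lora_weights("path/to/lora_dir", weight_name="my_lora.safetensors")

      # scale=0.5 halves the LoRA contribution; 0.0 disables it entirely
      image = pipe(
          "a pencil sketch of a castle",
          cross_attention_kwargs={"scale": 0.5},
      ).images[0]
      ```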
    • [LoRA] feat: add lora attention processor for pt 2.0. (#3594) · 8669e831
      Sayak Paul authored
      * feat: add lora attention processor for pt 2.0.
      
      * explicit context manager for SDPA.
      
      * switch to flash attention
      
      * make shapes compatible to work optimally with SDPA.
      
      * fix: circular import problem.
      
      * explicitly specify the flash attention kernel in sdpa
      
      * fall back to efficient attention context manager.
      
      * remove explicit dispatch.
      
      * fix: removed processor.
      
      * fix: remove optional from type annotation.
      
      * feat: make changes regarding LoRAAttnProcessor2_0.
      
      * remove confusing warning.
      
      * formatting.
      
      * relax tolerance for PT 2.0
      
      * fix: loading message.
      
      * remove unnecessary logging.
      
      * add: entry to the docs.
      
      * add: network_alpha argument.
      
      * relax tolerance.
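
      The gist of the new processor, as a simplified sketch (function and argument names are illustrative, not the diffusers internals): add the scaled low-rank update to each projection, then let PyTorch 2.0's scaled_dot_product_attention pick the best available kernel.

      ```python
      import torch.nn.functional as F

      def lora_sdpa_attention(q_proj, k_proj, v_proj, lora_q, lora_k, lora_v,
                              hidden_states, scale=1.0):
          # base projections plus scaled LoRA updates
          query = q_proj(hidden_states) + scale * lora_q(hidden_states)
          key = k_proj(hidden_states) + scale * lora_k(hidden_states)
          value = v_proj(hidden_states) + scale * lora_v(hidden_states)
          # head reshaping omitted for brevity; SDPA dispatches to flash or
          # memory-efficient kernels when shapes and dtypes allow it
          return F.scaled_dot_product_attention(query, key, value)
      ```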
    • Add function to remove monkey-patch for text encoder LoRA (#3649) · b45204ea
      Takuma Mori authored
      * merge undoable-monkeypatch
      
      * remove TEXT_ENCODER_TARGET_MODULES, refactoring
      
      * move create_lora_weight_file
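
      The "undoable" monkey-patch boils down to keeping a handle on the original forward; a minimal sketch of the idea (names are illustrative):

      ```python
      def patch_forward(module, lora_layer, scale=1.0):
          old_forward = module.forward

          def new_forward(x):
              # base output plus the scaled LoRA update
              return old_forward(x) + scale * lora_layer(x)

          module.forward = new_forward

          def undo():
              # restore the pre-patch behaviour
              module.forward = old_forward

          return undo
      ```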
  24. 05 Jun, 2023 1 commit
  25. 02 Jun, 2023 1 commit
    • Support Kohya-ss style LoRA file format (in a limited capacity) (#3437) · 8e552bb4
      Takuma Mori authored
      
      
      * add _convert_kohya_lora_to_diffusers
      
      * make style
      
      * add scaffold
      
      * match result: unet attention only
      
      * fix monkey-patch for text_encoder
      
      * with CLIPAttention
      
      While the terrible images are no longer produced,
      the results do not match those from the hook version.
      This may be due to not setting the network_alpha value.
      
      * add to support network_alpha
      
      * generate diff image
      
      * fix monkey-patch for text_encoder
      
      * add test_text_encoder_lora_monkey_patch()
      
      * verify that it's okay to release the attn_procs
      
      * fix closure version
      
      * add comment
      
      * Revert "fix monkey-patch for text_encoder"
      
      This reverts commit bb9c61e6faecc1935c9c4319c77065837655d616.
      
      * Fix to reuse utility functions
      
      * make LoRAAttnProcessor targets to self_attn
      
      * fix LoRAAttnProcessor target
      
      * make style
      
      * fix split key
      
      * Update src/diffusers/loaders.py
      
      * remove TEXT_ENCODER_TARGET_MODULES loop
      
      * add print memory usage
      
      * remove test_kohya_loras_scaffold.py
      
      * add: doc on LoRA civitai
      
      * remove print statement and refactor in the doc.
      
      * fix state_dict test for kohya-ss style lora
      
      * Apply suggestions from code review
      Co-authored-by: Takuma Mori <takuma104@gmail.com>
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
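
      From the user side, the limited Kohya-ss support goes through the existing load_lora_weights entry point, which now detects and converts the Kohya key layout; a sketch with placeholder paths:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # Kohya-ss / civitai-style checkpoint; converted internally to the
      # diffusers attention-processor format
      pipe.load_lora_weights(
          "path/to/lora_dir", weight_name="kohya_style_lora.safetensors"
      )
      ```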
  26. 26 May, 2023 1 commit
  27. 23 May, 2023 1 commit
    • Run `torch.compile` tests in separate subprocesses (#3503) · bde2cb5d
      Pedro Cuenca authored
      * Run ControlNet compile test in a separate subprocess
      
      `torch.compile()` spawns several subprocesses and the GPU memory used
      was not reclaimed after the test ran. This approach was taken from
      `transformers`.
      
      * Style
      
      * Prepare a couple more compile tests to run in subprocess.
      
      * Use require_torch_2 decorator.
      
      * Test inpaint_compile in subprocess.
      
      * Run img2img compile test in subprocess.
      
      * Run stable diffusion compile test in subprocess.
      
      * style
      
      * Temporarily trigger on pr to test.
      
      * Revert "Temporarily trigger on pr to test."
      
      This reverts commit 82d76868ddf9cc634a9f14b2b0aef1d5433cd750.
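
      A bare-bones sketch of the pattern (the PR borrows a run_test_in_subprocess helper from transformers; the version below is a hand-rolled approximation):

      ```python
      import multiprocessing

      def _compile_test_body(queue):
          try:
              # ... build the pipeline, call torch.compile, run assertions ...
              queue.put(None)
          except Exception as e:
              queue.put(e)

      def test_compile_in_subprocess():
          # a fresh "spawn" process guarantees GPU memory is reclaimed on exit
          ctx = multiprocessing.get_context("spawn")
          queue = ctx.Queue()
          process = ctx.Process(target=_compile_test_body, args=(queue,))
          process.start()
          error = queue.get()
          process.join()
          if error is not None:
              raise error
      ```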
  28. 22 May, 2023 2 commits
    • Support for cross-attention bias / mask (#2634) · 64bf5d33
      Birch-san authored
      
      
      * Cross-attention masks
      
      prefer qualified symbol, fix accidental Optional
      
      prefer qualified symbol in AttentionProcessor
      
      prefer qualified symbol in embeddings.py
      
      qualified symbol in transformer_2d
      
      qualify FloatTensor in unet_2d_blocks
      
      move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface. regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()).
      
      move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface.
      
      regenerate modeling_text_unet.py
      
      remove unused import
      
      unet_2d_condition encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      transformer_2d encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      unet_2d_blocks.py: add parameter name comments
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      revert description. bool-to-bias treatment happens in unet_2d_condition only.
      
      comment parameter names
      
      fix copies, style
      
      * encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D
      
      * encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn
      
      * support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations.
      
      * fix mistake made during merge conflict resolution
      
      * regenerate versatile_diffusion
      
      * pass time embedding into checkpointed attention invocation
      
      * always assume encoder_attention_mask is a mask (i.e. not a bias).
      
      * style, fix-copies
      
      * add tests for cross-attention masks
      
      * add test for padding of attention mask
      
      * explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens
      
      * support both masks and biases in Transformer2DModel#forward. document behaviour
      
      * fix-copies
      
      * delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image).
      
      * review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D,KCrossAttnUpBlock2D#forward.
      
      * remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D,UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate.
      
      * put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added.
      
      * fix-copies
      
      * style
      
      * fix-copies
      
      * put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface.
      
      * restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward.
      
      * make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility.
      
      * fix copies
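
      A sketch of the user-facing change: UNet2DConditionModel.forward accepts an encoder_attention_mask over the text tokens, converted to an attention bias internally. The tiny config below mirrors the style of the diffusers test suite; all sizes are illustrative.

      ```python
      import torch
      from diffusers import UNet2DConditionModel

      unet = UNet2DConditionModel(
          sample_size=32, in_channels=4, out_channels=4,
          block_out_channels=(32, 64), layers_per_block=2,
          down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
          up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
          cross_attention_dim=32,
      )

      sample = torch.randn(1, 4, 32, 32)
      encoder_hidden_states = torch.randn(1, 77, 32)
      # boolean mask over text tokens: True = attend, False = ignore (e.g. padding)
      encoder_attention_mask = torch.ones(1, 77, dtype=torch.bool)
      encoder_attention_mask[:, 20:] = False

      noise_pred = unet(
          sample, timestep=10, encoder_hidden_states=encoder_hidden_states,
          encoder_attention_mask=encoder_attention_mask,
      ).sample
      ```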
    • Refactor full determinism (#3485) · 51843fd7
      Patrick von Platen authored
      * up
      
      * fix more
      
      * Apply suggestions from code review
      
      * fix more
      
      * fix more
      
      * Check it
      
      * Remove 16:8
      
      * fix more
      
      * fix more
      
      * fix more
      
      * up
      
      * up
      
      * Test only stable diffusion
      
      * Test only two files
      
      * up
      
      * Try out spinning up processes that can be killed
      
      * up
      
      * Apply suggestions from code review
      
      * up
      
      * up
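
      The refactor funnels seeding and the determinism flags through one helper in the test utilities; roughly, such a helper does the following (a sketch of the standard PyTorch knobs, not the exact diffusers code):

      ```python
      import os
      import random

      import numpy as np
      import torch

      def enable_full_determinism(seed: int = 0):
          # seed every RNG the tests touch
          random.seed(seed)
          np.random.seed(seed)
          torch.manual_seed(seed)
          torch.cuda.manual_seed_all(seed)
          # required by torch.use_deterministic_algorithms on CUDA >= 10.2
          os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
          torch.use_deterministic_algorithms(True)
          torch.backends.cudnn.deterministic = True
          torch.backends.cudnn.benchmark = False
      ```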
  29. 18 May, 2023 1 commit
  30. 12 May, 2023 1 commit