1. 01 Sep, 2023 1 commit
• Test Cleanup Precision issues (#4812) · 189e9f01
      Dhruv Nair authored
      
      
      * proposal for flaky tests
      
      * more precision fixes
      
      * move more tests to use cosine distance
      
      * more test fixes
      
      * clean up
      
      * use default attn
      
      * clean up
      
      * update expected value
      
      * make style
      
      * make style
      
      * Apply suggestions from code review
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
      
      * make style
      
      * fix failing tests
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
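
A note on the "cosine distance" change above: comparing output slices by cosine distance rather than elementwise absolute difference makes the tests robust to small uniform precision drift across hardware and attention backends. A minimal sketch (helper name and threshold are illustrative, not necessarily the repo's exact test utility):

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    # flatten both slices and compare their direction, not their exact values
    a, b = a.ravel(), b.ravel()
    similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(1.0 - similarity)

# in a test, instead of `assert np.abs(image_slice - expected_slice).max() < 1e-3`:
# assert cosine_distance(image_slice, expected_slice) < 1e-4
```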
  2. 30 Aug, 2023 1 commit
  3. 29 Aug, 2023 1 commit
• Fuse loras (#4473) · c583f3b4
      Patrick von Platen authored
      
      
      * Fuse loras
      
      * initial implementation.
      
      * add slow test one.
      
      * styling
      
      * add: test for checking efficiency
      
      * print
      
      * position
      
      * place model offload correctly
      
      * style
      
      * style.
      
      * unfuse test.
      
      * final checks
      
      * remove warning test
      
      * remove warnings altogether
      
      * debugging
      
      * tighten up tests.
      
* debugging (×28)
      
* spruce up the generator initialization a bit.
      
      * remove print
      
      * update assertion.
      
      * debugging
      
      * remove print.
      
      * fix: assertions.
      
      * style
      
      * can generator be a problem?
      
      * generator
      
      * correct tests.
      
      * support text encoder lora fusion.
      
      * tighten up tests.
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
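
The core idea behind the fusion, as a minimal sketch assuming the standard LoRA parametrization W' = W + scale · (up @ down) — illustrative, not the PR's exact code: fold the low-rank delta into the base weight once, so inference no longer pays for the extra LoRA matmuls.

```python
import torch

@torch.no_grad()
def fuse_lora_into_weight(weight, lora_down, lora_up, scale=1.0):
    # weight: (out, in), lora_down: (rank, in), lora_up: (out, rank)
    delta = scale * torch.mm(lora_up.float(), lora_down.float())
    weight += delta.to(weight.dtype)  # fold the LoRA delta into the base weight
    return weight
```

Unfusing subtracts the same delta back out, which is what the unfuse test above exercises.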
  4. 28 Aug, 2023 1 commit
• [LoRA Attn Processors] Refactor LoRA Attn Processors (#4765) · 766aa50f
      Patrick von Platen authored
      * [LoRA Attn] Refactor LoRA attn
      
      * correct for network alphas
      
      * fix more
      
      * fix more tests
      
      * fix more tests
      
      * Move below
      
      * Finish
      
      * better version
      
      * correct serialization format
      
      * fix
      
      * fix more
      
      * fix more
      
      * fix more
      
      * Apply suggestions from code review
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
      
      * deprecation
      
* relax atol for slow test slightly
      
      * Finish tests
      
      * make style
      
      * make style
  5. 26 Aug, 2023 1 commit
  6. 24 Aug, 2023 1 commit
  7. 23 Aug, 2023 1 commit
• Fix AutoencoderTiny encoder scaling convention (#4682) · 052bf328
      Ollin Boer Bohan authored
      * Fix AutoencoderTiny encoder scaling convention
      
        * Add [-1, 1] -> [0, 1] rescaling to EncoderTiny
      
        * Move [0, 1] -> [-1, 1] rescaling from AutoencoderTiny.decode to DecoderTiny
          (i.e. immediately after the final conv, as early as possible)
      
        * Fix missing [0, 255] -> [0, 1] rescaling in AutoencoderTiny.forward
      
        * Update AutoencoderTinyIntegrationTests to protect against scaling issues.
          The new test constructs a simple image, round-trips it through AutoencoderTiny,
          and confirms the decoded result is approximately equal to the source image.
          This test checks behavior with and without tiling enabled.
          This test will fail if new AutoencoderTiny scaling issues are introduced.
      
        * Context: Raw TAESD weights expect images in [0, 1], but diffusers'
          convention represents images with zero-centered values in [-1, 1],
          so AutoencoderTiny needs to scale / unscale images at the start of
          encoding and at the end of decoding in order to work with diffusers.
      
      * Re-add existing AutoencoderTiny test, update golden values
      
      * Add comments to AutoencoderTiny.forward
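
To make the convention concrete, the two rescalings described above amount to the following (function names are illustrative):

```python
import torch

def diffusers_to_taesd(x: torch.Tensor) -> torch.Tensor:
    # EncoderTiny input: diffusers' zero-centered [-1, 1] -> raw TAESD [0, 1]
    return x.div(2).add(0.5)

def taesd_to_diffusers(x: torch.Tensor) -> torch.Tensor:
    # DecoderTiny output: raw TAESD [0, 1] -> diffusers' [-1, 1]
    return x.mul(2).sub(1)
```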
  8. 18 Aug, 2023 1 commit
  9. 17 Aug, 2023 4 commits
  10. 15 Aug, 2023 1 commit
  11. 11 Aug, 2023 1 commit
  12. 04 Aug, 2023 2 commits
  13. 02 Aug, 2023 2 commits
• [Feat] add tiny Autoencoder for (almost) instant decoding (#4384) · 18fc40c1
      Sayak Paul authored
      
      
      * add: model implementation of tiny autoencoder.
      
      * add: inits.
      
      * push the latest devs.
      
      * add: conversion script and finish.
      
      * add: scaling factor args.
      
      * debugging
      
      * fix denormalization.
      
      * fix: positional argument.
      
      * handle use_torch_2_0_or_xformers.
      
      * handle post_quant_conv
      
      * handle dtype
      
      * fix: sdxl image processor for tiny ae.
      
      * fix: sdxl image processor for tiny ae.
      
      * unify upcasting logic.
      
      * copied from madness.
      
      * remove trailing whitespace.
      
      * set is_tiny_vae = False
      
      * address PR comments.
      
      * change to AutoencoderTiny
      
* make act_fn a str throughout
      
      * fix: apply_forward_hook decorator call
      
      * get rid of the special is_tiny_vae flag.
      
      * directly scale the output.
      
      * fix dummies?
      
      * fix: act_fn.
      
      * get rid of the Clamp() layer.
      
      * bring back copied from.
      
      * movement of the blocks to appropriate modules.
      
      * add: docstrings to AutoencoderTiny
      
      * add: documentation.
      
      * changes to the conversion script.
      
      * add doc entry.
      
      * settle tests.
      
      * style
      
      * add one slow test.
      
      * fix
      
      * fix 2
      
      * fix 2
      
      * fix: 4
      
      * fix: 5
      
      * finish integration tests
      
      * Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * style
      
      ---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
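
As a usage sketch, swapping the tiny autoencoder into an existing pipeline looks roughly like this (checkpoint identifiers follow the docs entry added in this PR; treat the exact names as illustrative):

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# replace the full VAE with the tiny one for (almost) instant latent decoding
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
image = pipe("slice of delicious New York-style cheesecake").images[0]
```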
• [LoRA] Fix SDXL text encoder LoRAs (#4371) · 816ca004
      Sayak Paul authored
      
      
      * temporarily disable text encoder loras.
      
* debugging (×21)
      
      * modify doc.
      
      * rename tests.
      
      * print slices.
      
      * fix: assertions
      
      * Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  14. 28 Jul, 2023 1 commit
• [Feat] Support SDXL Kohya-style LoRA (#4287) · 4a4cdd6b
      Sayak Paul authored
      
      
      * sdxl lora changes.
      
      * better name replacement.
      
      * better replacement.
      
* debugging (×5)
      
      * remove print.
      
      * print state dict keys.
      
      * print
      
* distinguish better
      
      * debuggable.
      
* fix: tests
      
      * fix: arg from training script.
      
      * access from class.
      
      * run style
      
      * debug
      
      * save intermediate
      
      * some simplifications for SDXL LoRA
      
      * styling
      
      * unet config is not needed in diffusers format.
      
      * fix: dynamic SGM block mapping for SDXL kohya loras (#4322)
      
      * Use lora compatible layers for linear proj_in/proj_out (#4323)
      
      * improve condition for using the sgm_diffusers mapping
      
      * informative comment.
      
* load compatible keys and embedding layer mapping.
      
      * Get SDXL 1.0 example lora to load
      
      * simplify
      
* specify ranks and hidden sizes.
      
      * better handling of k rank and hidden
      
* debug (×5)
      
      * fix: alpha keys
      
      * add check for handling LoRAAttnAddedKVProcessor
      
      * sanity comment
      
      * modifications for text encoder SDXL
      
* debugging (×27)
      
* up (×6)
      
      * unneeded comments.
      
      * unneeded comments.
      
      * kwargs for the other attention processors.
      
      * kwargs for the other attention processors.
      
* debugging (×4)
      
      * improve
      
      * debugging
      
      * debugging
      
      * more print
      
      * Fix alphas
      
* debugging (×6)
      
      * clean up
      
      * clean up.
      
      * debugging
      
      * fix: text
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Batuhan Taskaya <batuhan@python.org>
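
For a rough picture of the key translation this loader performs — heavily simplified; the real code also handles the SGM block renames, `.alpha` scales, and text encoder prefixes mentioned above:

```python
def kohya_to_diffusers_key(key: str) -> str:
    # illustrative only: kohya flattens module paths with underscores and uses
    # lora_down/lora_up, while diffusers uses dotted paths and lora.down/lora.up
    key = key.replace("lora_unet_", "")
    key = key.replace(".lora_down.", ".lora.down.")
    key = key.replace(".lora_up.", ".lora.up.")
    # e.g. "down_blocks_0_attentions_0_..." still needs its underscores turned
    # back into dots, which requires knowing the actual module names
    return key
```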
  15. 27 Jul, 2023 1 commit
  16. 25 Jul, 2023 2 commits
  17. 21 Jul, 2023 1 commit
  18. 20 Jul, 2023 1 commit
  19. 19 Jul, 2023 2 commits
  20. 14 Jul, 2023 1 commit
• [Feat] add: utility for unloading lora. (#4034) · 692b7a90
      Sayak Paul authored
      * add: test for testing unloading lora.
      
* add: reason to skipif.
      
      * initial implementation of lora unload().
      
      * apply styling.
      
      * add: doc.
      
      * change checkpoints.
      
      * reinit generator
      
      * finalize slow test.
      
      * add fast test for unloading lora.
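
Usage sketch of the new utility (the LoRA checkpoint path is a placeholder; method names follow the PR title and docs entry):

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("path/to/lora")        # attach the LoRA layers

image_with_lora = pipe("a pixel-art cat").images[0]

pipe.unload_lora_weights()                    # restore the original attention processors
image_without_lora = pipe("a pixel-art cat").images[0]
```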
  21. 09 Jul, 2023 1 commit
• Refactor LoRA (#3778) · c2a28c34
      Will Berman authored
      
      
      * refactor to support patching LoRA into T5
      
      instantiate the lora linear layer on the same device as the regular linear layer
      
      get lora rank from state dict
      
      tests
      
      fmt
      
      can create lora layer in float32 even when rest of model is float16
      
      fix loading model hook
      
      remove load_lora_weights_ and T5 dispatching
      
      remove Unet#attn_processors_state_dict
      
      docstrings
      
      * text encoder monkeypatch class method
      
      * fix test
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
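
A minimal sketch of the pattern described in the first bullet (illustrative class, not the repo's exact implementation): the low-rank matrices are instantiated on the wrapped layer's device, and may stay in float32 even when the base model runs in float16.

```python
import torch
import torch.nn as nn

class LoRALinearLayer(nn.Module):
    def __init__(self, linear: nn.Linear, rank: int, dtype=torch.float32):
        super().__init__()
        device = linear.weight.device  # same device as the regular linear layer
        self.down = nn.Linear(linear.in_features, rank, bias=False, device=device, dtype=dtype)
        self.up = nn.Linear(rank, linear.out_features, bias=False, device=device, dtype=dtype)
        nn.init.normal_(self.down.weight, std=1 / rank)
        nn.init.zeros_(self.up.weight)  # start as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        orig_dtype = x.dtype
        # compute the low-rank update in the LoRA dtype (e.g. float32), then cast back
        return self.up(self.down(x.to(self.down.weight.dtype))).to(orig_dtype)
```

The rank can likewise be recovered from a loaded state dict rather than passed around, since the down-projection weight has shape (rank, in_features).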
  22. 06 Jul, 2023 1 commit
  23. 03 Jul, 2023 1 commit
  24. 28 Jun, 2023 1 commit
  25. 22 Jun, 2023 1 commit
• Correct bad attn naming (#3797) · 88d26946
      Patrick von Platen authored
      
      
      * relax tolerance slightly
      
      * correct incorrect naming
      
* correct naming
      
      * correct more
      
      * Apply suggestions from code review
      
      * Fix more
      
      * Correct more
      
      * correct incorrect naming
      
      * Update src/diffusers/models/controlnet.py
      
      * Correct flax
      
      * Correct renaming
      
      * Correct blocks
      
      * Fix more
      
      * Correct more
      
* make style (×5)
      
      * Fix flax
      
* make style
      
      * rename
      
      * rename
      
      * rename attn head dim to attention_head_dim
      
      * correct flax
      
      * make style
      
      * improve
      
      * Correct more
      
      * make style
      
      * fix more
      
* make style
      
      * Update src/diffusers/models/controlnet_flax.py
      
      * Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      ---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  26. 21 Jun, 2023 1 commit
  27. 16 Jun, 2023 1 commit
  28. 15 Jun, 2023 1 commit
  29. 12 Jun, 2023 1 commit
  30. 06 Jun, 2023 3 commits
• Add draft for lora text encoder scale (#3626) · 74fd735e
      Patrick von Platen authored
      
      
      * Add draft for lora text encoder scale
      
      * Improve naming
      
      * fix: training dreambooth lora script.
      
      * Apply suggestions from code review
      
      * Update examples/dreambooth/train_dreambooth_lora.py
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * add lora mixin when fit
      
      * add lora mixin when fit
      
      * add lora mixin when fit
      
      * fix more
      
      * fix more
      
      ---------
      Co-authored-by: default avatarSayak Paul <spsayakpaul@gmail.com>
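
A usage sketch of the scale knob this draft adds, assuming a pipeline `pipe` with LoRA weights already loaded (`cross_attention_kwargs` is the existing diffusers entry point for per-call attention arguments):

```python
# 1.0 applies the LoRA at full strength, 0.0 falls back to the base model;
# with this PR the scale also reaches the text encoder's LoRA layers
image = pipe(
    "masterpiece, best quality, mountain landscape",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```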
• [LoRA] feat: add lora attention processor for pt 2.0. (#3594) · 8669e831
      Sayak Paul authored
      * feat: add lora attention processor for pt 2.0.
      
      * explicit context manager for SDPA.
      
      * switch to flash attention
      
      * make shapes compatible to work optimally with SDPA.
      
      * fix: circular import problem.
      
      * explicitly specify the flash attention kernel in sdpa
      
      * fall back to efficient attention context manager.
      
      * remove explicit dispatch.
      
      * fix: removed processor.
      
      * fix: remove optional from type annotation.
      
      * feat: make changes regarding LoRAAttnProcessor2_0.
      
      * remove confusing warning.
      
      * formatting.
      
      * relax tolerance for PT 2.0
      
      * fix: loading message.
      
      * remove unnecessary logging.
      
      * add: entry to the docs.
      
      * add: network_alpha argument.
      
      * relax tolerance.
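
The gist of the new processor, heavily simplified (illustrative helper; the real `LoRAAttnProcessor2_0` also adds LoRA deltas to the query/key/value projections): run attention through PyTorch 2.0's fused kernel, then apply the low-rank update.

```python
import torch
import torch.nn.functional as F

def lora_sdpa_attention(q, k, v, to_out, lora_out, scale=1.0):
    # q, k, v: (batch, heads, seq_len, head_dim); to_out is the base output
    # projection, lora_out an illustrative callable computing the LoRA delta
    hidden = F.scaled_dot_product_attention(q, k, v)  # fused PT 2.0 kernel
    hidden = hidden.transpose(1, 2).flatten(2)        # back to (batch, seq_len, dim)
    return to_out(hidden) + scale * lora_out(hidden)  # base projection + LoRA delta
```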
• Add function to remove monkey-patch for text encoder LoRA (#3649) · b45204ea
      Takuma Mori authored
      * merge undoable-monkeypatch
      
      * remove TEXT_ENCODER_TARGET_MODULES, refactoring
      
      * move create_lora_weight_file
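
The shape of an undoable monkey-patch, as a sketch (names illustrative): keep a handle to the original `forward` when patching so that removal is just a restore.

```python
def monkeypatch_lora(module, lora_layer):
    if not hasattr(module, "_original_forward"):
        module._original_forward = module.forward  # remember the pre-patch forward

    def lora_forward(x):
        return module._original_forward(x) + lora_layer(x)

    module.forward = lora_forward

def remove_monkeypatch(module):
    module.forward = module._original_forward  # restore the original forward
    del module._original_forward
```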
  31. 05 Jun, 2023 1 commit