1. 06 Jun, 2023 4 commits
  2. 05 Jun, 2023 1 commit
  3. 02 Jun, 2023 2 commits
    • Iterate over unique tokens to avoid duplicate replacements for multivector embeddings (#3588) · a6c7b5b6
      Lachlan Nicholson authored
      * iterate over unique tokens to avoid duplicate replacements
      
      * added test for multiple references to multi embedding
      
      * adhere to black formatting
      
      * reorder test post-rebase
      a6c7b5b6
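The duplicate-replacement fix above can be sketched roughly as follows; `expand_multivector_tokens` and its inputs are hypothetical stand-ins, not the diffusers API. Iterating over the *set* of prompt tokens means `str.replace` (which already substitutes every occurrence) runs once per placeholder, so a learned token appearing twice in the prompt is not expanded twice.

```python
def expand_multivector_tokens(prompt: str, multivector: dict) -> str:
    """Replace each learned placeholder token with its multi-vector expansion.

    `multivector` maps a placeholder (e.g. "<cat>") to its list of sub-tokens
    (e.g. ["<cat>", "<cat>_1"]). Iterating over unique tokens avoids the bug
    where a duplicated placeholder triggers a second, nested replacement.
    """
    tokens = prompt.split()
    for token in set(tokens):  # unique tokens only
        if token in multivector:
            replacement = " ".join(multivector[token])
            prompt = prompt.replace(token, replacement)
    return prompt
```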
    • Support Kohya-ss style LoRA file format (in a limited capacity) (#3437) · 8e552bb4
      Takuma Mori authored
      * add _convert_kohya_lora_to_diffusers
      
      * make style
      
      * add scaffold
      
      * match result: unet attention only
      
      * fix monkey-patch for text_encoder
      
      * with CLIPAttention
      
      While the terrible images are no longer produced,
      the results still do not match those from the hook version.
      This may be because the network_alpha value is not being set.
      
      * add to support network_alpha
      
      * generate diff image
      
      * fix monkey-patch for text_encoder
      
      * add test_text_encoder_lora_monkey_patch()
      
      * verify that it's okay to release the attn_procs
      
      * fix closure version
      
      * add comment
      
      * Revert "fix monkey-patch for text_encoder"
      
      This reverts commit bb9c61e6faecc1935c9c4319c77065837655d616.
      
      * Fix to reuse utility functions
      
      * make LoRAAttnProcessor targets to self_attn
      
      * fix LoRAAttnProcessor target
      
      * make style
      
      * fix split key
      
      * Update src/diffusers/loaders.py
      
      * remove TEXT_ENCODER_TARGET_MODULES loop
      
      * add print memory usage
      
      * remove test_kohya_loras_scaffold.py
      
      * add: doc on LoRA civitai
      
      * remove print statement and refactor in the doc.
      
      * fix state_dict test for kohya-ss style lora
      
      * Apply suggestions from code review
      Co-authored-by: Takuma Mori <takuma104@gmail.com>
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      8e552bb4
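The network_alpha handling mentioned in the commit messages above can be sketched like this, assuming the usual Kohya-ss convention of scaling the LoRA update by alpha / rank. `apply_lora` is a hypothetical helper for illustration, not the loader's actual code.

```python
import numpy as np

def apply_lora(weight, lora_down, lora_up, network_alpha=None, scale=1.0):
    """Merge one LoRA pair into a base weight matrix.

    Kohya-ss checkpoints carry a per-module alpha; the effective update is
    (alpha / rank) * lora_up @ lora_down. Omitting this scaling is one way
    to get results that do not match the trainer's output.
    """
    rank = lora_down.shape[0]  # lora_down has shape (rank, in_features)
    alpha_scale = network_alpha / rank if network_alpha is not None else 1.0
    return weight + scale * alpha_scale * (lora_up @ lora_down)
```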
  4. 30 May, 2023 1 commit
  5. 26 May, 2023 2 commits
  6. 17 May, 2023 1 commit
  7. 11 May, 2023 1 commit
  8. 09 May, 2023 2 commits
    • [docs] Improve safetensors docstring (#3368) · 26832aa5
      Steven Liu authored
      * clarify safetensor docstring
      
      * fix typo
      
      * apply feedback
      26832aa5
    • if dreambooth lora (#3360) · a757b2db
      Will Berman authored
      * update IF stage I pipelines
      
      add fixed variance schedulers and lora loading
      
      * added kv lora attn processor
      
      * allow loading into alternative lora attn processor
      
      * make vae optional
      
      * throw away predicted variance
      
      * allow loading into added kv lora layer
      
      * allow load T5
      
      * allow pre compute text embeddings
      
      * set new variance type in schedulers
      
      * fix copies
      
      * refactor all prompt embedding code
      
      - class prompts are now included in the pre-encoding code
      - max tokenizer length is now configurable
      - embedding attention mask is now configurable
      
      * fix for when variance type is not defined on scheduler
      
      * do not pre compute validation prompt if not present
      
      * add example test for if lora dreambooth
      
      * add check for train text encoder and pre compute text embeddings
      a757b2db
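The pre-computed text-embedding step described above might look roughly like this (the function name is an illustrative stand-in, not the training script's API): each unique prompt, including the class prompt, is encoded once before the training loop, so the text encoder does not need to stay resident during training.

```python
def precompute_text_embeddings(prompts, encode_fn):
    """Encode each unique prompt exactly once, up front.

    Returns a prompt -> embedding cache. Because every prompt is encoded
    before the loop starts, the text encoder can be moved off the device
    (or discarded) for the rest of training.
    """
    cache = {}
    for prompt in prompts:
        if prompt not in cache:
            cache[prompt] = encode_fn(prompt)
    return cache
```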
  9. 08 May, 2023 1 commit
    • Batched load of textual inversions (#3277) · 3d8b3d7c
      pdoane authored
      * Batched load of textual inversions
      
      - Only call resize_token_embeddings once per batch as it is the most expensive operation
      - Allow pretrained_model_name_or_path and token to be an optional list
      - Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
      - Add comment that single files (e.g. .pt/.safetensors) are supported
      - Add comment for token parameter
      - Convert token override log message from warning to info
      
      * Update src/diffusers/loaders.py
      
      Check for duplicate tokens
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update condition for None tokens
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      3d8b3d7c
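The batching and duplicate-token check described above can be sketched as follows; the names are illustrative stand-ins (the real code operates on a tokenizer and a single `resize_token_embeddings` call), not the actual `loaders.py` implementation.

```python
def batched_add_tokens(vocab, embedding_rows, new_items):
    """Add a batch of textual-inversion tokens with one embedding resize.

    `vocab` stands in for the tokenizer vocabulary (list of token strings)
    and `embedding_rows` for the rows of the text-encoder input embedding.
    Duplicate or already-present tokens raise before anything is mutated,
    and the embedding is grown once for the whole batch instead of once
    per token (resizing is the most expensive step).
    """
    tokens = [token for token, _ in new_items]
    if len(set(tokens)) != len(tokens):
        raise ValueError("duplicate tokens in batch")
    for token in tokens:
        if token in vocab:
            raise ValueError(f"token {token!r} already in tokenizer")
    # single "resize": extend vocabulary and embedding once for the batch
    vocab.extend(tokens)
    embedding_rows.extend(vector for _, vector in new_items)
    return vocab, embedding_rows
```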
  10. 28 Apr, 2023 1 commit
  11. 25 Apr, 2023 2 commits
  12. 20 Apr, 2023 2 commits
  13. 19 Apr, 2023 1 commit
  14. 12 Apr, 2023 2 commits
  15. 31 Mar, 2023 2 commits
  16. 30 Mar, 2023 1 commit
  17. 27 Mar, 2023 1 commit
  18. 16 Mar, 2023 1 commit
  19. 15 Mar, 2023 1 commit
  20. 14 Mar, 2023 1 commit
  21. 04 Mar, 2023 1 commit
  22. 03 Mar, 2023 1 commit
  23. 01 Mar, 2023 1 commit
  24. 27 Jan, 2023 1 commit
  25. 18 Jan, 2023 1 commit