1. 25 Jul, 2023 3 commits
  2. 24 Jul, 2023 1 commit
  3. 21 Jul, 2023 2 commits
  4. 19 Jul, 2023 1 commit
  5. 17 Jul, 2023 1 commit
  6. 14 Jul, 2023 1 commit
    • [Feat] add: utility for unloading lora. (#4034) · 692b7a90
      Sayak Paul authored
      * add: test for testing unloading lora.
      
      * add: reason to skipif.
      
      * initial implementation of lora unload().
      
      * apply styling.
      
      * add: doc.
      
      * change checkpoints.
      
      * reinit generator
      
      * finalize slow test.
      
      * add fast test for unloading lora.
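The unload utility described in this commit comes down to remembering the original attention processors before LoRA patches them, then swapping them back on unload. A minimal sketch of that idea, assuming a simplified stand-in model — `ToyUNet`, `load_lora`, and `unload_lora` are illustrative names, not the diffusers API:

```python
class ToyUNet:
    """Stand-in for a UNet whose attention processors can be swapped out."""

    def __init__(self):
        self.attn_processors = {"down.attn1": "AttnProcessor"}


def load_lora(unet, lora_procs):
    # Remember the processors in place before patching, so LoRA
    # can later be unloaded cleanly.
    unet._original_attn_processors = dict(unet.attn_processors)
    unet.attn_processors.update(lora_procs)


def unload_lora(unet):
    # Restore the pre-LoRA processors, reverting the model to its
    # original behavior.
    unet.attn_processors = dict(unet._original_attn_processors)
```

In diffusers this landed as a pipeline-level unload method; the sketch only shows the save-then-restore pattern behind it.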
  7. 11 Jul, 2023 1 commit
  8. 09 Jul, 2023 1 commit
    • Refactor LoRA (#3778) · c2a28c34
      Will Berman authored
      * refactor to support patching LoRA into T5
      
      instantiate the lora linear layer on the same device as the regular linear layer
      
      get lora rank from state dict
      
      tests
      
      fmt
      
      can create lora layer in float32 even when rest of model is float16
      
      fix loading model hook
      
      remove load_lora_weights_ and T5 dispatching
      
      remove Unet#attn_processors_state_dict
      
      docstrings
      
      * text encoder monkeypatch class method
      
      * fix test
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
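Two of the ideas in this refactor — reading the LoRA rank off the state dict, and keeping the LoRA factors in float32 even when the base model runs in float16 — can be sketched in plain PyTorch. The `LoRALinear` class and the state-dict key below are illustrative, not the diffusers implementation:

```python
import torch


def infer_lora_rank(state_dict, down_key="to_q.lora.down.weight"):
    # The rank is the output dimension of the LoRA "down" projection,
    # i.e. the first axis of its weight matrix.
    return state_dict[down_key].shape[0]


class LoRALinear(torch.nn.Module):
    """LoRA adapter kept in float32 alongside a (possibly float16) base layer."""

    def __init__(self, base: torch.nn.Linear, rank: int):
        super().__init__()
        self.base = base
        # Create the low-rank factors on the same device as the base layer,
        # but in float32 for numerically stable training.
        dev = base.weight.device
        self.down = torch.nn.Linear(base.in_features, rank, bias=False).to(dev, torch.float32)
        self.up = torch.nn.Linear(rank, base.out_features, bias=False).to(dev, torch.float32)
        # Zero-init the "up" factor so the adapter starts as a no-op.
        torch.nn.init.zeros_(self.up.weight)

    def forward(self, x):
        # Run the LoRA branch in float32, then cast back to the input dtype.
        lora_out = self.up(self.down(x.to(torch.float32)))
        return self.base(x) + lora_out.to(x.dtype)
```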
  9. 07 Jul, 2023 1 commit
    • typo in safetensors (safetenstors) (#3976) · 1fbcc78d
      Yorai Levi authored
      * Update pipeline_utils.py
      
      typo in safetensors (safetenstors)
      
      * Update loaders.py
      
      typo in safetensors (safetenstors)
      
      * Update modeling_utils.py
      
      typo in safetensors (safetenstors)
  10. 06 Jul, 2023 1 commit
  11. 28 Jun, 2023 1 commit
  12. 21 Jun, 2023 1 commit
  13. 08 Jun, 2023 1 commit
  14. 06 Jun, 2023 5 commits
  15. 05 Jun, 2023 1 commit
  16. 02 Jun, 2023 2 commits
    • Iterate over unique tokens to avoid duplicate replacements for multivector embeddings (#3588) · a6c7b5b6
      Lachlan Nicholson authored
      * iterate over unique tokens to avoid duplicate replacements
      
      * added test for multiple references to multi embedding
      
      * adhere to black formatting
      
      * reorder test post-rebase
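The fix in this commit is simple: when the same multi-vector placeholder appears more than once in the token list, a naive loop expands it repeatedly. Iterating over the unique tokens replaces each placeholder exactly once. A minimal sketch (the function name and the `<tok>_i` expansion scheme are illustrative):

```python
def expand_multivector_prompt(prompt, tokens, num_vectors=3):
    # A multi-vector placeholder such as "<cat>" is expanded to
    # "<cat> <cat>_1 <cat>_2". Iterating over set(tokens) guarantees each
    # placeholder is replaced exactly once, even if the token list holds
    # duplicate references to the same embedding.
    for token in set(tokens):
        expansion = " ".join([token] + [f"{token}_{i}" for i in range(1, num_vectors)])
        prompt = prompt.replace(token, expansion)
    return prompt
```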
    • Support Kohya-ss style LoRA file format (in a limited capacity) (#3437) · 8e552bb4
      Takuma Mori authored
      * add _convert_kohya_lora_to_diffusers
      
      * make style
      
      * add scaffold
      
      * match result: unet attention only
      
      * fix monkey-patch for text_encoder
      
      * with CLIPAttention
      
      While the terrible images are no longer produced,
      the results do not match those from the hook version.
      This may be due to the network_alpha value not being set.
      
      * add to support network_alpha
      
      * generate diff image
      
      * fix monkey-patch for text_encoder
      
      * add test_text_encoder_lora_monkey_patch()
      
      * verify that it's okay to release the attn_procs
      
      * fix closure version
      
      * add comment
      
      * Revert "fix monkey-patch for text_encoder"
      
      This reverts commit bb9c61e6faecc1935c9c4319c77065837655d616.
      
      * Fix to reuse utility functions
      
      * make LoRAAttnProcessor targets to self_attn
      
      * fix LoRAAttnProcessor target
      
      * make style
      
      * fix split key
      
      * Update src/diffusers/loaders.py
      
      * remove TEXT_ENCODER_TARGET_MODULES loop
      
      * add print memory usage
      
      * remove test_kohya_loras_scaffold.py
      
      * add: doc on LoRA civitai
      
      * remove print statement and refactor in the doc.
      
      * fix state_dict test for kohya-ss style lora
      
      * Apply suggestions from code review
      Co-authored-by: Takuma Mori <takuma104@gmail.com>
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
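Two pieces of the Kohya-ss support above can be sketched: renaming flat, underscore-separated keys into the dotted diffusers layout, and scaling the LoRA update by `network_alpha / rank`. This is a rough illustration only — the real converter (`_convert_kohya_lora_to_diffusers`) handles many more key patterns than the numeric-index rewrite shown here:

```python
import re


def convert_kohya_key(key, prefix="lora_unet_"):
    # Illustrative sketch: strip the Kohya prefix and dot-separate numeric
    # indices, e.g. "down_blocks_0_attentions_0" -> "down_blocks.0.attentions.0".
    if key.startswith(prefix):
        key = key[len(prefix):]
    return re.sub(r"_(\d+)_", r".\1.", key)


def lora_scale(network_alpha, rank):
    # Kohya checkpoints carry a network_alpha; the LoRA update is scaled
    # by alpha / rank so results stay comparable across ranks.
    return network_alpha / rank
```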
  17. 30 May, 2023 1 commit
  18. 26 May, 2023 2 commits
  19. 17 May, 2023 1 commit
  20. 11 May, 2023 1 commit
  21. 09 May, 2023 2 commits
    • [docs] Improve safetensors docstring (#3368) · 26832aa5
      Steven Liu authored
      * clarify safetensor docstring
      
      * fix typo
      
      * apply feedback
    • if dreambooth lora (#3360) · a757b2db
      Will Berman authored
      * update IF stage I pipelines
      
      add fixed variance schedulers and lora loading
      
      * added kv lora attn processor
      
      * allow loading into alternative lora attn processor
      
      * make vae optional
      
      * throw away predicted variance
      
      * allow loading into added kv lora layer
      
      * allow load T5
      
      * allow pre compute text embeddings
      
      * set new variance type in schedulers
      
      * fix copies
      
      * refactor all prompt embedding code
      
      class prompts are now included in pre-encoding code
      max tokenizer length is now configurable
      embedding attention mask is now configurable
      
      * fix for when variance type is not defined on scheduler
      
      * do not pre compute validation prompt if not present
      
      * add example test for if lora dreambooth
      
      * add check for train text encoder and pre compute text embeddings
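The pre-computed text embeddings mentioned above follow a simple caching pattern: encode each unique prompt (instance, class, and validation prompts alike) once up front, after which the text encoder no longer needs to stay in memory during training. A minimal sketch, with `encode_fn` standing in for the actual text-encoder call:

```python
def precompute_prompt_embeddings(prompts, encode_fn):
    # Encode each unique prompt exactly once and cache the result; the
    # text encoder can then be freed for the rest of training.
    cache = {}
    for prompt in prompts:
        if prompt not in cache:
            cache[prompt] = encode_fn(prompt)
    return cache
```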
  22. 08 May, 2023 1 commit
    • Batched load of textual inversions (#3277) · 3d8b3d7c
      pdoane authored
      * Batched load of textual inversions
      
      - Only call resize_token_embeddings once per batch as it is the most expensive operation
      - Allow pretrained_model_name_or_path and token to be an optional list
      - Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function
      - Add comment that single files (e.g. .pt/.safetensors) are supported
      - Add comment for token parameter
      - Convert token override log message from warning to info
      
      * Update src/diffusers/loaders.py
      
      Check for duplicate tokens
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update condition for None tokens
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
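The optimization in this commit is that resizing the token-embedding matrix is the expensive step, so all new tokens are registered first and the resize runs once per batch, with duplicate tokens rejected up front. A self-contained sketch using toy stand-ins for the tokenizer vocabulary and text encoder (the class and function names are illustrative):

```python
class ToyTextEncoder:
    """Minimal stand-in that tracks how often its embedding matrix is resized."""

    def __init__(self, vocab_size, dim=4):
        self.weight = [[0.0] * dim for _ in range(vocab_size)]
        self.resize_calls = 0

    def resize_token_embeddings(self, new_size):
        # Grow the embedding matrix to hold new_size rows.
        self.resize_calls += 1
        while len(self.weight) < new_size:
            self.weight.append([0.0] * len(self.weight[0]))


def load_textual_inversions(tokenizer_vocab, encoder, inversions):
    # inversions: list of (token, embedding) pairs. All tokens are added
    # first so the expensive resize_token_embeddings call runs once per
    # batch instead of once per token.
    tokens = [tok for tok, _ in inversions]
    if len(set(tokens)) != len(tokens):
        raise ValueError("duplicate tokens in batch")
    for tok in tokens:
        tokenizer_vocab[tok] = len(tokenizer_vocab)
    encoder.resize_token_embeddings(len(tokenizer_vocab))
    for tok, emb in inversions:
        encoder.weight[tokenizer_vocab[tok]] = emb
```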
  23. 28 Apr, 2023 1 commit
  24. 25 Apr, 2023 2 commits
  25. 20 Apr, 2023 2 commits
  26. 19 Apr, 2023 1 commit
  27. 12 Apr, 2023 2 commits