1. 26 Jul, 2024 1 commit
    • [Chore] add `LoraLoaderMixin` to the inits (#8981) · d87fe95f
      Sayak Paul authored
      
      
      * introduce `LoraBaseMixin` to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      * loraloadermixin.
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
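      The entry points this LoRA refactor touches are the pipeline-level loader methods (load_lora_weights, set_adapters, fuse_lora, unfuse_lora). A minimal usage sketch, assuming a recent diffusers release; the LoRA repository id and adapter name below are placeholders, not taken from the commits:

      ```python
      # Illustrative sketch only: the LoRA loader API surface the mixin refactor
      # reorganizes. "some-user/some-lora" and "my_adapter" are hypothetical
      # placeholders.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      )
      pipe.load_lora_weights("some-user/some-lora", adapter_name="my_adapter")
      pipe.set_adapters(["my_adapter"], adapter_weights=[0.8])  # scale the adapter
      pipe.fuse_lora()    # merge LoRA weights into the base model
      pipe.unfuse_lora()  # undo the merge, restoring the original weights
      ```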
  2. 25 Jul, 2024 2 commits
    • Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976) · 62863bb1
      YiYi Xu authored
      Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)"
      
      This reverts commit 527430d0.
    • [LoRA] introduce LoraBaseMixin to promote reusability. (#8774) · 527430d0
      Sayak Paul authored
      
      
      * introduce `LoraBaseMixin` to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
  3. 29 May, 2024 1 commit
  4. 10 May, 2024 1 commit
    • #7535 Update FloatTensor type hints to Tensor (#7883) · be4afa0b
      Mark Van Aken authored
      * find & replace all FloatTensors to Tensor
      
      * apply formatting
      
      * Update torch.FloatTensor to torch.Tensor in the remaining files
      
      * formatting
      
      * Fix the rest of the places where FloatTensor is used as well as in documentation
      
      * formatting
      
      * Update new file from FloatTensor to Tensor
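      The change in #7883 is mechanical: public type hints move from the concrete torch.FloatTensor to the general torch.Tensor, with no runtime behaviour change. A minimal before/after sketch with a hypothetical signature, not an excerpt from the diff:

      ```python
      import torch

      # Before #7883 (float32-only hint):
      #     def forward(self, sample: torch.FloatTensor) -> torch.FloatTensor: ...

      # After #7883 (dtype-agnostic hint; the tensors passed at runtime are unchanged):
      def forward(sample: torch.Tensor) -> torch.Tensor:
          return sample
      ```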
  5. 26 Mar, 2024 1 commit
  6. 28 Feb, 2024 1 commit
  7. 16 Feb, 2024 1 commit
  8. 08 Feb, 2024 1 commit
  9. 01 Dec, 2023 1 commit
  10. 30 Nov, 2023 1 commit
  11. 14 Nov, 2023 1 commit
  12. 11 Nov, 2023 1 commit