1. 24 Apr, 2025 1 commit
  2. 10 Jan, 2025 1 commit
  3. 08 Jan, 2025 1 commit
  4. 09 Oct, 2024 1 commit
  5. 26 Jul, 2024 1 commit
    • [Chore] add `LoraLoaderMixin` to the inits (#8981) · d87fe95f
      Sayak Paul authored
      
      
      * introduce LoraBaseMixin to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      * loraloadermixin.
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      d87fe95f
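
      A minimal usage sketch of the pipeline-level LoRA API that LoraBaseMixin and the per-pipeline loader mixins sit behind; the checkpoint and adapter names below are placeholders, not taken from the commit:

          import torch
          from diffusers import StableDiffusionXLPipeline

          pipe = StableDiffusionXLPipeline.from_pretrained(
              "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
          ).to("cuda")

          # Load a LoRA adapter, weight it, and optionally fuse it into the base weights.
          pipe.load_lora_weights("user/some-sdxl-lora", adapter_name="style")  # placeholder repo id
          pipe.set_adapters(["style"], adapter_weights=[0.8])
          pipe.fuse_lora()    # merge adapter weights for faster inference
          pipe.unfuse_lora()  # restore the original, unfused weights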
  6. 25 Jul, 2024 2 commits
    • Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976) · 62863bb1
      YiYi Xu authored
      Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)"
      
      This reverts commit 527430d0.
      62863bb1
    • [LoRA] introduce LoraBaseMixin to promote reusability. (#8774) · 527430d0
      Sayak Paul authored
      
      
      * introduce LoraBaseMixin to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      527430d0
  7. 29 May, 2024 1 commit
  8. 10 May, 2024 1 commit
    • #7535 Update FloatTensor type hints to Tensor (#7883) · be4afa0b
      Mark Van Aken authored
      * find & replace all FloatTensors to Tensor
      
      * apply formatting
      
      * Update torch.FloatTensor to torch.Tensor in the remaining files
      
      * formatting
      
      * Fix the rest of the places where FloatTensor is used as well as in documentation
      
      * formatting
      
      * Update new file from FloatTensor to Tensor
      be4afa0b
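
      Illustrative only: the change widens annotations rather than behaviour, e.g. a forward signature goes from torch.FloatTensor to torch.Tensor (the signature below is a made-up example, not copied from the diff):

          import torch

          # Before #7883 (illustrative):
          #     def forward(self, sample: torch.FloatTensor) -> torch.FloatTensor: ...
          # After:
          def forward(sample: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
              # Runtime behaviour is unchanged; only the type hints are broader.
              return sample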
  9. 11 Apr, 2024 1 commit
  10. 05 Apr, 2024 1 commit
  11. 18 Mar, 2024 1 commit
  12. 13 Mar, 2024 1 commit
  13. 20 Nov, 2023 1 commit
  14. 07 Nov, 2023 1 commit
  15. 18 Oct, 2023 1 commit
  16. 25 Sep, 2023 2 commits
  17. 12 Sep, 2023 1 commit
  18. 11 Sep, 2023 2 commits
    • Refactor model offload (#4514) · 93579650
      Patrick von Platen authored
      
      
      * [Draft] Refactor model offload
      
      * [Draft] Refactor model offload
      
      * Apply suggestions from code review
      
      * cpu offload updates
      
      * remove model cpu offload from individual pipelines
      
      * add hook to offload models to cpu
      
      * clean up
      
      * model offload
      
      * add model cpu offload string
      
      * make style
      
      * clean up
      
      * fixes for offload issues
      
      * fix tests issues
      
      * resolve merge conflicts
      
      * update src/diffusers/pipelines/pipeline_utils.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * make style
      
      * Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      93579650
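
      A minimal sketch of the user-facing entry point this refactor centralizes in pipeline_utils; the model id is illustrative:

          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          )
          # Instead of pipe.to("cuda"): accelerate hooks move each sub-model to the GPU
          # only while it runs, then offload it back to the CPU.
          pipe.enable_model_cpu_offload()
          image = pipe("an astronaut riding a horse").images[0]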
    • Lazy Import for Diffusers (#4829) · b6e0b016
      Dhruv Nair authored
      
      
      * initial commit
      
      * move modules to import struct
      
      * add dummy objects and _LazyModule
      
      * add lazy import to schedulers
      
      * clean up unused imports
      
      * lazy import on models module
      
      * lazy import for schedulers module
      
      * add lazy import to pipelines module
      
      * lazy import altdiffusion
      
      * lazy import audio diffusion
      
      * lazy import audioldm
      
      * lazy import consistency model
      
      * lazy import controlnet
      
      * lazy import dance diffusion ddim ddpm
      
      * lazy import deepfloyd
      
      * lazy import kandinsky
      
      * lazy imports
      
      * lazy import semantic diffusion
      
      * lazy imports
      
      * lazy import stable diffusion
      
      * move sd output to its own module
      
      * clean up
      
      * lazy import t2iadapter
      
      * lazy import unclip
      
      * lazy import versatile and vq diffusion
      
      * lazy import vq diffusion
      
      * helper to fetch objects from modules
      
      * lazy import sdxl
      
      * lazy import txt2vid
      
      * lazy import stochastic karras
      
      * fix model imports
      
      * fix bug
      
      * lazy import
      
      * clean up
      
      * clean up
      
      * fixes for tests
      
      * fixes for tests
      
      * clean up
      
      * remove import of torch_utils from utils module
      
      * clean up
      
      * clean up
      
      * fix mistaken import statement
      
      * dedicated modules for exporting and loading
      
      * remove testing utils from utils module
      
      * fixes from merge conflicts
      
      * Update src/diffusers/pipelines/kandinsky2_2/__init__.py
      
      * fix docs
      
      * fix alt diffusion copied from
      
      * fix check dummies
      
      * fix more docs
      
      * remove accelerate import from utils module
      
      * add type checking
      
      * make style
      
      * fix check dummies
      
      * remove torch import from xformers check
      
      * clean up error message
      
      * fixes after upstream merges
      
      * dummy objects fix
      
      * fix tests
      
      * remove unused module import
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      b6e0b016
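
      A condensed sketch of the _LazyModule pattern this PR rolls out across the package's __init__.py files; the entries in _import_structure are illustrative, not the real module map:

          import sys
          from typing import TYPE_CHECKING

          from .utils import _LazyModule

          _import_structure = {
              "models": ["UNet2DConditionModel"],
              "schedulers": ["DDIMScheduler"],
          }

          if TYPE_CHECKING:
              # Type checkers and IDEs still resolve the real symbols eagerly.
              from .models import UNet2DConditionModel
              from .schedulers import DDIMScheduler
          else:
              # At runtime, submodules are imported only when an attribute is first accessed.
              sys.modules[__name__] = _LazyModule(
                  __name__, globals()["__file__"], _import_structure, module_spec=__spec__
              )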
  19. 26 Jul, 2023 1 commit
  20. 18 Jul, 2023 1 commit
  21. 31 May, 2023 1 commit
  22. 26 May, 2023 1 commit
  23. 17 May, 2023 1 commit
  24. 09 May, 2023 1 commit
    • if dreambooth lora (#3360) · a757b2db
      Will Berman authored
      * update IF stage I pipelines
      
      add fixed variance schedulers and lora loading
      
      * added kv lora attn processor
      
      * allow loading into alternative lora attn processor
      
      * make vae optional
      
      * throw away predicted variance
      
      * allow loading into added kv lora layer
      
      * allow load T5
      
      * allow pre compute text embeddings
      
      * set new variance type in schedulers
      
      * fix copies
      
      * refactor all prompt embedding code
      
      class prompts are now included in pre-encoding code
      max tokenizer length is now configurable
      embedding attention mask is now configurable
      
      * fix for when variance type is not defined on scheduler
      
      * do not pre compute validation prompt if not present
      
      * add example test for if lora dreambooth
      
      * add check for train text encoder and pre compute text embeddings
      a757b2db
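
      A rough sketch of the pre-computed text-embedding flow this commit enables: encode prompts once with the T5 encoder, then free it before LoRA training. The repo id and loading flow are assumptions based on the DeepFloyd IF pipelines, not lifted from the training script:

          import gc
          import torch
          from diffusers import DiffusionPipeline

          # Load only what is needed to encode prompts (the UNet is skipped).
          pipe = DiffusionPipeline.from_pretrained(
              "DeepFloyd/IF-I-XL-v1.0", unet=None, torch_dtype=torch.float16
          )
          pipe.to("cuda")
          prompt_embeds, negative_embeds = pipe.encode_prompt("a photo of sks dog")

          # The large T5 encoder is no longer needed once embeddings are cached.
          del pipe
          gc.collect()
          torch.cuda.empty_cache()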
  25. 05 May, 2023 1 commit
  26. 03 May, 2023 1 commit
  27. 02 May, 2023 1 commit
  28. 01 May, 2023 1 commit
    • Torch compile graph fix (#3286) · 0e82fb19
      Patrick von Platen authored
      * fix more
      
      * Fix more
      
      * fix more
      
      * Apply suggestions from code review
      
      * fix
      
      * make style
      
      * make fix-copies
      
      * fix
      
      * make sure torch compile
      
      * Clean
      
      * fix test
      0e82fb19
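
      Roughly, the usage pattern this fix keeps free of graph breaks; the model id is illustrative:

          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          # Compile the UNet; fullgraph=True fails loudly if the graph still breaks.
          pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
          image = pipe("an astronaut riding a horse").images[0]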
  29. 27 Apr, 2023 1 commit
  30. 25 Apr, 2023 1 commit
    • add model (#3230) · e51f19ae
      Patrick von Platen authored
      
      
      * add
      
      * clean
      
      * up
      
      * clean up more
      
      * fix more tests
      
      * Improve docs further
      
      * improve
      
      * more fixes docs
      
      * Improve docs more
      
      * Update src/diffusers/models/unet_2d_condition.py
      
      * fix
      
      * up
      
      * update doc links
      
      * make fix-copies
      
      * add safety checker and watermarker to stage 3 doc page code snippets
      
      * speed optimizations docs
      
      * memory optimization docs
      
      * make style
      
      * add watermarking snippets to doc string examples
      
      * make style
      
      * use pt_to_pil helper functions in doc strings
      
      * skip mps tests
      
      * Improve safety
      
      * make style
      
      * new logic
      
      * fix
      
      * fix bad onnx design
      
      * make new stable diffusion upscale pipeline model arguments optional
      
      * define has_nsfw_concept when non-pil output type
      
      * lowercase linked-to notebook name
      
      ---------
      Co-authored-by: William Berman <WLBberman@gmail.com>
      e51f19ae
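
      A compressed sketch of the stage-I usage the PR documents, including the pt_to_pil helper mentioned above; the repo id and prompt are illustrative:

          import torch
          from diffusers import DiffusionPipeline
          from diffusers.utils import pt_to_pil

          stage_1 = DiffusionPipeline.from_pretrained(
              "DeepFloyd/IF-I-XL-v1.0", torch_dtype=torch.float16
          )
          stage_1.enable_model_cpu_offload()

          prompt_embeds, negative_embeds = stage_1.encode_prompt("a corgi wearing sunglasses")
          image = stage_1(
              prompt_embeds=prompt_embeds,
              negative_prompt_embeds=negative_embeds,
              output_type="pt",
          ).images
          pt_to_pil(image)[0].save("if_stage_I.png")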