  1. 08 Jul, 2024 1 commit
  2. 03 Jul, 2024 1 commit
  3. 25 Jun, 2024 1 commit
  4. 21 Jun, 2024 1 commit
  5. 13 Jun, 2024 1 commit
  6. 12 Jun, 2024 2 commits
  7. 01 Jun, 2024 1 commit
  8. 10 May, 2024 1 commit
    • Mark Van Aken's avatar
      #7535 Update FloatTensor type hints to Tensor (#7883) · be4afa0b
      Mark Van Aken authored
      * find & replace all FloatTensors to Tensor
      
      * apply formatting
      
      * Update torch.FloatTensor to torch.Tensor in the remaining files
      
      * formatting
      
      * Fix the rest of the places where FloatTensor is used as well as in documentation
      
      * formatting
      
      * Update new file from FloatTensor to Tensor
      be4afa0b
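The commit above replaces `torch.FloatTensor` annotations with the broader `torch.Tensor`. A minimal sketch of that kind of annotation change, using string annotations so it runs without PyTorch installed (the function name and signature are illustrative, not from the diff):

```python
# torch.FloatTensor only describes float32 CPU tensors, while torch.Tensor
# covers every dtype and device, so half/bfloat16 inputs type-check too.

def scale_noise_old(sample: "torch.FloatTensor") -> "torch.FloatTensor":
    """Before: annotation over-narrows the accepted tensor type."""

def scale_noise_new(sample: "torch.Tensor") -> "torch.Tensor":
    """After: the find-and-replace widens the hint to any Tensor."""

print(scale_noise_new.__annotations__["sample"])  # torch.Tensor
```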
  9. 03 May, 2024 1 commit
  10. 22 Apr, 2024 1 commit
  11. 09 Apr, 2024 1 commit
  12. 02 Apr, 2024 2 commits
  13. 18 Mar, 2024 1 commit
    • M. Tolga Cangöz's avatar
      Fix Typos (#7325) · 6a05b274
      M. Tolga Cangöz authored
      * Fix PyTorch's convention for inplace functions
      
      * Fix import structure in __init__.py and update config loading logic in test_config.py
      
      * Update configuration access
      
      * Fix typos
      
      * Trim trailing white spaces
      
      * Fix typo in logger name
      
      * Revert "Fix PyTorch's convention for inplace functions"
      
      This reverts commit f65dc4afcb57ceb43d5d06389229d47bafb10d2d.
      
      * Fix typo in step_index property description
      
      * Revert "Update configuration access"
      
      This reverts commit 8d44e870b8c1ad08802e3e904c34baeca1b598f8.
      
      * Revert "Fix import structure in __init__.py and update config loading logic in test_config.py"
      
      This reverts commit 2ad5e8bca25aede3b912da22bd57285b598fe171.
      
      * Fix typos
      
      * Fix typos
      
      * Fix typos
      
      * Fix a typo: tranform -> transform
      6a05b274
  14. 13 Mar, 2024 1 commit
    • Sayak Paul's avatar
[LoRA] use the PyTorch classes wherever needed and start deprecation cycles (#7204) · 531e7191
      Sayak Paul authored
* fix PyTorch classes and start deprecation cycles.
      
      * remove args crafting for accommodating scale.
      
      * remove scale check in feedforward.
      
      * assert against nn.Linear and not CompatibleLinear.
      
* remove conv_cls and linear_cls.
      
      * remove scale
      
* 👋 scale.
      
      * fix: unet2dcondition
      
      * fix attention.py
      
      * fix: attention.py again
      
      * fix: unet_2d_blocks.
      
      * fix-copies.
      
      * more fixes.
      
      * fix: resnet.py
      
      * more fixes
      
      * fix i2vgenxl unet.
      
* deprecate scale gently.
      
      * fix-copies
      
      * Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * quality
      
* throw warning when scale is passed to the BasicTransformerBlock class.
      
      * remove scale from signature.
      
      * cross_attention_kwargs, very nice catch by Yiyi
      
      * fix: logger.warn
      
      * make deprecation message clearer.
      
      * address final comments.
      
* maintain same deprecation message and also add it to activations.
      
      * address yiyi
      
      * fix copies
      
      * Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
* more deprecation
      
      * fix-copies
      
      ---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      531e7191
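The log above mentions throwing a warning when `scale` is still passed (via `cross_attention_kwargs`) after the deprecation. A hedged sketch of that pattern; the `deprecate` helper here is a simplified stand-in for diffusers' internal utility, and the class body is illustrative only:

```python
import warnings

def deprecate(name: str, version: str, message: str) -> None:
    # simplified stand-in for diffusers' internal `deprecate` helper
    warnings.warn(
        f"`{name}` is deprecated and will be removed in version {version}. {message}",
        FutureWarning,
        stacklevel=2,
    )

class BasicTransformerBlockSketch:
    """Sketch: warn when callers still pass `scale` in cross_attention_kwargs."""

    def forward(self, hidden_states, cross_attention_kwargs=None):
        if cross_attention_kwargs and "scale" in cross_attention_kwargs:
            deprecate("scale", "1.0.0", "The `scale` argument is ignored here.")
            # drop the deprecated key before any downstream use
            cross_attention_kwargs = {
                k: v for k, v in cross_attention_kwargs.items() if k != "scale"
            }
        return hidden_states
```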
  15. 28 Feb, 2024 1 commit
  16. 25 Feb, 2024 1 commit
  17. 19 Feb, 2024 1 commit
  18. 18 Feb, 2024 1 commit
  19. 10 Feb, 2024 1 commit
  20. 08 Feb, 2024 1 commit
  21. 31 Jan, 2024 1 commit
  22. 03 Jan, 2024 1 commit
  23. 06 Dec, 2023 1 commit
    • Sayak Paul's avatar
      [feat] allow SDXL pipeline to run with fused QKV projections (#6030) · a2bc2e14
      Sayak Paul authored
      
      
      * debug
      
      * from step
      
      * print
      
      * turn sigma a list
      
      * make str
      
      * init_noise_sigma
      
      * comment
      
      * remove prints
      
      * feat: introduce fused projections
      
      * change to a better name
      
      * no grad
      
      * device.
      
      * device
      
      * dtype
      
      * okay
      
      * print
      
      * more print
      
      * fix: unbind -> split
      
* fix: qkv -> k
      
      * enable disable
      
      * apply attention processor within the method
      
      * attn processors
      
      * _enable_fused_qkv_projections
      
      * remove print
      
      * add fused projection to vae
      
      * add todos.
      
      * add: documentation and cleanups.
      
      * add: test for qkv projection fusion.
      
      * relax assertions.
      
      * relax further
      
      * fix: docs
      
      * fix-copies
      
      * correct error message.
      
      * Empty-Commit
      
      * better conditioning on disable_fused_qkv_projections
      
      * check
      
      * check processor
      
      * bfloat16 computation.
      
      * check latent dtype
      
      * style
      
      * remove copy temporarily
      
      * cast latent to bfloat16
      
      * fix: vae -> self.vae
      
      * remove print.
      
      * add _change_to_group_norm_32
      
      * comment out stuff that didn't work
      
      * Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * reflect patrick's suggestions.
      
      * fix imports
      
      * fix: disable call.
      
      * fix more
      
      * fix device and dtype
      
      * fix conditions.
      
      * fix more
      
      * Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      a2bc2e14
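The fused-QKV commit above (including its "fix: unbind -> split" step) boils down to concatenating the three projection weights, doing one matmul, and splitting the result. A dependency-free sketch with toy nested-list "weight matrices" (all values illustrative):

```python
# Instead of three separate projections W_q, W_k, W_v applied to the same
# input, concatenate them row-wise into one W_qkv, run a single matmul,
# then split the output back into q, k, v.

def matmul(x, w):
    # x: (n, d), w: (m, d) -> (n, m), i.e. x @ w.T
    return [[sum(a * b for a, b in zip(row, wrow)) for wrow in w] for row in x]

W_q = [[1, 0], [0, 1]]
W_k = [[2, 0], [0, 2]]
W_v = [[3, 0], [0, 3]]
x = [[1.0, 2.0]]

# unfused: three matmuls over the same input
q, k, v = matmul(x, W_q), matmul(x, W_k), matmul(x, W_v)

# fused: one matmul with the concatenated weight, then split
W_qkv = W_q + W_k + W_v
out = matmul(x, W_qkv)[0]
d = len(W_q)
q2, k2, v2 = out[:d], out[d : 2 * d], out[2 * d :]

assert q2 == q[0] and k2 == k[0] and v2 == v[0]
```

The fused form touches the shared input once, which is what makes the fusion profitable on real hardware.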
  24. 01 Dec, 2023 1 commit
  25. 24 Nov, 2023 1 commit
    • Patrick von Platen's avatar
      [@cene555][Kandinsky 3.0] Add Kandinsky 3.0 (#5913) · b978334d
      Patrick von Platen authored
      * finalize
      
      * finalize
      
      * finalize
      
      * add slow test
      
      * add slow test
      
      * add slow test
      
      * Fix more
      
      * add slow test
      
      * fix more
      
      * fix more
      
      * fix more
      
      * fix more
      
      * fix more
      
      * fix more
      
      * fix more
      
      * fix more
      
      * fix more
      
      * Better
      
      * Fix more
      
      * Fix more
      
      * add slow test
      
      * Add auto pipelines
      
      * add slow test
      
      * Add all
      
      * add slow test
      
      * add slow test
      
      * add slow test
      
      * add slow test
      
      * add slow test
      
      * Apply suggestions from code review
      
      * add slow test
      
      * add slow test
      b978334d
  26. 21 Nov, 2023 1 commit
  27. 08 Nov, 2023 2 commits
  28. 07 Nov, 2023 1 commit
  29. 25 Oct, 2023 2 commits
    • Aryan V S's avatar
      Improve typehints and docs in `diffusers/models` (#5391) · 0c9f174d
      Aryan V S authored
      
      
      * improvement: add typehints and docs to src/diffusers/models/attention_processor.py
      
      * improvement: add typehints and docs to src/diffusers/models/vae.py
      
      * improvement: add missing docs in src/diffusers/models/vq_model.py
      
      * improvement: add typehints and docs to src/diffusers/models/transformer_temporal.py
      
      * improvement: add typehints and docs to src/diffusers/models/t5_film_transformer.py
      
      * improvement: add type hints to src/diffusers/models/unet_1d_blocks.py
      
      * improvement: add missing type hints to src/diffusers/models/unet_2d_blocks.py
      
      * fix: CI error (make fix-copies required)
      
      * fix: CI error (make fix-copies required again)
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      0c9f174d
    • AnyISalIn's avatar
      de71fa59
  30. 13 Oct, 2023 1 commit
  31. 11 Oct, 2023 1 commit
  32. 27 Sep, 2023 1 commit
  33. 18 Sep, 2023 1 commit
    • Ruoxi's avatar
      Implement `CustomDiffusionAttnProcessor2_0`. (#4604) · 16b9a57d
      Ruoxi authored
      * Implement `CustomDiffusionAttnProcessor2_0`
      
      * Doc-strings and type annotations for `CustomDiffusionAttnProcessor2_0`. (#1)
      
      * Update attnprocessor.md
      
      * Update attention_processor.py
      
      * Interops for `CustomDiffusionAttnProcessor2_0`.
      
      * Formatted `attention_processor.py`.
      
      * Formatted doc-string in `attention_processor.py`
      
      * Conditional CustomDiffusion2_0 for training example.
      
      * Remove unnecessary reference impl in comments.
      
      * Fix `save_attn_procs`.
      16b9a57d
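The "Conditional CustomDiffusion2_0 for training example" step above selects the PyTorch-2.0 attention processor only when scaled-dot-product attention is available. A hedged sketch of that selection; the class names follow the commit, but the bodies and the boolean feature probe (standing in for `hasattr(F, "scaled_dot_product_attention")`) are illustrative:

```python
class CustomDiffusionAttnProcessor:
    """Eager-attention fallback (body elided in this sketch)."""

class CustomDiffusionAttnProcessor2_0:
    """PyTorch 2.0 processor using scaled_dot_product_attention (body elided)."""

def pick_processor(has_sdpa: bool):
    # choose the fused-SDPA processor when the runtime supports it,
    # otherwise fall back to the eager implementation
    if has_sdpa:
        return CustomDiffusionAttnProcessor2_0()
    return CustomDiffusionAttnProcessor()
```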
  34. 16 Sep, 2023 1 commit
  35. 14 Sep, 2023 1 commit
  36. 11 Sep, 2023 1 commit
    • Dhruv Nair's avatar
      Lazy Import for Diffusers (#4829) · b6e0b016
      Dhruv Nair authored
      
      
      * initial commit
      
      * move modules to import struct
      
      * add dummy objects and _LazyModule
      
      * add lazy import to schedulers
      
      * clean up unused imports
      
      * lazy import on models module
      
      * lazy import for schedulers module
      
      * add lazy import to pipelines module
      
      * lazy import altdiffusion
      
      * lazy import audio diffusion
      
      * lazy import audioldm
      
      * lazy import consistency model
      
      * lazy import controlnet
      
      * lazy import dance diffusion ddim ddpm
      
      * lazy import deepfloyd
      
* lazy import kandinsky
      
      * lazy imports
      
      * lazy import semantic diffusion
      
      * lazy imports
      
      * lazy import stable diffusion
      
      * move sd output to its own module
      
      * clean up
      
      * lazy import t2iadapter
      
      * lazy import unclip
      
* lazy import versatile and vq diffusion
      
      * lazy import vq diffusion
      
      * helper to fetch objects from modules
      
      * lazy import sdxl
      
      * lazy import txt2vid
      
      * lazy import stochastic karras
      
      * fix model imports
      
      * fix bug
      
      * lazy import
      
      * clean up
      
      * clean up
      
      * fixes for tests
      
      * fixes for tests
      
      * clean up
      
      * remove import of torch_utils from utils module
      
      * clean up
      
      * clean up
      
* fix mistaken import statement
      
      * dedicated modules for exporting and loading
      
      * remove testing utils from utils module
      
* fixes from merge conflicts
      
      * Update src/diffusers/pipelines/kandinsky2_2/__init__.py
      
      * fix docs
      
      * fix alt diffusion copied from
      
      * fix check dummies
      
      * fix more docs
      
      * remove accelerate import from utils module
      
      * add type checking
      
      * make style
      
      * fix check dummies
      
      * remove torch import from xformers check
      
      * clean up error message
      
      * fixes after upstream merges
      
      * dummy objects fix
      
      * fix tests
      
      * remove unused module import
      
      ---------
      Co-authored-by: default avatarPatrick von Platen <patrick.v.platen@gmail.com>
      b6e0b016
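The lazy-import commit above moves modules behind an `_import_structure` dict and a `_LazyModule` so `import diffusers` stays cheap. A minimal stdlib-only sketch of that pattern (the class and the toy package name are illustrative, not the library's actual implementation):

```python
import importlib
import types

class LazyModule(types.ModuleType):
    """Minimal sketch of the _LazyModule pattern: attributes resolve to real
    objects only on first access, so importing the package is cheap."""

    def __init__(self, name, import_structure):
        super().__init__(name)
        # import_structure maps submodule name -> exported attribute names
        self._attr_to_module = {
            attr: mod for mod, attrs in import_structure.items() for attr in attrs
        }
        self.__all__ = list(self._attr_to_module)

    def __getattr__(self, attr):
        if attr not in self._attr_to_module:
            raise AttributeError(attr)
        module = importlib.import_module(self._attr_to_module[attr])
        value = getattr(module, attr)
        setattr(self, attr, value)  # cache so later lookups skip __getattr__
        return value

# usage sketch: expose json.dumps lazily under a toy package name
pkg = LazyModule("toy_pkg", {"json": ["dumps"]})
print(pkg.dumps({"a": 1}))
```

Nothing under `toy_pkg` is imported until `pkg.dumps` is first touched, which is the property the commit is after for a package the size of diffusers.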