1. 06 Dec, 2023 1 commit
   • [feat] allow SDXL pipeline to run with fused QKV projections (#6030) · a2bc2e14
     Sayak Paul authored
     * debug
     * from step
     * print
     * turn sigma a list
     * make str
     * init_noise_sigma
     * comment
     * remove prints
     * feat: introduce fused projections
     * change to a better name
     * no grad
     * device.
     * device
     * dtype
     * okay
     * print
     * more print
     * fix: unbind -> split
     * fix: qkv >-> k
     * enable disable
     * apply attention processor within the method
     * attn processors
     * _enable_fused_qkv_projections
     * remove print
     * add fused projection to vae
     * add todos.
     * add: documentation and cleanups.
     * add: test for qkv projection fusion.
     * relax assertions.
     * relax further
     * fix: docs
     * fix-copies
     * correct error message.
     * Empty-Commit
     * better conditioning on disable_fused_qkv_projections
     * check
     * check processor
     * bfloat16 computation.
     * check latent dtype
     * style
     * remove copy temporarily
     * cast latent to bfloat16
     * fix: vae -> self.vae
     * remove print.
     * add _change_to_group_norm_32
     * comment out stuff that didn't work
     * Apply suggestions from code review
       Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
     * reflect patrick's suggestions.
     * fix imports
     * fix: disable call.
     * fix more
     * fix device and dtype
     * fix conditions.
     * fix more
     * Apply suggestions from code review
       Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

     ---------
     Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
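     For context, a minimal usage sketch of the feature this PR adds (not taken from the PR itself; the model ID and prompt are illustrative, while fuse_qkv_projections/unfuse_qkv_projections are the pipeline-level methods introduced here):

     ```python
     import torch
     from diffusers import StableDiffusionXLPipeline

     pipe = StableDiffusionXLPipeline.from_pretrained(
         "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
     ).to("cuda")

     # Concatenate the Q, K, V projection weights in the UNet (and VAE) so
     # attention runs one large matmul instead of three; undo the fusion
     # later with pipe.unfuse_qkv_projections().
     pipe.fuse_qkv_projections()

     image = pipe("a photo of an astronaut riding a horse").images[0]  # illustrative prompt
     ```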
2. 18 Sep, 2023 1 commit
   • Implement `CustomDiffusionAttnProcessor2_0`. (#4604) · 16b9a57d
     Ruoxi authored
     * Implement `CustomDiffusionAttnProcessor2_0`
     * Doc-strings and type annotations for `CustomDiffusionAttnProcessor2_0`. (#1)
     * Update attnprocessor.md
     * Update attention_processor.py
     * Interops for `CustomDiffusionAttnProcessor2_0`.
     * Formatted `attention_processor.py`.
     * Formatted doc-string in `attention_processor.py`
     * Conditional CustomDiffusion2_0 for training example.
     * Remove unnecessary reference impl in comments.
     * Fix `save_attn_procs`.
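     A minimal sketch of the conditional selection mentioned in "Conditional CustomDiffusion2_0 for training example": pick the SDPA-backed processor on PyTorch >= 2.0 and fall back to the eager implementation otherwise. The hidden_size and cross_attention_dim values are illustrative; in practice they come from the attention block being wrapped.

     ```python
     import torch
     from diffusers.models.attention_processor import (
         CustomDiffusionAttnProcessor,
         CustomDiffusionAttnProcessor2_0,
     )

     # Use the scaled_dot_product_attention variant when available (PyTorch 2.x).
     attn_cls = (
         CustomDiffusionAttnProcessor2_0
         if hasattr(torch.nn.functional, "scaled_dot_product_attention")
         else CustomDiffusionAttnProcessor
     )

     processor = attn_cls(
         train_kv=True,            # learn new key/value projections for the concept
         train_q_out=False,
         hidden_size=320,          # illustrative; varies per attention block
         cross_attention_dim=768,  # SD v1.x text-encoder width (illustrative)
     )
     ```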
3. 06 Jun, 2023 1 commit
   • [LoRA] feat: add lora attention processor for pt 2.0. (#3594) · 8669e831
     Sayak Paul authored
     * feat: add lora attention processor for pt 2.0.
     * explicit context manager for SDPA.
     * switch to flash attention
     * make shapes compatible to work optimally with SDPA.
     * fix: circular import problem.
     * explicitly specify the flash attention kernel in sdpa
     * fall back to efficient attention context manager.
     * remove explicit dispatch.
     * fix: removed processor.
     * fix: remove optional from type annotation.
     * feat: make changes regarding LoRAAttnProcessor2_0.
     * remove confusing warning.
     * formatting.
     * relax tolerance for PT 2.0
     * fix: loading message.
     * remove unnecessary logging.
     * add: entry to the docs.
     * add: network_alpha argument.
     * relax tolerance.
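     A minimal sketch, assuming the `LoRAAttnProcessor2_0` constructor added in this PR (the `network_alpha` argument appears in the commit list above); all values are illustrative:

     ```python
     from diffusers.models.attention_processor import LoRAAttnProcessor2_0

     # An SDPA-based attention processor carrying low-rank adapter weights.
     proc = LoRAAttnProcessor2_0(
         hidden_size=320,          # width of the attention block (illustrative)
         cross_attention_dim=768,  # None for self-attention layers
         rank=4,                   # low-rank update dimension
         network_alpha=4,          # kohya-style scaling for the LoRA update
     )
     ```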
4. 26 May, 2023 1 commit
   • [docs] Add AttnProcessor to docs (#3474) · 7948db81
     Steven Liu authored
     * add attnprocessor to docs
     * fix path to class
     * create separate page for attnprocessors
     * fix path
     * fix path for real
     * fill in docstrings
     * apply feedback
     * apply feedback
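     A minimal sketch of the API these docs describe: attention processors are swapped on a model via `set_attn_processor`. The model ID is illustrative.

     ```python
     from diffusers import UNet2DConditionModel
     from diffusers.models.attention_processor import AttnProcessor2_0

     unet = UNet2DConditionModel.from_pretrained(
         "runwayml/stable-diffusion-v1-5", subfolder="unet"  # illustrative checkpoint
     )
     # Replace every attention processor with the SDPA-based default (PyTorch 2.x).
     unet.set_attn_processor(AttnProcessor2_0())
     ```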