1. 07 Feb, 2023 1 commit
  2. 06 Feb, 2023 1 commit
  3. 30 Dec, 2022 1 commit
  4. 08 Dec, 2022 1 commit
  5. 02 Dec, 2022 2 commits
  6. 22 Nov, 2022 1 commit
  7. 21 Nov, 2022 1 commit
  8. 16 Nov, 2022 1 commit
  9. 15 Nov, 2022 1 commit
      Add AltDiffusion (#1299) · 8a730645
      Patrick von Platen authored
      
      
      * add conversion script for vae
      
      * up
      
      * up
      
      * some fixes
      
      * add text model
      
      * use the correct config
      
      * add docs
      
      * move model in its own file (typo later fixed in the next commit)
      
      * move model in its own file
      
      * pass attention mask to text encoder
      
      * pass attn mask to uncond inputs
      
      * quality
      
      * fix image2image
      
      * add image2image in init
      
      * fix import
      
      * fix one more import
      
      * fix import, dummy objects
      
      * fix copied from
      
      * up
      
      * finish
      Co-authored-by: patil-suraj <surajp815@gmail.com>
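Several commits in the AltDiffusion PR above concern passing an attention mask to the text encoder so padding tokens are ignored. As an illustrative sketch only (the function name, pad-token id, and numpy implementation are assumptions, not the repository's actual code), a padding mask can be derived from batched token ids like this:

```python
import numpy as np

def build_attention_mask(token_ids, pad_token_id=0):
    """Return a 0/1 mask that hides padding positions from the text encoder."""
    token_ids = np.asarray(token_ids)
    return (token_ids != pad_token_id).astype(np.int64)

# Two prompts padded to the same length; the mask marks real tokens with 1.
ids = [[101, 2054, 2003, 102, 0, 0],
       [101, 2023, 102, 0, 0, 0]]
mask = build_attention_mask(ids)
```

The same mask would then be passed both for the conditional prompt and (per the "pass attn mask to uncond inputs" commit) for the unconditional inputs used in classifier-free guidance.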
  10. 13 Nov, 2022 2 commits
  11. 09 Nov, 2022 3 commits
  12. 08 Nov, 2022 1 commit
      MPS schedulers: don't use float64 (#1169) · 813744e5
      Pedro Cuenca authored
      * Schedulers: don't use float64 on mps
      
      * Test set_timesteps() on device (float schedulers).
      
      * SD pipeline: use device in set_timesteps.
      
      * SD in-painting pipeline: use device in set_timesteps.
      
      * Tests: fix mps crashes.
      
      * Skip test_load_pipeline_from_git on mps.
      
      Not compatible with float16.
      
      * Use device.type instead of str in Euler schedulers.
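The fix above works around the Apple `mps` backend's lack of float64 support: numpy's `linspace` produces float64 by default, which must be cast down before the array is turned into a tensor on that device. A minimal sketch of the idea (the helper name and the linear schedule are illustrative assumptions, not the scheduler's actual code):

```python
import numpy as np

def make_timesteps(num_train_timesteps, num_inference_steps, device_type="mps"):
    # np.linspace defaults to float64, which the mps backend cannot handle,
    # so cast to float32 before handing the array to torch on that device.
    timesteps = np.linspace(0, num_train_timesteps - 1, num_inference_steps)[::-1].copy()
    if device_type == "mps":
        timesteps = timesteps.astype(np.float32)
    return timesteps

ts = make_timesteps(1000, 4)
```

Checking `device.type` rather than the string form of the device (as the last commit notes) matters because `str(device)` can be `"mps:0"`, which a plain equality test against `"mps"` would miss.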
  13. 07 Nov, 2022 1 commit
  14. 06 Nov, 2022 1 commit
      Add multistep DPM-Solver discrete scheduler (#1132) · b4a1ed85
      Cheng Lu authored
      
      
      * add dpmsolver discrete pytorch scheduler
      
      * fix some typos in dpm-solver pytorch
      
      * add dpm-solver pytorch in stable-diffusion pipeline
      
      * add jax/flax version dpm-solver
      
      * change code style
      
      * change code style
      
      * add docs
      
      * add `add_noise` method for dpmsolver
      
      * add pytorch unit test for dpmsolver
      
      * add dummy object for pytorch dpmsolver
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * resolve the code comments
      
      * rename the file
      
      * change class name
      
      * fix code style
      
      * add auto docs for dpmsolver multistep
      
      * add more explanations for the stabilizing trick (for steps < 15)
      
      * delete the dummy file
      
      * change the API name of predict_epsilon, algorithm_type and solver_type
      
      * add compatible lists
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
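The scheduler added above implements DPM-Solver, which takes steps in half-log-SNR ("lambda") space. As a math sketch of the first-order update for a noise-prediction model (a standalone numpy illustration following the DPM-Solver paper's formula, not the scheduler's actual implementation):

```python
import numpy as np

def dpm_solver_first_order(x_t, eps, alpha_t, sigma_t, alpha_s, sigma_s):
    """One first-order DPM-Solver step from time t to the next time s (s < t).

    lambda = log(alpha / sigma) is the half log-SNR; h is the step size in
    lambda space.  Given the model's noise prediction eps, the update is
        x_s = (alpha_s / alpha_t) * x_t - sigma_s * (exp(h) - 1) * eps
    """
    lam_t = np.log(alpha_t / sigma_t)
    lam_s = np.log(alpha_s / sigma_s)
    h = lam_s - lam_t
    return (alpha_s / alpha_t) * x_t - sigma_s * np.expm1(h) * eps
```

The multistep variant in the PR reuses noise predictions from previous steps to build higher-order updates, and (per the "stabilizing trick" commit) falls back to lower order when very few inference steps are used.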
  15. 03 Nov, 2022 1 commit
      Continuation of #1035 (#1120) · 269109db
      Pedro Cuenca authored
      
      
      * remove batch size from repeat
      
      * repeat empty string if uncond_tokens is none
      
      * fix inpaint pipes
      
      * return back whitespace to pass code quality
      
      * Apply suggestions from code review
      
      * Fix typos.
      Co-authored-by: Had <had-95@yandex.ru>
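The "repeat empty string if uncond_tokens is none" commit above refers to classifier-free guidance: when the caller gives no negative prompt, the unconditional branch uses the empty string once per prompt in the batch. A minimal sketch of that dispatch (the function name is an illustrative assumption):

```python
def get_uncond_tokens(batch_size, negative_prompt=None):
    # With no negative prompt, classifier-free guidance conditions the
    # unconditional branch on the empty string, one per batch element.
    if negative_prompt is None:
        return [""] * batch_size
    if isinstance(negative_prompt, str):
        return [negative_prompt] * batch_size
    return negative_prompt

tokens = get_uncond_tokens(3)
```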
  16. 02 Nov, 2022 1 commit
      Up to 2x speedup on GPUs using memory efficient attention (#532) · 98c42134
      MatthieuTPHR authored
      
      
      * 2x speedup using memory efficient attention
      
      * remove einops dependency
      
      * Swap K, M in op instantiation
      
      * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
      
      * make xformers a soft dependency
      
      * remove one-liner functions
      
      * change one letter variable to appropriate names
      
      * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
      
      * Add memory efficient attention toggle to img2img and inpaint pipelines
      
      * Clearer management of xformers' availability
      
      * update optimizations markdown to add info about memory efficient attention
      
      * add benchmarks for TITAN RTX
      
      * More detailed explanation of how the memory-efficient attention benchmarks were run
      
      * Removing autocast from optimization markdown
      
      * import_utils: import torch only if is available
      Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
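The speedup above comes from xformers' memory-efficient attention, whose core idea is to avoid materializing the full (N x N) attention matrix at once. A toy numpy illustration of the concept — query chunking only, not xformers' actual fused kernel:

```python
import numpy as np

def attention(q, k, v):
    """Plain softmax attention; builds the full score matrix."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def chunked_attention(q, k, v, chunk=4):
    # Process queries a slice at a time so only a (chunk x N) score block
    # exists in memory at once -- the essence of memory-efficient attention.
    return np.concatenate([attention(q[i:i + chunk], k, v)
                           for i in range(0, q.shape[0], chunk)])

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
```

The chunked result is numerically identical to the naive one; the real kernel additionally tiles over keys and fuses the softmax, which is where the GPU speedup comes from.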
  17. 31 Oct, 2022 3 commits
  18. 28 Oct, 2022 1 commit
  19. 27 Oct, 2022 2 commits
  20. 26 Oct, 2022 1 commit
      minimal stable diffusion GPU memory usage with accelerate hooks (#850) · b2e2d141
      Pi Esposito authored
      * add method to enable cuda with minimal gpu usage to stable diffusion
      
      * add test to minimal cuda memory usage
      
      * ensure all models but unet are on torch.float32
      
      * move to cpu_offload along with minor internal changes to make it work
      
      * make it test against accelerate master branch
      
      * coming back, it's official: I don't know how to make it test against the master branch from accelerate
      
      * make it install accelerate from master on tests
      
      * go back to accelerate>=0.11
      
      * undo prettier formatting on yml files
      
      * undo prettier formatting on yml files again
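The offload mechanism above keeps GPU memory minimal by moving each submodule to the accelerator only for its own forward pass. A pure-Python simulation of that idea (the classes and names here are illustrative stand-ins, not accelerate's hook implementation):

```python
class FakeModule:
    """Stand-in for a pipeline submodule (text encoder, unet, vae)."""
    def __init__(self, name):
        self.name, self.device = name, "cpu"
    def to(self, device):
        self.device = device
        return self
    def forward(self, x):
        return x + 1

def run_with_offload(modules, x, device="cuda"):
    # Move each module to the accelerator only for its forward pass and
    # return it to the CPU afterwards, so at most one submodule occupies
    # GPU memory at a time -- the idea behind accelerate's offload hooks.
    for m in modules:
        m.to(device)
        x = m.forward(x)
        m.to("cpu")
    return x

mods = [FakeModule(n) for n in ("text_encoder", "unet", "vae")]
out = run_with_offload(mods, 0)
```

The trade-off is throughput: every step pays host-to-device transfer costs, so this mode suits memory-constrained setups rather than batch serving.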
  21. 25 Oct, 2022 1 commit
  22. 24 Oct, 2022 1 commit
  23. 13 Oct, 2022 3 commits
  24. 11 Oct, 2022 2 commits
  25. 06 Oct, 2022 1 commit
      allow multiple generations per prompt (#741) · c119dc4c
      Suraj Patil authored
      * compute text embeds per prompt
      
      * don't repeat uncond prompts
      
      * repeat separately
      
      * update image2image
      
      * fix repeat uncond embeds
      
      * adapt inpaint pipeline
      
      * fix uncond tokens in img2img
      
      * add tests and fix uncond embeds in img2img and inpaint pipes
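The change above encodes each prompt once and then repeats the resulting embedding per requested image, rather than re-running the text encoder for every generation. A numpy sketch of that repetition (the function name is an illustrative assumption):

```python
import numpy as np

def expand_embeddings(text_embeds, num_images_per_prompt):
    # Repeat each prompt's embedding row so one text-encoder pass can serve
    # several generations of the same prompt.
    return np.repeat(text_embeds, num_images_per_prompt, axis=0)

embeds = np.arange(6, dtype=np.float32).reshape(2, 3)  # 2 prompts, dim 3
batch = expand_embeddings(embeds, 4)                   # 8 rows: 4 per prompt
```

Note the use of `repeat` (AAAABBBB) rather than `tile` (ABABABAB): each prompt's copies stay contiguous, which is why the "repeat separately" fix treats conditional and unconditional embeddings independently.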
  26. 05 Oct, 2022 3 commits
  27. 04 Oct, 2022 1 commit
  28. 03 Oct, 2022 1 commit