1. 13 Dec, 2022 1 commit
  2. 12 Dec, 2022 1 commit
  3. 05 Dec, 2022 2 commits
  4. 02 Dec, 2022 2 commits
  5. 29 Nov, 2022 1 commit
    • StableDiffusion: Decode latents separately to run larger batches (#1150) · c28d3c82
      Ilmari Heikkinen authored
      * StableDiffusion: Decode latents separately to run larger batches
      
      * Move VAE sliced decode under enable_vae_sliced_decode and vae.enable_sliced_decode
      
      * Rename sliced_decode to slicing
      
      * fix whitespace
      
      * fix quality check and repository consistency
      
      * VAE slicing tests and documentation
      
      * API doc hooks for VAE slicing
      
      * reformat vae slicing tests
      
      * Skip VAE slicing for one-image batches
      
      * Documentation tweaks for VAE slicing
      Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
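
      A rough usage sketch, not taken from the PR itself: it assumes the post-rename API is an enable_vae_slicing() method on the pipeline (per the "Rename sliced_decode to slicing" commit) and an illustrative checkpoint id:

          import torch
          from diffusers import StableDiffusionPipeline

          # Checkpoint id and dtype are illustrative, not from the PR.
          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          # Decode latents one image at a time instead of as one big batch,
          # lowering the decode-time memory peak so larger batches fit in VRAM.
          pipe.enable_vae_slicing()

          # Per the "Skip VAE slicing for one-image batches" commit, a batch of
          # one decodes exactly as before; larger batches are sliced.
          images = pipe(["a watercolor of a lighthouse"] * 8).images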
  6. 28 Nov, 2022 2 commits
  7. 25 Nov, 2022 2 commits
  8. 24 Nov, 2022 5 commits
  9. 22 Nov, 2022 1 commit
  10. 17 Nov, 2022 1 commit
  11. 15 Nov, 2022 1 commit
    • Add AltDiffusion (#1299) · 8a730645
      Patrick von Platen authored
      * add conversion script for vae
      
      * up
      
      * up
      
      * some fixes
      
      * add text model
      
      * use the correct config
      
      * add docs
      
      * move model into its own file
      
      * pass attention mask to text encoder
      
      * pass attention mask to uncond inputs
      
      * quality
      
      * fix image2image
      
      * add image2image in init
      
      * fix import
      
      * fix one more import
      
      * fix import, dummy objects
      
      * fix copied from
      
      * up
      
      * finish
      Co-authored-by: patil-suraj <surajp815@gmail.com>
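
      A hedged loading sketch: the commits add an AltDiffusion pipeline backed by a multilingual text encoder; the AltDiffusionPipeline class name and the checkpoint id below are assumptions, not stated in the commit messages:

          from diffusers import AltDiffusionPipeline

          # Checkpoint id is assumed for illustration.
          pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion").to("cuda")

          # The multilingual text encoder accepts non-English prompts directly,
          # e.g. Chinese ("a dark elf princess, highly detailed").
          image = pipe("黑暗精灵公主，非常详细").images[0]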
  12. 13 Nov, 2022 2 commits
  13. 09 Nov, 2022 3 commits
  14. 08 Nov, 2022 1 commit
    • MPS schedulers: don't use float64 (#1169) · 813744e5
      Pedro Cuenca authored
      * Schedulers: don't use float64 on mps
      
      * Test set_timesteps() on device (float schedulers).
      
      * SD pipeline: use device in set_timesteps.
      
      * SD in-painting pipeline: use device in set_timesteps.
      
      * Tests: fix mps crashes.
      
      * Skip test_load_pipeline_from_git on mps.
      
      Not compatible with float16.
      
      * Use device.type instead of str in Euler schedulers.
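
      A minimal sketch of the device-aware timestep setup these commits describe, using one of the Euler schedulers they touch (the checkpoint id is illustrative):

          import torch
          from diffusers import EulerDiscreteScheduler

          scheduler = EulerDiscreteScheduler.from_pretrained(
              "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
          )
          device = "mps" if torch.backends.mps.is_available() else "cpu"

          # Passing the device lets the scheduler build its timestep tensors
          # directly on mps in float32, since mps has no float64 support.
          scheduler.set_timesteps(50, device=device)
          assert scheduler.timesteps.dtype != torch.float64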
  15. 07 Nov, 2022 1 commit
  16. 06 Nov, 2022 1 commit
    • Add multistep DPM-Solver discrete scheduler (#1132) · b4a1ed85
      Cheng Lu authored
      * add dpmsolver discrete pytorch scheduler
      
      * fix some typos in dpm-solver pytorch
      
      * add dpm-solver pytorch in stable-diffusion pipeline
      
      * add jax/flax version dpm-solver
      
      * change code style
      
      * change code style
      
      * add docs
      
      * add `add_noise` method for dpmsolver
      
      * add pytorch unit test for dpmsolver
      
      * add dummy object for pytorch dpmsolver
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * resolve the code comments
      
      * rename the file
      
      * change class name
      
      * fix code style
      
      * add auto docs for dpmsolver multistep
      
      * add more explanations for the stabilizing trick (for steps < 15)
      
      * delete the dummy file
      
      * change the API name of predict_epsilon, algorithm_type and solver_type
      
      * add compatible lists
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
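
      A usage sketch, assuming the final class name is DPMSolverMultistepScheduler (the commits rename the file and class) and an illustrative checkpoint:

          from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

          # Swap in multistep DPM-Solver, reusing the existing scheduler config.
          pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
          pipe = pipe.to("cuda")

          # DPM-Solver needs far fewer steps than ancestral samplers; the commits
          # mention a stabilizing trick specifically for runs under ~15 steps.
          image = pipe("a photo of a red panda", num_inference_steps=20).images[0]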
  17. 03 Nov, 2022 1 commit
    • Continuation of #1035 (#1120) · 269109db
      Pedro Cuenca authored
      * remove batch size from repeat
      
      * repeat empty string if uncond_tokens is none
      
      * fix inpaint pipes
      
      * restore whitespace to pass code quality
      
      * Apply suggestions from code review
      
      * Fix typos.
      Co-authored-by: Had <had-95@yandex.ru>
  18. 02 Nov, 2022 1 commit
    • Up to 2x speedup on GPUs using memory efficient attention (#532) · 98c42134
      MatthieuTPHR authored
      * 2x speedup using memory efficient attention
      
      * remove einops dependency
      
      * Swap K, M in op instantiation
      
      * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
      
      * make xformers a soft dependency
      
      * remove one-liner functions
      
      * change one-letter variables to appropriate names
      
      * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
      
      * Add memory efficient attention toggle to img2img and inpaint pipelines
      
      * Clearer management of xformers' availability
      
      * update optimizations markdown to add info about memory efficient attention
      
      * add benchmarks for TITAN RTX
      
      * More detailed explanation of how the memory-efficient attention benchmarks were run
      
      * Removing autocast from optimization markdown
      
      * import_utils: import torch only if is available
      Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
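
      A usage sketch, assuming the method name the commits settle on, enable_xformers_memory_efficient_attention(), and that the soft dependency xformers is installed (the checkpoint id is illustrative):

          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          # xformers is a soft dependency: this call raises if it is missing.
          pipe.enable_xformers_memory_efficient_attention()

          image = pipe("an oil painting of a windmill").images[0]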
  19. 31 Oct, 2022 3 commits
  20. 28 Oct, 2022 1 commit
  21. 27 Oct, 2022 2 commits
  22. 26 Oct, 2022 1 commit
    • minimal stable diffusion GPU memory usage with accelerate hooks (#850) · b2e2d141
      Pi Esposito authored
      * add method to enable cuda with minimal gpu usage to stable diffusion
      
      * add test for minimal cuda memory usage
      
      * ensure all models but unet are on torch.float32
      
      * move to cpu_offload along with minor internal changes to make it work
      
      * make it test against accelerate master branch
      
      * coming back, it's official: I don't know how to make it test against the master branch from accelerate
      
      * make it install accelerate from master on tests
      
      * go back to accelerate>=0.11
      
      * undo prettier formatting on yml files
      
      * undo prettier formatting on yml files again
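
      A hedged sketch of the offloading this PR wires up through accelerate's cpu_offload hooks; the public method name enable_sequential_cpu_offload() and the checkpoint id are assumptions:

          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

          # Instead of pipe.to("cuda"), let accelerate place the weights: each
          # submodule moves to the GPU only while it runs, then returns to CPU,
          # keeping peak VRAM usage minimal at some cost in speed.
          pipe.enable_sequential_cpu_offload()

          image = pipe("a pencil sketch of a lighthouse").images[0]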
  23. 25 Oct, 2022 1 commit
  24. 24 Oct, 2022 1 commit
  25. 13 Oct, 2022 2 commits