1. 28 Feb, 2024 1 commit
  2. 02 Oct, 2023 1 commit
  3. 11 Apr, 2023 1 commit
    • Fix config prints and save, load of pipelines (#2849) · 8b451eb6
      Patrick von Platen authored
      * [Config] Fix config prints and save, load
      
      * Only use potential nn.Modules for dtype and device
      
      * Correct vae image processor
      
      * make sure in_channels is not accessed directly
      
      * make sure in_channels is only accessed via config
      
      * Make sure schedulers only access config attributes
      
      * Make sure to access config in SAG
      
      * Fix vae processor and make style
      
      * add tests
      
      * uP
      
      * make style
      
      * Fix more naming issues
      
      * Final fix with vae config
      
      * change more
      8b451eb6
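
For context, this change standardizes reading model hyperparameters through the frozen `config` object rather than as direct module attributes. A minimal sketch, assuming a Stable Diffusion checkpoint (the model ID and printed fields are only examples, not taken from the PR):

```python
from diffusers import StableDiffusionPipeline

# Example checkpoint; any Stable Diffusion pipeline behaves the same way.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Preferred after this change: read hyperparameters via `.config`
print(pipe.unet.config.in_channels)               # instead of pipe.unet.in_channels
print(pipe.vae.config.latent_channels)
print(pipe.scheduler.config.num_train_timesteps)
```
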
  4. 23 Mar, 2023 1 commit
  5. 14 Feb, 2023 1 commit
  6. 07 Feb, 2023 1 commit
  7. 06 Feb, 2023 1 commit
  8. 30 Dec, 2022 1 commit
  9. 22 Nov, 2022 1 commit
  10. 07 Nov, 2022 2 commits
  11. 06 Nov, 2022 1 commit
    • Add multistep DPM-Solver discrete scheduler (#1132) · b4a1ed85
      Cheng Lu authored
      * add dpmsolver discrete pytorch scheduler
      
      * fix some typos in dpm-solver pytorch
      
      * add dpm-solver pytorch in stable-diffusion pipeline
      
      * add jax/flax version dpm-solver
      
      * change code style
      
      * change code style
      
      * add docs
      
      * add `add_noise` method for dpmsolver
      
      * add pytorch unit test for dpmsolver
      
      * add dummy object for pytorch dpmsolver
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * resolve the code comments
      
      * rename the file
      
      * change class name
      
      * fix code style
      
      * add auto docs for dpmsolver multistep
      
      * add more explanations for the stabilizing trick (for steps < 15)
      
      * delete the dummy file
      
      * change the API name of predict_epsilon, algorithm_type and solver_type
      
      * add compatible lists
      Co-authored-by: default avatarSuraj Patil <surajp815@gmail.com>
      b4a1ed85
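
A hedged sketch of using the new scheduler with a pipeline; the model ID and step count are illustrative, not from the PR (the class ships as `DPMSolverMultistepScheduler` in current diffusers):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# DPM-Solver is a fast high-order solver, so ~20 steps are usually enough.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
```
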
  12. 03 Nov, 2022 1 commit
    • Continuation of #1035 (#1120) · 269109db
      Pedro Cuenca authored
      * remove batch size from repeat
      
      * repeat empty string if uncond_tokens is none
      
      * fix inpaint pipes
      
      * return back whitespace to pass code quality
      
      * Apply suggestions from code review
      
      * Fix typos.
      Co-authored-by: Had <had-95@yandex.ru>
      269109db
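
For context, this fix concerns how the unconditional branch of classifier-free guidance is batched: when no negative prompt is given, the empty string is encoded once and repeated rather than re-encoded per prompt. A rough, illustrative sketch (not the pipeline's exact code):

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompts = ["a cat", "a dog"]
cond_ids = tokenizer(prompts, padding="max_length", max_length=77,
                     truncation=True, return_tensors="pt").input_ids
cond = text_encoder(cond_ids)[0]

# uncond_tokens is None -> encode the empty string once, then repeat it
uncond_ids = tokenizer([""], padding="max_length", max_length=77,
                       return_tensors="pt").input_ids
uncond = text_encoder(uncond_ids)[0].repeat(len(prompts), 1, 1)

text_embeddings = torch.cat([uncond, cond])  # batch for classifier-free guidance
```
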
  13. 02 Nov, 2022 1 commit
    • Up to 2x speedup on GPUs using memory efficient attention (#532) · 98c42134
      MatthieuTPHR authored
      * 2x speedup using memory efficient attention
      
      * remove einops dependency
      
      * Swap K, M in op instantiation
      
      * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
      
      * make xformers a soft dependency
      
      * remove one-liner functions
      
      * change one letter variable to appropriate names
      
      * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
      
      * Add memory efficient attention toggle to img2img and inpaint pipelines
      
      * Clearer management of xformers' availability
      
      * update optimizations markdown to add info about memory efficient attention
      
      * add benchmarks for TITAN RTX
      
      * More detailed explanation of how the mem eff benchmarks were run
      
      * Removing autocast from optimization markdown
      
      * import_utils: import torch only if is available
      Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
      98c42134
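
A minimal sketch of the opt-in toggle added here; it assumes the optional `xformers` package is installed and a CUDA GPU is available (model ID is just an example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.enable_xformers_memory_efficient_attention()   # speedup depends on GPU and resolution
image = pipe("a fantasy landscape").images[0]
# pipe.disable_xformers_memory_efficient_attention()  # to switch back
```
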
  14. 31 Oct, 2022 3 commits
  15. 28 Oct, 2022 1 commit
  16. 27 Oct, 2022 2 commits
  17. 26 Oct, 2022 1 commit
    • minimal stable diffusion GPU memory usage with accelerate hooks (#850) · b2e2d141
      Pi Esposito authored
      * add method to enable cuda with minimal gpu usage to stable diffusion
      
      * add test to minimal cuda memory usage
      
      * ensure all models but unet are on torch.float32
      
      * move to cpu_offload along with minor internal changes to make it work
      
      * make it test against accelerate master branch
      
      * coming back, it's official: I don't know how to make it test against the master branch from accelerate
      
      * make it install accelerate from master on tests
      
      * go back to accelerate>=0.11
      
      * undo prettier formatting on yml files
      
      * undo prettier formatting on yml files again
      b2e2d141
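
Roughly what this enables, sketched with today's API name `enable_sequential_cpu_offload` (the method name in the original PR may have differed); it assumes `accelerate` is installed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Submodules are moved to the GPU only while they run; do not call .to("cuda") first.
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of a corgi", num_inference_steps=30).images[0]
```
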
  18. 25 Oct, 2022 1 commit
  19. 24 Oct, 2022 1 commit
  20. 13 Oct, 2022 3 commits
  21. 11 Oct, 2022 2 commits
  22. 06 Oct, 2022 1 commit
    • allow multiple generations per prompt (#741) · c119dc4c
      Suraj Patil authored
      * compute text embeds per prompt
      
      * don't repeat uncond prompts
      
      * repeat separately
      
      * update image2image
      
      * fix repeat uncond embeds
      
      * adapt inpaint pipeline
      
      * fix uncond tokens in img2img
      
      * add tests and fix uncond embeds in img2img and inpaint pipe
      c119dc4c
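
A hedged example of the resulting user-facing behavior: several images per prompt in one call, with text embeddings computed once per prompt and then repeated (model ID illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

out = pipe(["a watercolor fox", "a pixel-art castle"], num_images_per_prompt=2)
print(len(out.images))  # 2 prompts x 2 images each = 4 images
```
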
  23. 05 Oct, 2022 3 commits
  24. 04 Oct, 2022 1 commit
  25. 03 Oct, 2022 2 commits
  26. 02 Oct, 2022 1 commit
  27. 30 Sep, 2022 1 commit
    • Optimize Stable Diffusion (#371) · 9ebaea54
      Nouamane Tazi authored
      * initial commit
      
      * make UNet stream capturable
      
      * try to fix noise_pred value
      
      * remove cuda graph and keep NB
      
      * non blocking unet with PNDMScheduler
      
      * make timesteps np arrays for pndm scheduler
      because lists don't get formatted to tensors in `self.set_format`
      
      * make max async in pndm
      
      * use channel last format in unet
      
      * avoid moving timesteps device in each unet call
      
      * avoid memcpy op in `get_timestep_embedding`
      
      * add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
      
      * update TODO
      
      * replace `channels_last` kwarg with `memory_format` for more generality
      
      * revert the channels_last changes to leave it for another PR
      
      * remove non_blocking when moving input ids to device
      
      * remove blocking from all .to() operations at beginning of pipeline
      
      * fix merging
      
      * fix merging
      
      * model can run in other precisions without autocast
      
      * attn refactoring
      
      * Revert "attn refactoring"
      
      This reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb.
      
      * remove restriction to run conv_norm in fp32
      
      * use `baddbmm` instead of `matmul` in attention for better perf
      
      * removing all reshapes to test perf
      
      * Revert "removing all reshapes to test perf"
      
      This reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd.
      
      * add shapes comments
      
      * hardcode what's needed for jitting
      
      * Revert "hardcore whats needed for jitting"
      
      This reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a.
      
      * Revert "remove restriction to run conv_norm in fp32"
      
      This reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9.
      
      * revert using baddmm in attention's forward
      
      * cleanup comment
      
      * remove restriction to run conv_norm in fp32. no quality loss was noticed
      
      This reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213.
      
      * add more optimizations techniques to docs
      
      * Revert "add shapes comments"
      
      This reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4.
      
      * apply suggestions
      
      * make quality
      
      * apply suggestions
      
      * styling
      
      * `scheduler.timesteps` are now arrays so we don't need .to()
      
      * remove useless .type()
      
      * use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
      
      * move scheduler timesteps to correct device if tensors
      
      * add device to `set_timesteps` in LMSD scheduler
      
      * `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
      
      * quick fix
      
      * styling
      
      * remove kwargs from schedulers `set_timesteps`
      
      * revert to using max in K-LMS inpaint pipeline test
      
      * Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it"
      
      This reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899.
      
      * move timesteps to correct device before loop in SD pipeline
      
      * apply previous fix to other SD pipelines
      
      * UNet now accepts tensor timesteps even on the wrong device, to avoid errors
      - it shouldn't affect performance if timesteps are already on the correct device
      - it does slow down performance if they're on the wrong device
      
      * fix pipeline when timesteps are arrays with strides
      9ebaea54
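
Two user-visible outcomes of this work, sketched under assumptions (example model ID; scheduler API as in current diffusers): fp16 inference no longer needs an `autocast` context, and schedulers that accept a `device` argument can place their timesteps directly on the GPU.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# No `with torch.autocast("cuda"):` wrapper is needed for fp16 inference any more.
image = pipe("a cyberpunk street at night", num_inference_steps=50).images[0]

# Schedulers that accept it can put their timesteps on the right device up front.
pipe.scheduler.set_timesteps(50, device="cuda")
print(pipe.scheduler.timesteps.device)
```
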
  28. 27 Sep, 2022 3 commits
    • [Pytorch] Pytorch only schedulers (#534) · bd8df2da
      Kashif Rasul authored
      * pytorch only schedulers
      
      * fix style
      
      * remove match_shape
      
      * pytorch only ddpm
      
      * remove SchedulerMixin
      
      * remove numpy from karras_ve
      
      * fix types
      
      * remove numpy from lms_discrete
      
      * remove numpy from pndm
      
      * fix typo
      
      * remove mixin and numpy from sde_vp and ve
      
      * remove remaining tensor_format
      
      * fix style
      
      * sigmas has to be torch tensor
      
      * removed set_format in readme
      
      * remove set format from docs
      
      * remove set_format from pipelines
      
      * update tests
      
      * fix typo
      
      * continue to use mixin
      
      * fix imports
      
      * removed unused imports
      
      * match shape instead of assuming image shapes
      
      * remove import typo
      
      * update call to add_noise
      
      * use math instead of numpy
      
      * fix t_index
      
      * removed commented out numpy tests
      
      * timesteps needs to be discrete
      
      * cast timesteps to int in flax scheduler too
      
      * fix device mismatch issue
      
      * small fix
      
      * Update src/diffusers/schedulers/scheduling_pndm.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      bd8df2da
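
A minimal sketch of the all-PyTorch scheduler interface after this change: no `tensor_format`/`set_format`, and `add_noise`/`step` operate on torch tensors with discrete integer timesteps (the `prev_sample` output field reflects the current API):

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

sample = torch.randn(1, 3, 32, 32)
noise = torch.randn_like(sample)
timesteps = torch.tensor([999])           # discrete integer timesteps

noisy = scheduler.add_noise(sample, noise, timesteps)          # all torch, no numpy
out = scheduler.step(model_output=noise, timestep=999, sample=noisy)
print(out.prev_sample.shape)
```
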
    • Remove deprecated `torch_device` kwarg (#623) · b671cb09
      Pedro Cuenca authored
      * Remove deprecated `torch_device` kwarg.
      
      * Remove unused imports.
      b671cb09
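
A short sketch of the migration this completes; the exact old call signature is from memory and should be treated as approximate:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Deprecated style removed here (approximate): pipe("a prompt", torch_device="cuda")
pipe = pipe.to("cuda")   # standard device placement instead
image = pipe("a prompt").images[0]
```
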
    • Warning for too long prompts in DiffusionPipelines (Resolve #447) (#472) · f7ebe569
      Yuta Hayashibe authored
      * Return encoded texts by DiffusionPipelines
      
      * Updated README to show how to use encoded_text_input
      
      * Reverted examples in README.md
      
      * Reverted all
      
      * Warning for long prompts
      
      * Fix bugs
      
      * Formatted
      f7ebe569
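
For context, Stable Diffusion's CLIP text encoder only sees `model_max_length` (77) tokens, so longer prompts are truncated; after this change the pipeline warns about the dropped part. A hedged sketch of checking the length up front (prompt and checkpoint are illustrative):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a hyper detailed painting of " + ", ".join(["ornate clockwork"] * 60)
ids = tokenizer(prompt).input_ids
if len(ids) > tokenizer.model_max_length:    # 77 for this tokenizer
    print(f"Prompt is {len(ids)} tokens; anything past "
          f"{tokenizer.model_max_length} will be truncated by the pipeline.")
```
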