1. 27 Jan, 2023 1 commit
  2. 26 Jan, 2023 1 commit
  3. 25 Jan, 2023 1 commit
  4. 17 Jan, 2023 1 commit
    • DiT Pipeline (#1806) · 37d113cc
      Kashif Rasul authored
      
      
      * added dit model
      
      * import
      
      * initial pipeline
      
      * initial convert script
      
      * initial pipeline
      
      * make style
      
      * raise ValueError
      
      * single function
      
      * rename classes
      
      * use DDIMScheduler
      
      * timesteps embedder
      
      * samples to cpu
      
      * fix var names
      
      * fix numpy type
      
      * use timesteps class for proj
      
      * fix typo
      
      * fix arg name
      
      * flip_sin_to_cos and better var names
      
      * fix C shape calculation
      
      * make style
      
      * remove unused imports
      
      * cleanup
      
      * add back patch_size
      
      * initial dit doc
      
      * typo
      
      * Update docs/source/api/pipelines/dit.mdx
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * added copyright license headers
      
      * added example usage and toc
      
      * fix variable names in asserts
      
      * remove comment
      
      * added docs
      
      * fix typo
      
      * upstream changes
      
      * set proper device for drop_ids
      
      * added initial dit pipeline test
      
      * update docs
      
      * fix imports
      
      * make fix-copies
      
      * isort
      
      * fix imports
      
      * get rid of more magic numbers
      
      * fix code when guidance is off
      
      * remove block_kwargs
      
      * cleanup script
      
      * removed to_2tuple
      
      * use FeedForward class instead of another MLP
      
      * style
      
      * work on merging DiTBlock with BasicTransformerBlock
      
      * added missing final_dropout and args to BasicTransformerBlock
      
      * use norm from block
      
      * fix arg
      
      * remove unused arg
      
      * fix call to class_embedder
      
      * use timesteps
      
      * make style
      
      * attn_output gets multiplied
      
      * removed commented code
      
      * use Transformer2D
      
      * use self.is_input_patches
      
      * fix flags
      
      * fixed conversion to use Transformer2DModel
      
      * fixes for pipeline
      
      * remove dit.py
      
      * fix timesteps device
      
      * use randn_tensor and fix fp16 inference
      
      * timesteps_emb already the right dtype
      
      * fix dit test class
      
      * fix test and style
      
      * fix norm2 usage in vq-diffusion
      
      * added author names to pipeline and ImageNet labels link
      
      * fix tests
      
      * use norm_type as string
      
      * rename dit to transformer
      
      * fix name
      
      * fix test
      
      * set norm_type = "layer" by default
      
      * fix tests
      
      * do not skip common tests
      
      * Update src/diffusers/models/attention.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * revert AdaLayerNorm API
      
      * fix norm_type name
      
      * make sure all components are in eval mode
      
      * revert norm2 API
      
      * compact
      
      * finish deprecation
      
      * add slow tests
      
      * remove @
      
      * refactor some stuff
      
      * upload
      
      * Update src/diffusers/pipelines/dit/pipeline_dit.py
      
      * finish more
      
      * finish docs
      
      * improve docs
      
      * finish docs
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: William Berman <WLBberman@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
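      For reference, a minimal usage sketch of the pipeline this commit introduces, modeled on the example added to docs/source/api/pipelines/dit.mdx (checkpoint id, seed, and step count are illustrative):

      ```python
      import torch
      from diffusers import DiTPipeline

      # load the class-conditional DiT pipeline added by this commit
      pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
      pipe = pipe.to("cuda")

      # get_label_ids maps human-readable ImageNet labels to class ids
      class_ids = pipe.get_label_ids(["white shark", "umbrella"])

      generator = torch.Generator("cuda").manual_seed(33)
      images = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator).images
      images[0].save("white_shark.png")
      ```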
  5. 16 Jan, 2023 1 commit
  6. 12 Jan, 2023 1 commit
  7. 04 Jan, 2023 1 commit
    • Improve reproducibility 2/3 (#1906) · 9b638548
      Patrick von Platen authored
      * [Repro] Correct reproducibility
      
      * up
      
      * up
      
      * up
      
      * up
      
      * need better image
      
      * allow conversion from no state dict checkpoints
      
      * up
      
      * up
      
      * up
      
      * up
      
      * check tensors
      
      * check tensors
      
      * check tensors
      
      * check tensors
      
      * next try
      
      * up
      
      * up
      
      * better name
      
      * up
      
      * up
      
      * Apply suggestions from code review
      
      * correct more
      
      * up
      
      * replace all torch randn
      
      * fix
      
      * correct
      
      * correct
      
      * finish
      
      * fix more
      
      * up
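      The practical upshot of "replace all torch randn" is that pipelines now draw their noise through a generator-aware helper (randn_tensor), so a CPU generator reproduces identical latents no matter which device the model runs on. A minimal sketch, assuming the google/ddpm-cifar10-32 checkpoint:

      ```python
      import torch
      from diffusers import DDIMPipeline

      pipe = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32").to("cuda")

      # a CPU generator seeds the initial latents deterministically,
      # even though the denoising loop itself runs on CUDA
      generator = torch.Generator(device="cpu").manual_seed(0)
      image = pipe(generator=generator, num_inference_steps=50).images[0]
      ```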
  8. 02 Jan, 2023 1 commit
  9. 30 Dec, 2022 1 commit
  10. 17 Dec, 2022 1 commit
  11. 13 Dec, 2022 1 commit
  12. 12 Dec, 2022 1 commit
  13. 05 Dec, 2022 2 commits
  14. 02 Dec, 2022 2 commits
  15. 29 Nov, 2022 1 commit
    • StableDiffusion: Decode latents separately to run larger batches (#1150) · c28d3c82
      Ilmari Heikkinen authored
      
      
      * StableDiffusion: Decode latents separately to run larger batches
      
      * Move VAE sliced decode under enable_vae_sliced_decode and vae.enable_sliced_decode
      
      * Rename sliced_decode to slicing
      
      * fix whitespace
      
      * fix quality check and repository consistency
      
      * VAE slicing tests and documentation
      
      * API doc hooks for VAE slicing
      
      * reformat vae slicing tests
      
      * Skip VAE slicing for one-image batches
      
      * Documentation tweaks for VAE slicing
      Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
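      The feature lands as an opt-in toggle on the pipeline; a short usage sketch (model id and batch size are illustrative):

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # decode latents one image at a time instead of as a single batch,
      # trading a little speed for a much lower peak VRAM during decode
      pipe.enable_vae_slicing()

      images = pipe(["a photo of an astronaut riding a horse"] * 8).images
      ```

      As the later commits note, slicing is skipped for one-image batches, and it can be turned off again with disable_vae_slicing().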
  16. 28 Nov, 2022 2 commits
  17. 25 Nov, 2022 2 commits
  18. 24 Nov, 2022 5 commits
  19. 22 Nov, 2022 1 commit
  20. 16 Nov, 2022 1 commit
  21. 15 Nov, 2022 1 commit
    • Add AltDiffusion (#1299) · 8a730645
      Patrick von Platen authored
      
      
      * add conversion script for vae
      
      * up
      
      * up
      
      * some fixes
      
      * add text model
      
      * use the correct config
      
      * add docs
      
      * move model into its own file
      
      * move model in its own file
      
      * pass attention mask to text encoder
      
      * pass attn mask to uncond inputs
      
      * quality
      
      * fix image2image
      
      * add image2image in init
      
      * fix import
      
      * fix one more import
      
      * fix import, dummy objects
      
      * fix copied from
      
      * up
      
      * finish
      Co-authored-by: patil-suraj <surajp815@gmail.com>
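      AltDiffusion replaces Stable Diffusion's CLIP text encoder with a multilingual one, which is why the commits above thread an attention mask through the text encoder. A usage sketch, assuming the BAAI/AltDiffusion-m9 checkpoint:

      ```python
      import torch
      from diffusers import AltDiffusionPipeline

      pipe = AltDiffusionPipeline.from_pretrained(
          "BAAI/AltDiffusion-m9", torch_dtype=torch.float16
      ).to("cuda")

      # the multilingual text encoder accepts non-English prompts directly
      prompt = "黑暗精灵公主，非常详细，幻想"  # "dark elf princess, highly detailed, fantasy"
      image = pipe(prompt).images[0]
      ```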
  22. 13 Nov, 2022 2 commits
  23. 09 Nov, 2022 3 commits
  24. 08 Nov, 2022 1 commit
    • MPS schedulers: don't use float64 (#1169) · 813744e5
      Pedro Cuenca authored
      * Schedulers: don't use float64 on mps
      
      * Test set_timesteps() on device (float schedulers).
      
      * SD pipeline: use device in set_timesteps.
      
      * SD in-painting pipeline: use device in set_timesteps.
      
      * Tests: fix mps crashes.
      
      * Skip test_load_pipeline_from_git on mps.
      
      Not compatible with float16.
      
      * Use device.type instead of str in Euler schedulers.
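      The underlying constraint is that PyTorch's mps backend has no float64 support, so the float schedulers keep their schedules in float32 there, and set_timesteps() accepts the target device. A small sketch (scheduler and checkpoint choice illustrative):

      ```python
      import torch
      from diffusers import EulerDiscreteScheduler

      scheduler = EulerDiscreteScheduler.from_pretrained(
          "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
      )

      # set_timesteps() now places the schedule on the requested device
      device = "mps" if torch.backends.mps.is_available() else "cpu"
      scheduler.set_timesteps(30, device=device)

      if device == "mps":
          # on mps the schedule stays in float32; float64 would raise
          assert scheduler.timesteps.dtype == torch.float32
      ```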
  25. 07 Nov, 2022 1 commit
  26. 06 Nov, 2022 1 commit
    • Add multistep DPM-Solver discrete scheduler (#1132) · b4a1ed85
      Cheng Lu authored
      
      
      * add dpmsolver discrete pytorch scheduler
      
      * fix some typos in dpm-solver pytorch
      
      * add dpm-solver pytorch in stable-diffusion pipeline
      
      * add jax/flax version dpm-solver
      
      * change code style
      
      * change code style
      
      * add docs
      
      * add `add_noise` method for dpmsolver
      
      * add pytorch unit test for dpmsolver
      
      * add dummy object for pytorch dpmsolver
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * resolve the code review comments
      
      * rename the file
      
      * change class name
      
      * fix code style
      
      * add auto docs for dpmsolver multistep
      
      * add more explanations for the stabilizing trick (for steps < 15)
      
      * delete the dummy file
      
      * change the API names of predict_epsilon, algorithm_type and solver_type
      
      * add compatible lists
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
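      The headline benefit is good samples at very low step counts; the new scheduler drops into an existing pipeline via from_config. A hedged sketch (model id and step count illustrative):

      ```python
      import torch
      from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      )

      # multistep DPM-Solver converges in roughly 20 steps, versus the
      # ~50 typically used with DDIM or PNDM
      pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
      pipe = pipe.to("cuda")

      image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
      ```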
  27. 03 Nov, 2022 1 commit
    • Continuation of #1035 (#1120) · 269109db
      Pedro Cuenca authored
      
      
      * remove batch size from repeat
      
      * repeat empty string if uncond_tokens is None
      
      * fix inpaint pipes
      
      * restore whitespace to pass code quality
      
      * Apply suggestions from code review
      
      * Fix typos.
      Co-authored-by: Had <had-95@yandex.ru>
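      For context, the fix concerns how the unconditional prompt for classifier-free guidance is built: when no negative prompt is given, the empty string is repeated once per prompt in the batch, rather than being scaled by a separate batch-size factor. A simplified sketch of that logic (variable names illustrative, not the literal diff):

      ```python
      prompts = ["a cat", "a dog"]
      negative_prompt = None

      # one unconditional token string per prompt in the batch
      if negative_prompt is None:
          uncond_tokens = [""] * len(prompts)
      else:
          uncond_tokens = [negative_prompt] * len(prompts)

      assert len(uncond_tokens) == len(prompts)
      ```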
  28. 02 Nov, 2022 1 commit
    • Up to 2x speedup on GPUs using memory efficient attention (#532) · 98c42134
      MatthieuTPHR authored
      
      
      * 2x speedup using memory efficient attention
      
      * remove einops dependency
      
      * Swap K, M in op instantiation
      
      * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
      
      * make xformers a soft dependency
      
      * remove one-liner functions
      
      * change one letter variable to appropriate names
      
      * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
      
      * Add memory efficient attention toggle to img2img and inpaint pipelines
      
      * Clearer management of xformers' availability
      
      * update optimizations markdown to add info about memory efficient attention
      
      * add benchmarks for TITAN RTX
      
      * More detailed explanation of how the mem eff benchmarks were run
      
      * Removing autocast from optimization markdown
      
      * import_utils: import torch only if is available
      Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
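      As the commits describe, the standalone MemoryEfficientCrossAttention class was folded into a single opt-in method and xformers became a soft dependency. A usage sketch (model id illustrative; requires xformers to be installed):

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # opt into xformers' memory-efficient attention;
      # raises if the soft dependency is missing
      pipe.enable_xformers_memory_efficient_attention()

      image = pipe("a photo of an astronaut riding a horse").images[0]
      ```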
  29. 31 Oct, 2022 1 commit