1. 27 Feb, 2024 1 commit
    • Add EDMEulerScheduler (#7109) · f57e7bd9
      Suraj Patil authored
      * Add EDMEulerScheduler
      
      * address review comments
      
      * fix import
      
      * fix test
      
      * add tests
      
      * add co-author
      
      Co-authored-by: @dg845 <dgu8957@gmail.com>
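
      A hedged usage sketch for the new scheduler class; the constructor arguments and values shown follow the EDM parametrization and are assumptions to check against the released API, not taken from this log:

      ```python
      # Instantiate the EDM-formulation Euler scheduler directly and inspect its
      # noise schedule. Argument values are illustrative.
      from diffusers import EDMEulerScheduler

      scheduler = EDMEulerScheduler(sigma_min=0.002, sigma_max=80.0, sigma_data=0.5)
      scheduler.set_timesteps(num_inference_steps=30)
      print(scheduler.timesteps[:5])  # timesteps derived from the sigma schedule
      print(scheduler.sigmas[:5])     # descending EDM/Karras-style sigmas
      ```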
  2. 08 Feb, 2024 1 commit
  3. 01 Feb, 2024 1 commit
  4. 26 Jan, 2024 1 commit
  5. 20 Dec, 2023 1 commit
    • EulerAncestral add `rescale_betas_zero_snr` (#6187) · 457abdf2
      Beinsezii authored
      
      
      * EulerAncestral add `rescale_betas_zero_snr`
      
      Uses the same infinite sigma fix as EulerDiscrete. Interestingly, the
      ancestral version had the opposite problem: too much contrast instead of
      too little.
      
      * UT for EulerAncestral `rescale_betas_zero_snr`
      
      * EulerAncestral upcast samples during step()
      
      It helps this scheduler too, particularly when the model is using bf16.
      
      While the noise dtype is still the model's, it is automatically upcast
      for the add, so all it affects is determinism.
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
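
      A hedged sketch of opting into the new flag when rebuilding the scheduler from an existing pipeline's config (the checkpoint id is illustrative):

      ```python
      # Enable zero-terminal-SNR beta rescaling on EulerAncestralDiscreteScheduler.
      from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

      pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
      pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(
          pipe.scheduler.config, rescale_betas_zero_snr=True
      )
      ```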
  6. 15 Dec, 2023 1 commit
  7. 20 Nov, 2023 1 commit
  8. 11 Sep, 2023 1 commit
    • Lazy Import for Diffusers (#4829) · b6e0b016
      Dhruv Nair authored
      
      
      * initial commit
      
      * move modules to import struct
      
      * add dummy objects and _LazyModule
      
      * add lazy import to schedulers
      
      * clean up unused imports
      
      * lazy import on models module
      
      * lazy import for schedulers module
      
      * add lazy import to pipelines module
      
      * lazy import altdiffusion
      
      * lazy import audio diffusion
      
      * lazy import audioldm
      
      * lazy import consistency model
      
      * lazy import controlnet
      
      * lazy import dance diffusion ddim ddpm
      
      * lazy import deepfloyd
      
      * lazy import kandinsky
      
      * lazy imports
      
      * lazy import semantic diffusion
      
      * lazy imports
      
      * lazy import stable diffusion
      
      * move sd output to its own module
      
      * clean up
      
      * lazy import t2iadapter
      
      * lazy import unclip
      
      * lazy import versatile and vq diffusion
      
      * lazy import vq diffusion
      
      * helper to fetch objects from modules
      
      * lazy import sdxl
      
      * lazy import txt2vid
      
      * lazy import stochastic karras
      
      * fix model imports
      
      * fix bug
      
      * lazy import
      
      * clean up
      
      * clean up
      
      * fixes for tests
      
      * fixes for tests
      
      * clean up
      
      * remove import of torch_utils from utils module
      
      * clean up
      
      * clean up
      
      * fix mistake import statement
      
      * dedicated modules for exporting and loading
      
      * remove testing utils from utils module
      
      * fixes from merge conflicts
      
      * Update src/diffusers/pipelines/kandinsky2_2/__init__.py
      
      * fix docs
      
      * fix alt diffusion copied from
      
      * fix check dummies
      
      * fix more docs
      
      * remove accelerate import from utils module
      
      * add type checking
      
      * make style
      
      * fix check dummies
      
      * remove torch import from xformers check
      
      * clean up error message
      
      * fixes after upstream merges
      
      * dummy objects fix
      
      * fix tests
      
      * remove unused module import
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
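
      A self-contained sketch of the lazy-import idea this PR applies to the package `__init__` modules. It illustrates the general technique, not the actual `_LazyModule` implementation; the module and attribute names are examples:

      ```python
      # Defer the real submodule import until an attribute is first accessed.
      import importlib
      import types


      class LazyModule(types.ModuleType):
          def __init__(self, name, import_structure):
              super().__init__(name)
              # Map each exported attribute to the submodule that defines it.
              self._attr_to_module = {
                  attr: mod for mod, attrs in import_structure.items() for attr in attrs
              }

          def __getattr__(self, attr):
              if attr not in self._attr_to_module:
                  raise AttributeError(attr)
              module = importlib.import_module(f"{self.__name__}.{self._attr_to_module[attr]}")
              value = getattr(module, attr)
              setattr(self, attr, value)  # cache so later lookups bypass __getattr__
              return value


      lazy = LazyModule("diffusers", {"schedulers": ["EulerDiscreteScheduler"]})
      EulerDiscreteScheduler = lazy.EulerDiscreteScheduler  # submodule imported only here
      ```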
  9. 23 Aug, 2023 1 commit
  10. 09 Aug, 2023 1 commit
    • [docs] Clean scheduler api (#4204) · 16ad13b6
      Steven Liu authored
      * clean scheduler mixin
      
      * up to dpmsolvermultistep
      
      * finish cleaning
      
      * first draft
      
      * fix overview table
      
      * apply feedback
      
      * update reference code
  11. 06 Jul, 2023 1 commit
    • Add Shap-E (#3742) · 45f6d52b
      YiYi Xu authored
      
      
      * refactor prior_transformer
      
      adding conversion script
      
      add pipeline
      
      add step_index from pipeline, + remove permute
      
      add zero pad token
      
      remove copy from statement for betas_for_alpha_bar function
      
      * add
      
      * add
      
      * update conversion script for renderer model
      
      * refactor camera a little bit
      
      * clean up
      
      * style
      
      * fix copies
      
      * Update src/diffusers/schedulers/scheduling_heun_discrete.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * alpha_transform_type
      
      * remove step_index argument
      
      * remove get_sigmas_karras
      
      * remove _yiyi_sigma_to_t
      
      * move the rescale prompt_embeds from prior_transformer to pipeline
      
      * replace baddbmm with einsum to match original repo
      
      * Revert "replace baddbmm with einsum to match origial repo"
      
      This reverts commit 3f6b435d65dad3e5514cad2f5dd9e4419ca78e0b.
      
      * add step_index to scale_model_input
      
      * Revert "move the rescale prompt_embeds from prior_transformer to pipeline"
      
      This reverts commit 5b5a8e6be918fefd114a2945ed89d8e8fa8be21b.
      
      * move rescale from prior_transformer to pipeline
      
      * correct step_index in scale_model_input
      
      * remove print lines
      
      * refactor prior - reduce arguments
      
      * make style
      
      * add prior_image
      
      * arg embedding_proj_norm -> norm_embedding_proj
      
      * add pre-norm for proj_embedding
      
      * move rescale prompt from pipeline to _encode_prompt
      
      * add img2img pipeline
      
      * style
      
      * copies
      
      * Update src/diffusers/models/prior_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      
      add arg: encoder_hid_proj
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      
      add new config: norm_in_type
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      
      add new config: added_emb_type
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      
      rename out_dim -> clip_embed_dim
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      
      rename config: out_dim -> clip_embed_dim
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/models/prior_transformer.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * finish refactor prior_tranformer
      
      * make style
      
      * refactor renderer
      
      * fix
      
      * make style
      
      * refactor img2img
      
      * remove params_proj
      
      * add test
      
      * add upcast_softmax to prior_transformer
      
      * enable num_images_per_prompt, add save_gif utility
      
      * add
      
      * add fast test
      
      * make style
      
      * add slow test
      
      * style
      
      * add test for img2img
      
      * refactor
      
      * enable batching
      
      * style
      
      * refactor scheduler
      
      * update test
      
      * style
      
      * attempt to solve batch related tests timeout
      
      * add doc
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * hardcode rendering related config
      
      * update betas_for_alpha_bar on ddpm_scheduler
      
      * fix copies
      
      * fix
      
      * export_to_gif
      
      * style
      
      * second attempt to speed up batching tests
      
      * add doc page to index
      
      * Remove intermediate clipping
      
      * 3rd attempt to speed up batching tests
      
      * Remove time index
      
      * simplify scheduler
      
      * Fix more
      
      * Fix more
      
      * fix more
      
      * make style
      
      * fix schedulers
      
      * fix some more tests
      
      * finish
      
      * add one more test
      
      * Apply suggestions from code review
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * style
      
      * apply feedbacks
      
      * style
      
      * fix copies
      
      * add one example
      
      * style
      
      * add example for img2img
      
      * fix doc
      
      * fix more doc strings
      
      * size -> frame_size
      
      * style
      
      * update doc
      
      * style
      
      * fix on doc
      
      * update repo name
      
      * improve the usage example in shap-e img2img
      
      * add usage examples in the shap-e docs.
      
      * consolidate examples.
      
      * minor fix.
      
      * update doc
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * remove upcast
      
      * Make sure background is white
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
      
      * Apply suggestions from code review
      
      * Finish
      
      * Apply suggestions from code review
      
      * Update src/diffusers/pipelines/shap_e/pipeline_shap_e.py
      
      * Make style
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
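
      A hedged usage sketch for the Shap-E pipeline and the `export_to_gif` helper introduced here; the repo id, prompt, and parameter values follow the docs added in this PR but are illustrative:

      ```python
      import torch
      from diffusers import ShapEPipeline
      from diffusers.utils import export_to_gif

      pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16).to("cuda")
      images = pipe(
          "a shark",
          guidance_scale=15.0,
          num_inference_steps=64,
          frame_size=64,  # renamed from `size` during review (see above)
      ).images
      export_to_gif(images[0], "shark_3d.gif")  # each output is a sequence of rendered frames
      ```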
  12. 05 Jul, 2023 1 commit
    • Add `timestep_spacing` and `steps_offset` to schedulers (#3947) · 07c9a08e
      Pedro Cuenca authored
      
      
      * Add timestep_spacing to DDPM, LMSDiscrete, PNDM.
      
      * Remove spurious line.
      
      * More easy schedulers.
      
      * Add `linspace` to DDIM
      
      * Noise sigma for `trailing`.
      
      * Add timestep_spacing to DEISMultistepScheduler.
      
      Not sure the range is the way it was intended.
      
      * Fix: remove line used to debug.
      
      * Support timestep_spacing in DPMSolverMultistep, DPMSolverSDE, UniPC
      
      * Fix: convert to numpy.
      
      * Use sched. defaults when instantiating from_config
      
      For params not present in the original configuration.
      
      This makes it possible to switch pipeline schedulers even if they use
      different timestep_spacing (or any other param).
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Missing args in DPMSolverMultistep
      
      * Test: default args not in config
      
      * Style
      
      * Fix scheduler name in test
      
      * Remove duplicated entries
      
      * Add test for solver_type
      
      This test currently fails in main. When switching from DEIS to UniPC,
      solver_type is "logrho" (the default value from DEIS), which gets
      translated to "bh1" by UniPC. This is different to the default value for
      UniPC: "bh2". This is where the translation happens: https://github.com/huggingface/diffusers/blob/36d22d0709dc19776e3016fb3392d0f5578b0ab2/src/diffusers/schedulers/scheduling_unipc_multistep.py#L171
      
      
      
      * UniPC: use same default for solver_type
      
      Fixes a bug when switching to UniPC from another scheduler (e.g.,
      DEIS) that uses a different solver type. The solver is now the same as
      if we had instantiated the scheduler directly.
      
      * do not save use default values
      
      * fix more
      
      * fix all
      
      * fix schedulers
      
      * fix more
      
      * finish for real
      
      * finish for real
      
      * flaky tests
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py
      
      * Default steps_offset to 0.
      
      * Add missing docstrings
      
      * Apply suggestions from code review
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
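
      A hedged sketch of overriding the new config options when swapping schedulers via `from_config`; parameters absent from the stored config now fall back to the scheduler's own defaults, which is what this PR makes safe (checkpoint id illustrative):

      ```python
      from diffusers import DiffusionPipeline, DDIMScheduler

      pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
      # The new options can also be set explicitly when switching schedulers.
      pipe.scheduler = DDIMScheduler.from_config(
          pipe.scheduler.config, timestep_spacing="trailing", steps_offset=0
      )
      ```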
  13. 10 Apr, 2023 1 commit
  14. 01 Mar, 2023 1 commit
  15. 16 Feb, 2023 1 commit
  16. 27 Jan, 2023 2 commits
  17. 17 Jan, 2023 1 commit
    • DiT Pipeline (#1806) · 37d113cc
      Kashif Rasul authored
      
      
      * added dit model
      
      * import
      
      * initial pipeline
      
      * initial convert script
      
      * initial pipeline
      
      * make style
      
      * raise valueerror
      
      * single function
      
      * rename classes
      
      * use DDIMScheduler
      
      * timesteps embedder
      
      * samples to cpu
      
      * fix var names
      
      * fix numpy type
      
      * use timesteps class for proj
      
      * fix typo
      
      * fix arg name
      
      * flip_sin_to_cos and better var names
      
      * fix C shape cal
      
      * make style
      
      * remove unused imports
      
      * cleanup
      
      * add back patch_size
      
      * initial dit doc
      
      * typo
      
      * Update docs/source/api/pipelines/dit.mdx
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * added copyright license headers
      
      * added example usage and toc
      
      * fix variable names asserts
      
      * remove comment
      
      * added docs
      
      * fix typo
      
      * upstream changes
      
      * set proper device for drop_ids
      
      * added initial dit pipeline test
      
      * update docs
      
      * fix imports
      
      * make fix-copies
      
      * isort
      
      * fix imports
      
      * get rid of more magic numbers
      
      * fix code when guidance is off
      
      * remove block_kwargs
      
      * cleanup script
      
      * removed to_2tuple
      
      * use FeedForward class instead of another MLP
      
      * style
      
      * work on merging DiTBlock with BasicTransformerBlock
      
      * added missing final_dropout and args to BasicTransformerBlock
      
      * use norm from block
      
      * fix arg
      
      * remove unused arg
      
      * fix call to class_embedder
      
      * use timesteps
      
      * make style
      
      * attn_output gets multiplied
      
      * removed commented code
      
      * use Transformer2D
      
      * use self.is_input_patches
      
      * fix flags
      
      * fixed conversion to use Transformer2DModel
      
      * fixes for pipeline
      
      * remove dit.py
      
      * fix timesteps device
      
      * use randn_tensor and fix fp16 inf.
      
      * timesteps_emb already the right dtype
      
      * fix dit test class
      
      * fix test and style
      
      * fix norm2 usage in vq-diffusion
      
      * added author names to pipeline and ImageNet labels link
      
      * fix tests
      
      * use norm_type as string
      
      * rename dit to transformer
      
      * fix name
      
      * fix test
      
      * set norm_type = "layer" by default
      
      * fix tests
      
      * do not skip common tests
      
      * Update src/diffusers/models/attention.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * revert AdaLayerNorm API
      
      * fix norm_type name
      
      * make sure all components are in eval mode
      
      * revert norm2 API
      
      * compact
      
      * finish deprecation
      
      * add slow tests
      
      * remove @
      
      * refactor some stuff
      
      * upload
      
      * Update src/diffusers/pipelines/dit/pipeline_dit.py
      
      * finish more
      
      * finish docs
      
      * improve docs
      
      * finish docs
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: William Berman <WLBberman@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
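
      A hedged usage sketch for the class-conditional DiT pipeline; the repo id and class labels follow the example docs but are illustrative here:

      ```python
      import torch
      from diffusers import DiTPipeline, DPMSolverMultistepScheduler

      pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16).to("cuda")
      pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

      # Map ImageNet label names to class ids, then generate class-conditional samples.
      class_ids = pipe.get_label_ids(["white shark", "umbrella"])
      images = pipe(class_labels=class_ids, num_inference_steps=25).images
      ```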
  18. 16 Jan, 2023 1 commit
  19. 04 Jan, 2023 1 commit
    • Improve reproducibility 2/3 (#1906) · 9b638548
      Patrick von Platen authored
      * [Repro] Correct reproducibility
      
      * up
      
      * up
      
      * uP
      
      * up
      
      * need better image
      
      * allow conversion from checkpoints without a state dict
      
      * up
      
      * up
      
      * up
      
      * up
      
      * check tensors
      
      * check tensors
      
      * check tensors
      
      * check tensors
      
      * next try
      
      * up
      
      * up
      
      * better name
      
      * up
      
      * up
      
      * Apply suggestions from code review
      
      * correct more
      
      * up
      
      * replace all torch randn
      
      * fix
      
      * correct
      
      * correct
      
      * finish
      
      * fix more
      
      * up
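
      A hedged sketch of the generator-based determinism this series works toward: seed a `torch.Generator` and pass it through instead of relying on global `torch.randn` state (the `randn_tensor` import path is an assumption; it has moved between releases):

      ```python
      import torch
      from diffusers.utils.torch_utils import randn_tensor

      generator = torch.Generator("cpu").manual_seed(0)
      # Same seed and generator state -> identical latents across runs and devices.
      latents = randn_tensor((1, 4, 64, 64), generator=generator, dtype=torch.float32)
      ```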
  20. 02 Dec, 2022 1 commit
  21. 01 Dec, 2022 1 commit
  22. 30 Nov, 2022 1 commit
  23. 28 Nov, 2022 1 commit
    • Add 2nd order heun scheduler (#1336) · 4c54519e
      Patrick von Platen authored
      * Add heun
      
      * Finish first version of heun
      
      * remove bogus
      
      * finish
      
      * finish
      
      * improve
      
      * up
      
      * up
      
      * fix more
      
      * change progress bar
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
      
      * finish
      
      * up
      
      * up
      
      * up
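
      A hedged sketch of switching an existing pipeline to the new scheduler (checkpoint id illustrative); Heun is a 2nd-order method, so each step costs two model evaluations:

      ```python
      from diffusers import DiffusionPipeline, HeunDiscreteScheduler

      pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
      pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)
      ```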
  24. 22 Nov, 2022 1 commit
  25. 15 Nov, 2022 1 commit
  26. 09 Nov, 2022 1 commit
  27. 08 Nov, 2022 1 commit
    • MPS schedulers: don't use float64 (#1169) · 813744e5
      Pedro Cuenca authored
      * Schedulers: don't use float64 on mps
      
      * Test set_timesteps() on device (float schedulers).
      
      * SD pipeline: use device in set_timesteps.
      
      * SD in-painting pipeline: use device in set_timesteps.
      
      * Tests: fix mps crashes.
      
      * Skip test_load_pipeline_from_git on mps.
      
      Not compatible with float16.
      
      * Use device.type instead of str in Euler schedulers.
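
      A hedged sketch of the device-aware dtype choice this commit applies inside the schedulers: keep timestep/sigma math out of float64 on mps, which does not support it (shapes and values illustrative):

      ```python
      import torch

      device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
      # float64 is unsupported on mps, so fall back to float32 there.
      dtype = torch.float32 if device.type == "mps" else torch.float64
      timesteps = torch.linspace(999.0, 0.0, steps=50, dtype=dtype, device=device)
      ```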
  28. 06 Nov, 2022 1 commit
    • Add multistep DPM-Solver discrete scheduler (#1132) · b4a1ed85
      Cheng Lu authored
      
      
      * add dpmsolver discrete pytorch scheduler
      
      * fix some typos in dpm-solver pytorch
      
      * add dpm-solver pytorch in stable-diffusion pipeline
      
      * add jax/flax version dpm-solver
      
      * change code style
      
      * change code style
      
      * add docs
      
      * add `add_noise` method for dpmsolver
      
      * add pytorch unit test for dpmsolver
      
      * add dummy object for pytorch dpmsolver
      
      * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Update tests/test_config.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * resolve the code comments
      
      * rename the file
      
      * change class name
      
      * fix code style
      
      * add auto docs for dpmsolver multistep
      
      * add more explanations for the stabilizing trick (for steps < 15)
      
      * delete the dummy file
      
      * change the API name of predict_epsilon, algorithm_type and solver_type
      
      * add compatible lists
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
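
      A hedged sketch of few-step sampling with the new multistep DPM-Solver scheduler (checkpoint id and prompt illustrative):

      ```python
      from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

      pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
      pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
      image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
      ```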
  29. 04 Nov, 2022 1 commit
  30. 03 Nov, 2022 1 commit
  31. 31 Oct, 2022 2 commits
  32. 27 Oct, 2022 1 commit
    • Continuation of #942: additional float64 failure (#996) · 1d04e1b4
      Pedro Cuenca authored
      * Add failing test for #940.
      
      * Do not use torch.float64 in mps.
      
      * style
      
      * Temporarily skip add_noise for IPNDMScheduler.
      
      Until #990 is addressed.
      
      * Fix additional float64 error in mps.
      
      * Improve add_noise test
      
      * Slight edit – I think it's clearer this way.
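
      A hedged sketch of the `add_noise` path the improved test exercises (scheduler choice and tensor shapes are illustrative):

      ```python
      import torch
      from diffusers import DDPMScheduler

      scheduler = DDPMScheduler(num_train_timesteps=1000)
      sample = torch.randn(1, 3, 32, 32)
      noise = torch.randn_like(sample)
      timesteps = torch.tensor([10])
      noisy = scheduler.add_noise(sample, noise, timesteps)  # runs in float32 on mps too
      ```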
  33. 26 Oct, 2022 1 commit
  34. 20 Oct, 2022 2 commits
  35. 14 Oct, 2022 1 commit
  36. 07 Oct, 2022 2 commits