1. 30 Sep, 2022 1 commit
  2. 29 Sep, 2022 1 commit
  3. 28 Sep, 2022 1 commit
  4. 27 Sep, 2022 1 commit
    • [Pytorch] Pytorch only schedulers (#534) · bd8df2da
      Kashif Rasul authored
      * pytorch only schedulers
      
      * fix style
      
      * remove match_shape
      
      * pytorch only ddpm
      
      * remove SchedulerMixin
      
      * remove numpy from karras_ve
      
      * fix types
      
      * remove numpy from lms_discrete
      
      * remove numpy from pndm
      
      * fix typo
      
      * remove mixin and numpy from sde_vp and ve
      
      * remove remaining tensor_format
      
      * fix style
      
      * sigmas has to be torch tensor
      
      * removed set_format in readme
      
      * remove set format from docs
      
      * remove set_format from pipelines
      
      * update tests
      
      * fix typo
      
      * continue to use mixin
      
      * fix imports
      
      * removed unused imports
      
      * match shape instead of assuming image shapes
      
      * remove import typo
      
      * update call to add_noise
      
      * use math instead of numpy
      
      * fix t_index
      
      * removed commented out numpy tests
      
      * timesteps needs to be discrete
      
      * cast timesteps to int in flax scheduler too
      
      * fix device mismatch issue
      
      * small fix
      
      * Update src/diffusers/schedulers/scheduling_pndm.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      bd8df2da
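One recurring change in this PR is "use math instead of numpy": schedule coefficients can be computed with the standard library alone. A minimal sketch in that spirit (illustrative names, not the diffusers API) of the DDPM-style linear beta schedule and the add_noise coefficients derived from it:

```python
import math

# Hypothetical sketch: noise-schedule coefficients computed with `math`
# and plain Python lists instead of numpy.

def linear_betas(num_train_timesteps=1000, beta_start=1e-4, beta_end=2e-2):
    """Linearly spaced betas, pure Python."""
    step = (beta_end - beta_start) / (num_train_timesteps - 1)
    return [beta_start + i * step for i in range(num_train_timesteps)]

def alphas_cumprod(betas):
    """Cumulative product of (1 - beta_t)."""
    out, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        out.append(prod)
    return out

betas = linear_betas()
bars = alphas_cumprod(betas)

# add_noise mixes sqrt(a_bar_t) * sample + sqrt(1 - a_bar_t) * noise
t = 10
coef_sample = math.sqrt(bars[t])
coef_noise = math.sqrt(1.0 - bars[t])
```

The two coefficients always satisfy coef_sample² + coef_noise² = 1, which is a handy sanity check when porting schedule code between numpy and torch.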
  5. 24 Sep, 2022 1 commit
  6. 22 Sep, 2022 1 commit
    • [UNet2DConditionModel] add gradient checkpointing (#461) · e7120bae
      Suraj Patil authored
      * add grad ckpt to downsample blocks
      
      * make it work
      
      * don't pass gradient_checkpointing to upsample block
      
      * add tests for UNet2DConditionModel
      
      * add test_gradient_checkpointing
      
      * add gradient_checkpointing for up and down blocks
      
      * add functions to enable and disable grad ckpt
      
      * remove the forward argument
      
      * better naming
      
      * make supports_gradient_checkpointing private
      e7120bae
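The last few commits describe a pattern: public enable/disable methods that recursively flip a `gradient_checkpointing` flag on every block that supports it, with support itself kept private. A plain-Python sketch of that pattern (names illustrative, not the diffusers implementation):

```python
# Hypothetical sketch of the enable/disable pattern: the model walks its
# sub-blocks and toggles `gradient_checkpointing` only on blocks that
# declare support for it.

class Block:
    def __init__(self, supports_ckpt):
        self._supports_gradient_checkpointing = supports_ckpt
        self.gradient_checkpointing = False

class Model:
    def __init__(self):
        self.blocks = [Block(True), Block(False), Block(True)]

    def _set_gradient_checkpointing(self, value):
        for block in self.blocks:
            if block._supports_gradient_checkpointing:
                block.gradient_checkpointing = value

    def enable_gradient_checkpointing(self):
        self._set_gradient_checkpointing(True)

    def disable_gradient_checkpointing(self):
        self._set_gradient_checkpointing(False)

model = Model()
model.enable_gradient_checkpointing()
flags = [b.gradient_checkpointing for b in model.blocks]
```

Keeping the flag out of `forward`'s signature (as the "remove the forward argument" commit suggests) lets callers toggle checkpointing without changing call sites.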
  7. 21 Sep, 2022 2 commits
  8. 20 Sep, 2022 2 commits
  9. 19 Sep, 2022 2 commits
  10. 17 Sep, 2022 1 commit
  11. 16 Sep, 2022 7 commits
  12. 15 Sep, 2022 2 commits
    • [UNet2DConditionModel, UNet2DModel] pass norm_num_groups to all the blocks (#442) · d144c46a
      Suraj Patil authored
      * pass norm_num_groups to unet blocks and attention
      
      * fix UNet2DConditionModel
      
      * add norm_num_groups arg in vae
      
      * add tests
      
      * remove comment
      
      * Apply suggestions from code review
      d144c46a
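Threading a single `norm_num_groups` value through every block matters because GroupNorm requires each block's channel count to be divisible by the group count. An illustrative validation (not diffusers code) of that constraint:

```python
# Illustrative check: GroupNorm needs channels % num_groups == 0 in every
# block, so one norm_num_groups value must be consistent with all of the
# model's block_out_channels.

def validate_norm_num_groups(block_out_channels, norm_num_groups):
    for channels in block_out_channels:
        if channels % norm_num_groups != 0:
            raise ValueError(
                f"{channels} channels not divisible by {norm_num_groups} groups"
            )
    return True

ok = validate_norm_num_groups((64, 128, 256), norm_num_groups=32)
```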
    • Karras VE, DDIM and DDPM flax schedulers (#508) · b34be039
      Kashif Rasul authored
      * betas never change, so removed from state
      
      * fix typos in docs
      
      * removed unused var
      
      * initial ddim flax scheduler
      
      * import
      
      * added dummy objects
      
      * fix style
      
      * fix typo
      
      * docs
      
      * fix typo in comment
      
      * set return type
      
      * added flax ddpm
      
      * fix style
      
      * remake
      
      * pass PRNG key as argument and split before use
      
      * fix doc string
      
      * use config
      
      * added flax Karras VE scheduler
      
      * make style
      
      * fix dummy
      
      * fix ndarray type annotation
      
      * replace returns a new state
      
      * added lms_discrete scheduler
      
      * use self.config
      
      * add_noise needs state
      
      * use config
      
      * use config
      
      * docstring
      
      * added flax score sde ve
      
      * fix imports
      
      * fix typos
      b34be039
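Two ideas recur in these flax schedulers: the PRNG key is passed in and split before use, and mutation is replaced by functions that return a new state ("replace returns a new state", "add_noise needs state"). A stdlib-only sketch of that functional-state pattern (illustrative names, not the diffusers flax API):

```python
from dataclasses import dataclass, replace

# Hypothetical sketch of the functional-state pattern: scheduler state is
# immutable, and every step returns a fresh state via `replace` instead of
# mutating in place (as flax/JAX code requires).

@dataclass(frozen=True)
class SchedulerState:
    timestep: int
    num_inference_steps: int

def step(state: SchedulerState) -> SchedulerState:
    # `replace` returns a new state; the old one is untouched.
    return replace(state, timestep=state.timestep - 1)

s0 = SchedulerState(timestep=50, num_inference_steps=50)
s1 = step(s0)
```

In real JAX code the same discipline applies to randomness: a key passed as an argument is split with `jax.random.split` before use, so the caller's key is never consumed twice.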
  13. 13 Sep, 2022 3 commits
  14. 12 Sep, 2022 1 commit
    • update expected results of slow tests (#268) · f4781a0b
      Kashif Rasul authored
      * update expected results of slow tests
      
      * relax sum and mean tests
      
      * Print shapes when reporting exception
      
      * formatting
      
      * fix sentence
      
      * relax test_stable_diffusion_fast_ddim for gpu fp16
      
      * relax flaky tests on GPU
      
      * added comment on large tolerances
      
      * black
      
      * format
      
      * set scheduler seed
      
      * added generator
      
      * use np.isclose
      
      * set num_inference_steps to 50
      
      * fix dep. warning
      
      * update expected_slice
      
      * preprocess if image
      
      * updated expected results
      
      * updated expected from CI
      
      * pass generator to VAE
      
      * undo change back to orig
      
      * use original
      
      * revert back the expected on cpu
      
      * revert back values for CPU
      
      * more undo
      
      * update result after using gen
      
      * update mean
      
      * set generator for mps
      
      * update expected on CI server
      
      * undo
      
      * use new seed every time
      
      * cpu manual seed
      
      * reduce num_inference_steps
      
      * style
      
      * use generator for randn
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      f4781a0b
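The "relax sum and mean tests" / "use np.isclose" commits replace exact equality with tolerance-based comparison of output slices. A minimal stdlib sketch of the idea (the values here are made up; the actual tests use np.isclose on real pipeline outputs):

```python
import math

# Hedged sketch: comparing a result slice against an expected slice with an
# absolute tolerance instead of exact equality, the way relaxed slow tests do.

expected_slice = [0.9052, 0.8184, 0.7720]
result_slice = [0.9051, 0.8185, 0.7719]

ok = all(
    math.isclose(a, b, rel_tol=0.0, abs_tol=1e-2)
    for a, b in zip(result_slice, expected_slice)
)
```

Seeding a generator (as the "set scheduler seed" / "added generator" commits do) is what makes such slices reproducible enough for even a loose tolerance to be meaningful.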
  15. 08 Sep, 2022 4 commits
    • [Tests] Correct image folder tests (#427) · f55190b2
      Patrick von Platen authored
      * [Tests] Correct image folder tests
      
      * up
      f55190b2
    • [ONNX] Stable Diffusion exporter and pipeline (#399) · 8d9c4a53
      Anton Lozhkov authored
      * initial export and design
      
      * update imports
      
      * custom provider, import fixes
      
      * Update src/diffusers/onnx_utils.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/onnx_utils.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * remove push_to_hub
      
      * Update src/diffusers/onnx_utils.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * remove torch_device
      
      * numpify the rest of the pipeline
      
      * torchify the safety checker
      
      * revert tensor
      
      * Code review suggestions + quality
      
      * fix tests
      
      * fix provider, add an end-to-end test
      
      * style
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      8d9c4a53
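The "fix provider" commit refers to ONNX Runtime execution providers. A hypothetical sketch of provider selection (the provider strings are real ONNX Runtime names; the `pick_provider` helper is illustrative, not part of onnxruntime or diffusers):

```python
# Hypothetical sketch: prefer the CUDA execution provider when it is
# available, otherwise fall back to CPU, as an ONNX pipeline typically does.

def pick_provider(available):
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    for provider in preferred:
        if provider in available:
            return provider
    raise RuntimeError("no usable ONNX execution provider")

chosen = pick_provider(["CPUExecutionProvider"])
```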
    • Anton Lozhkov · 7bcc873b
    • Inference support for `mps` device (#355) · 5dda1735
      Pedro Cuenca authored
      * Initial support for mps in Stable Diffusion pipeline.
      
      * Initial "warmup" implementation when using mps.
      
      * Make some deterministic tests pass with mps.
      
      * Disable training tests when using mps.
      
      * SD: generate latents in CPU then move to device.
      
      This is especially important when using the mps device, because
      generators are not supported there. See for example
      https://github.com/pytorch/pytorch/issues/84288.
      
      In addition, the other pipelines seem to use the same approach: generate
      the random samples then move to the appropriate device.
      
      After this change, generating an image on mps produces the same result
      as on the CPU when the same seed is used.
      
      * Remove prints.
      
      * Pass AutoencoderKL test_output_pretrained with mps.
      
      Sampling from `posterior` must be done in CPU.
      
      * Style
      
      * Do not use torch.long for log op in mps device.
      
      * Perform incompatible padding ops in CPU.
      
      UNet tests now pass.
      See https://github.com/pytorch/pytorch/issues/84535
      
      * Style: fix import order.
      
      * Remove unused symbols.
      
      * Remove MPSWarmupMixin, do not apply automatically.
      
      We do apply warmup in the tests, but not during normal use.
      This adopts some PR suggestions by @patrickvonplaten.
      
      * Add comment for mps fallback to CPU step.
      
      * Add README_mps.md for mps installation and use.
      
      * Apply `black` to modified files.
      
      * Restrict README_mps to SD, show measures in table.
      
      * Make PNDM indexing compatible with mps.
      
      Addresses #239.
      
      * Do not use float64 when using LDMScheduler.
      
      Fixes #358.
      
      * Fix typo identified by @patil-suraj
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Adapt example to new output style.
      
      * Restore 1:1 results reproducibility with CompVis.
      
      However, mps latents need to be generated in CPU because generators
      don't work in the mps device.
      
      * Move PyTorch nightly to requirements.
      
      * Adapt `test_scheduler_outputs_equivalence` to MPS.
      
      * mps: skip training tests instead of ignoring silently.
      
      * Make VQModel tests pass on mps.
      
      * mps ddim tests: warmup, increase tolerance.
      
      * ScoreSdeVeScheduler indexing made mps compatible.
      
      * Make ldm pipeline tests pass using warmup.
      
      * Style
      
      * Simplify casting as suggested in PR.
      
      * Add Known Issues to readme.
      
      * `isort` import order.
      
      * Remove _mps_warmup helpers from ModelMixin.
      
      And just make changes to the tests.
      
      * Skip tests using unittest decorator for consistency.
      
      * Remove temporary var.
      
      * Remove spurious blank space.
      
      * Remove unused symbol.
      
      * Remove README_mps.
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> 
      5dda1735
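The core reproducibility trick in this PR is generating latents with a seeded CPU generator and then moving them to the target device, since torch generators are not supported on mps. A minimal sketch of that approach (shapes and the `make_latents` helper are illustrative, not the pipeline's code):

```python
import torch

# Sketch of "generate latents in CPU then move to device": a seeded CPU
# generator produces the same latents regardless of the final device, which
# restores CPU/mps result parity for a given seed.

def make_latents(shape, seed, device="cpu"):
    generator = torch.Generator(device="cpu").manual_seed(seed)
    latents = torch.randn(shape, generator=generator, device="cpu")
    return latents.to(device)  # no-op on CPU; moves the tensor when device="mps"

a = make_latents((1, 4, 8, 8), seed=0)
b = make_latents((1, 4, 8, 8), seed=0)
```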
  16. 06 Sep, 2022 2 commits
  17. 05 Sep, 2022 1 commit
  18. 03 Sep, 2022 1 commit
  19. 02 Sep, 2022 1 commit
  20. 01 Sep, 2022 1 commit
  21. 31 Aug, 2022 3 commits
  22. 30 Aug, 2022 1 commit