- 06 Oct, 2022 3 commits
-
Anton Lozhkov authored
Temporarily remove Flax modules from the public API
-
Anton Lozhkov authored
* Better steps deprecation for LMS * upd
-
Anton Lozhkov authored
* Add back-compatibility to LMS timesteps * style
-
- 05 Oct, 2022 2 commits
-
Anton Lozhkov authored
* init * improve add_noise * [debug start] run slow test * [debug end] * quick revert * Add docstrings and warnings + API tests * Make the warning less spammy
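For context, `add_noise` implements the forward-diffusion step these API tests exercise. A minimal sketch of the usual DDPM-style formulation (not necessarily the exact code landed here):
```python
import torch

def add_noise(sample, noise, timesteps, alphas_cumprod):
    # x_t = sqrt(a_t) * x_0 + sqrt(1 - a_t) * eps, with a_t the cumulative
    # product of the alphas at each requested timestep.
    sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
    sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
    # Unsqueeze so the per-timestep scalars broadcast over (C, H, W).
    while sqrt_alpha_prod.dim() < sample.dim():
        sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
        sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
    return sqrt_alpha_prod * sample + sqrt_one_minus_alpha_prod * noise
```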
-
Kashif Rasul authored
* pytorch timesteps * style * get rid of if-else * fix test Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 04 Oct, 2022 1 commit
-
Tanishq Abraham authored
* Update links in schedulers README.md * Update src/diffusers/schedulers/README.md Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 03 Oct, 2022 3 commits
-
Patrick von Platen authored
* [Utils] Add deprecate function * up * up * up * up * up * up * up * up * up * fix * up * move to deprecation utils file * fix * fix * fix more
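The real helper lives in the deprecation utils file; as a rough, hypothetical sketch of the pattern (signature simplified, not the actual API):
```python
import warnings
from typing import Any, Dict, Optional

def deprecate(name: str, removed_in: str, message: str,
              kwargs: Optional[Dict[str, Any]] = None) -> Any:
    # Warn about a deprecated argument and strip it from kwargs so the
    # rest of the function never sees it.
    if kwargs is not None and name not in kwargs:
        return None
    value = kwargs.pop(name) if kwargs is not None else None
    warnings.warn(
        f"`{name}` is deprecated and will be removed in {removed_in}. {message}",
        FutureWarning,
        stacklevel=2,
    )
    return value
```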
-
Pedro Cuenca authored
* Don't use `load_state_dict` if torch is not installed.
* Define `SchedulerOutput` to use torch or flax arrays.
* Don't import LMSDiscreteScheduler without torch.
* Create distinct FlaxSchedulerOutput.
* Additional changes required for FlaxSchedulerMixin
* Do not import torch pipelines in Flax.
* Revert "Define `SchedulerOutput` to use torch or flax arrays." This reverts commit f653140134b74d9ffec46d970eb46925fe3a409d.
* Prefix Flax scheduler outputs for consistency.
* make style
* FlaxSchedulerOutput is now a dataclass.
* Don't use f-string without placeholders.
* Add blank line.
* Style (docstrings)
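As a sketch of the dataclass change, a Flax scheduler output might look like this (assuming `flax.struct.dataclass`, which registers the class as a JAX pytree):
```python
import flax
import jax.numpy as jnp

@flax.struct.dataclass
class FlaxSchedulerOutput:
    # jnp arrays instead of torch tensors, so the output can flow through
    # jit/pmap like any other pytree.
    prev_sample: jnp.ndarray
```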
-
Pedro Cuenca authored
* Flax: add shape argument to set_timesteps * style
-
- 30 Sep, 2022 1 commit
-
Nouamane Tazi authored
* initial commit
* make UNet stream capturable
* try to fix noise_pred value
* remove cuda graph and keep NB
* non blocking unet with PNDMScheduler
* make timesteps np arrays for pndm scheduler because lists don't get formatted to tensors in `self.set_format`
* make max async in pndm
* use channels last format in unet
* avoid moving timesteps device in each unet call
* avoid memcpy op in `get_timestep_embedding`
* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
* update TODO
* replace `channels_last` kwarg with `memory_format` for more generality
* revert the channels_last changes to leave it for another PR
* remove non_blocking when moving input ids to device
* remove blocking from all .to() operations at beginning of pipeline
* fix merging
* fix merging
* model can run in other precisions without autocast
* attn refactoring
* Revert "attn refactoring" This reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb.
* remove restriction to run conv_norm in fp32
* use `baddbmm` instead of `matmul` in attention for better perf
* removing all reshapes to test perf
* Revert "removing all reshapes to test perf" This reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd.
* add shapes comments
* hardcode what's needed for jitting
* Revert "hardcode what's needed for jitting" This reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a.
* Revert "remove restriction to run conv_norm in fp32" This reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9.
* revert using baddbmm in attention's forward
* cleanup comment
* remove restriction to run conv_norm in fp32; no quality loss was noticed. This reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213.
* add more optimization techniques to docs
* Revert "add shapes comments" This reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4.
* apply suggestions
* make quality
* apply suggestions
* styling
* `scheduler.timesteps` are now arrays so we don't need .to()
* remove useless .type()
* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
* move scheduler timesteps to correct device if tensors
* add device to `set_timesteps` in LMSD scheduler
* `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
* quick fix
* styling
* remove kwargs from schedulers `set_timesteps`
* revert to using max in K-LMS inpaint pipeline test
* Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it" This reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899.
* move timesteps to correct device before loop in SD pipeline
* apply previous fix to other SD pipelines
* UNet now accepts tensor timesteps even on wrong device, to avoid errors; it shouldn't affect performance if timesteps are already on the correct device, but it does slow things down if they're on the wrong device
* fix pipeline when timesteps are arrays with strides
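One of the changes above swaps `matmul` for `baddbmm` when computing attention scores, fusing the scale multiply into the batched matmul. A sketch of the pattern (shapes illustrative):
```python
import torch

def attention_scores(q: torch.Tensor, k: torch.Tensor, scale: float) -> torch.Tensor:
    # out = beta * input + alpha * (q @ k^T); with beta=0 the (uninitialized)
    # input tensor is ignored, so this is a scaled matmul in one kernel.
    return torch.baddbmm(
        torch.empty(q.shape[0], q.shape[1], k.shape[1], dtype=q.dtype, device=q.device),
        q,
        k.transpose(-1, -2),
        beta=0,
        alpha=scale,
    )

q = torch.randn(2, 64, 40)
k = torch.randn(2, 64, 40)
scores = attention_scores(q, k, scale=40 ** -0.5)  # shape (2, 64, 64)
```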
-
- 29 Sep, 2022 1 commit
-
V Vishnu Anirudh authored
* correcting the beta value assignment * updating DDIM and LMSDiscreteFlax schedulers * bringing back the changes that were lost as part of main branch merge
-
- 28 Sep, 2022 1 commit
-
Anton Lozhkov authored
* Fix the LMS pytorch regression * Copy over the changes from #637 * Copy over the changes from #637 * Fix betas test
-
- 27 Sep, 2022 5 commits
-
Kashif Rasul authored
* add dep. warning for schedulers * fix format
-
Suraj Patil authored
fix add noise
-
Kashif Rasul authored
* pytorch only schedulers
* fix style
* remove match_shape
* pytorch only ddpm
* remove SchedulerMixin
* remove numpy from karras_ve
* fix types
* remove numpy from lms_discrete
* remove numpy from pndm
* fix typo
* remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format
* fix style
* sigmas has to be torch tensor
* removed set_format in readme
* remove set format from docs
* remove set_format from pipelines
* update tests
* fix typo
* continue to use mixin
* fix imports
* removed unused imports
* match shape instead of assuming image shapes
* remove import typo
* update call to add_noise
* use math instead of numpy
* fix t_index
* removed commented out numpy tests
* timesteps needs to be discrete
* cast timesteps to int in flax scheduler too
* fix device mismatch issue
* small fix
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Pedro Cuenca authored
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline
* todo comment
* Fix imports
* Fix imports
* add dummies
* Fix empty init
* make pipeline work
* up
* Allow dtype to be overridden on model load. This may be a temporary solution until #567 is addressed.
* Convert params to bfloat16 or fp16 after loading. This deals with the weights, not the model.
* Use Flax schedulers (typing, docstring)
* PNDM: replace control flow with jax functions. Otherwise jitting/parallelization don't work properly as they don't know how to deal with traced objects. I temporarily removed `step_prk`.
* Pass latents shape to scheduler set_timesteps(). PNDMScheduler uses it to reserve space, other schedulers will just ignore it.
* Wrap model imports inside availability checks.
* Optionally return state in from_config. Useful for Flax schedulers.
* Do not convert model weights to dtype.
* Re-enable PRK steps with functional implementation. Values returned still not verified for correctness.
* Remove left over has_state var.
* make style
* Apply suggestion list -> tuple Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Apply suggestion list -> tuple Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Remove unused comments.
* Use zeros instead of empty.
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
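On "replace control flow with jax functions": under `jax.jit`, a Python `if` on a traced value fails, so branching has to go through `jax.lax` primitives. A toy sketch (the branch bodies are placeholders, not the real PNDM update rules):
```python
import jax
import jax.numpy as jnp

def step(counter: jnp.ndarray, sample: jnp.ndarray) -> jnp.ndarray:
    # A plain `if counter < 2:` would fail under jit because `counter` is a
    # tracer; lax.cond traces both branches and selects at runtime instead.
    return jax.lax.cond(
        counter < 2,
        lambda s: s * 0.5,
        lambda s: s - 0.1,
        sample,
    )

out = jax.jit(step)(jnp.int32(1), jnp.ones((4,)))
```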
-
Pedro Cuenca authored
-
- 24 Sep, 2022 1 commit
-
Grigory Sizov authored
fix formula for noise levels in karras scheduler and tests
-
- 22 Sep, 2022 1 commit
-
Jonathan Whitaker authored
* Adding pred_original_sample to SchedulerOutput of DDPMScheduler, DDIMScheduler, LMSDiscreteScheduler, KarrasVeScheduler step methods so we can access the predicted denoised outputs
* Gave DDPMScheduler, DDIMScheduler and LMSDiscreteScheduler their own output dataclasses so the default SchedulerOutput in scheduling_utils does not need pred_original_sample as an optional extra
* Reordered library imports to follow standard
* didn't get import order quite right apparently
* Forgot to change name of LMSDiscreteSchedulerOutput
* Aha, needed some extra libs for make style to fully work
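A sketch of the per-scheduler output described above (the library's actual classes derive from its `BaseOutput`; a plain dataclass is shown here for brevity):
```python
from dataclasses import dataclass
from typing import Optional
import torch

@dataclass
class LMSDiscreteSchedulerOutput:
    prev_sample: torch.FloatTensor
    # The model's current estimate of the fully denoised sample, useful for
    # previewing progress during sampling.
    pred_original_sample: Optional[torch.FloatTensor] = None
```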
-
- 21 Sep, 2022 2 commits
-
Ryan Russell authored
Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
Pedro Cuenca authored
* Optionally return state in from_config. Useful for Flax schedulers.
* has_state is now a property; make the check more strict. I don't check that the class is `SchedulerMixin`, to prevent circular dependencies. It should be enough that the class name starts with "Flax", the object declares `has_state`, and `create_state` exists too.
* Use state in pipeline from_pretrained.
* Make style
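A hypothetical sketch of the duck-typed check this describes (names illustrative, not the library's exact code):
```python
def instantiate_scheduler(cls, config):
    scheduler = cls.from_config(config)
    # No isinstance check against SchedulerMixin (avoids a circular import):
    # a Flax scheduler simply advertises `has_state` and provides `create_state`.
    if getattr(scheduler, "has_state", False):
        return scheduler, scheduler.create_state()
    return scheduler
```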
-
- 20 Sep, 2022 2 commits
-
Patrick von Platen authored
* [Flax] Fix unet and ddim scheduler * correct * finish
-
Mishig Davaadorj authored
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline
* todo comment
* Fix imports
* Fix imports
* add dummies
* Fix empty init
* make pipeline work
* up
* Use Flax schedulers (typing, docstring)
* Wrap model imports inside availability checks.
* more updates
* make sure flax is not broken
* make style
* more fixes
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@latenitesoft.com>
-
- 19 Sep, 2022 2 commits
-
Yuta Hayashibe authored
* Fix a setting bug * Fix typos * Reverted params to parms
-
Kashif Rasul authored
* remove match_shape * ported fixes from #479 to flax * remove unused argument * typo * remove warnings
-
- 17 Sep, 2022 1 commit
-
Jonatan Kłosko authored
* Unify offset configuration in DDIM and PNDM schedulers
* Format
* Add missing variables
* Fix pipeline test
* Update src/diffusers/schedulers/scheduling_ddim.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Default set_alpha_to_one to false
* Format
* Add tests
* Format
* add deprecation warning
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
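After this change the offset becomes a scheduler config field rather than a per-call argument. A usage sketch with Stable Diffusion-style settings (values illustrative):
```python
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    set_alpha_to_one=False,  # keep the final-alpha behavior SD was trained with
    steps_offset=1,          # replaces the old `offset=1` passed to set_timesteps()
)
scheduler.set_timesteps(50)
```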
-
- 16 Sep, 2022 3 commits
-
Patrick von Platen authored
Revert "adding more typehints to DDIM scheduler (#456)" This reverts commit a0558b11.
-
V Vishnu Anirudh authored
* adding more typehints * resolving mypy issues * resolving formatting issue * fixing isort issue Co-authored-by: V Vishnu Anirudh <git.vva@gmail.com> Co-authored-by: V Vishnu Anirudh <vvani@kth.se>
-
Yuta Hayashibe authored
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to manual typo check currently. Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010 Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
-
- 15 Sep, 2022 1 commit
-
Kashif Rasul authored
* beta never changes, removed from state
* fix typos in docs
* removed unused var
* initial ddim flax scheduler
* import
* added dummy objects
* fix style
* fix typo
* docs
* fix typo in comment
* set return type
* added flax ddpm
* fix style
* remake
* pass PRNG key as argument and split before use
* fix doc string
* use config
* added flax Karras VE scheduler
* make style
* fix dummy
* fix ndarray type annotation
* replace returns a new state
* added lms_discrete scheduler
* use self.config
* add_noise needs state
* use config
* use config
* docstring
* added flax score sde ve
* fix imports
* fix typos
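"Pass PRNG key as argument and split before use" follows the standard JAX discipline; a minimal sketch:
```python
import jax

key = jax.random.PRNGKey(0)
# Never reuse a key: split it, keep one half for the next step, and
# consume the other half for this step's noise.
key, noise_key = jax.random.split(key)
noise = jax.random.normal(noise_key, shape=(1, 64, 64, 4))
```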
-
- 14 Sep, 2022 1 commit
-
Pedro Cuenca authored
* Fix LMS scheduler indexing in `add_noise` #358. * Fix DDIM and DDPM indexing with mps device. * Verify format is PyTorch before using `.to()`
-
- 13 Sep, 2022 3 commits
-
Kashif Rasul authored
* initial flax pndm
* fix typo
* use state
* return state
* add FlaxSchedulerOutput
* fix style
* add flax imports
* make style
* fix typos
* return created state
* make style
* add torch/flax imports
* docs
* fixed typo
* remove tensor_format
* round instead of cast
* ets is jnp array
* remove copy
-
Nathan Lambert authored
* initial attempt at solving * fix pndm power of 3 inference_step * add power of 3 test * fix index in pndm test, remove ddim test * add comments, change to round()
-
Nathan Lambert authored
* update scheduler docs TODOs, fix typos * fix another typo
-
- 08 Sep, 2022 5 commits
-
Patrick von Platen authored
* Update black * update table
-
Patrick von Platen authored
* [Outputs] Improve syntax * improve more * fix docstring return * correct all * up Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
-
Pedro Cuenca authored
* Initial support for mps in Stable Diffusion pipeline.
* Initial "warmup" implementation when using mps.
* Make some deterministic tests pass with mps.
* Disable training tests when using mps.
* SD: generate latents in CPU then move to device. This is especially important when using the mps device, because generators are not supported there. See for example https://github.com/pytorch/pytorch/issues/84288. In addition, the other pipelines seem to use the same approach: generate the random samples then move to the appropriate device. After this change, generating an image in MPS produces the same result as when using the CPU, if the same seed is used.
* Remove prints.
* Pass AutoencoderKL test_output_pretrained with mps. Sampling from `posterior` must be done in CPU.
* Style
* Do not use torch.long for log op in mps device.
* Perform incompatible padding ops in CPU. UNet tests now pass. See https://github.com/pytorch/pytorch/issues/84535
* Style: fix import order.
* Remove unused symbols.
* Remove MPSWarmupMixin, do not apply automatically. We do apply warmup in the tests, but not during normal use. This adopts some PR suggestions by @patrickvonplaten.
* Add comment for mps fallback to CPU step.
* Add README_mps.md for mps installation and use.
* Apply `black` to modified files.
* Restrict README_mps to SD, show measures in table.
* Make PNDM indexing compatible with mps. Addresses #239.
* Do not use float64 when using LDMScheduler. Fixes #358.
* Fix typo identified by @patil-suraj Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Adapt example to new output style.
* Restore 1:1 results reproducibility with CompVis. However, mps latents need to be generated in CPU because generators don't work in the mps device.
* Move PyTorch nightly to requirements.
* Adapt `test_scheduler_outputs_equivalence` to mps.
* mps: skip training tests instead of ignoring silently.
* Make VQModel tests pass on mps.
* mps ddim tests: warmup, increase tolerance.
* ScoreSdeVeScheduler indexing made mps compatible.
* Make ldm pipeline tests pass using warmup.
* Style
* Simplify casting as suggested in PR.
* Add Known Issues to readme.
* `isort` import order.
* Remove _mps_warmup helpers from ModelMixin. And just make changes to the tests.
* Skip tests using unittest decorator for consistency.
* Remove temporary var.
* Remove spurious blank space.
* Remove unused symbol.
* Remove README_mps.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
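A sketch of the "generate latents in CPU then move to device" workaround described above (shape illustrative):
```python
import torch

# torch.Generator isn't supported on mps, so seed and sample on CPU,
# then move the latents to the target device; for a given seed the
# result then matches the CPU output.
device = "mps" if torch.backends.mps.is_available() else "cpu"
generator = torch.Generator(device="cpu").manual_seed(42)
latents = torch.randn((1, 4, 64, 64), generator=generator, device="cpu")
latents = latents.to(device)
```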
-
Daniel Hug authored
Add typing to scheduling_sde_ve init, set_timesteps, and set_sigmas functions Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Nathan Lambert authored
* init schedulers docs
* add some docstrings, fix sidebar formatting
* add docstrings
* [Type hint] PNDM schedulers (#335)
* [Type hint] PNDM Schedulers
* ran make style
* updated timesteps type hint
* apply suggestions from code review
* ran make style
* removed unused import
* [Type hint] scheduling ddim (#343)
* [Type hint] scheduling ddim
* apply suggestions from code review; apply suggestions to also return the return type Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* update class docstrings
* add docstrings
* missed merge edit
* add general docs page
* modify headings for right sidebar
Co-authored-by: Partho <parthodas6176@gmail.com>
Co-authored-by: Santiago Víquez <santi.viquez@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 05 Sep, 2022 1 commit
-
Santiago Víquez authored
* [Type hint] scheduling karras ve * [Type hint] scheduling lms discrete
-