- 03 Oct, 2022 6 commits
-
Patrick von Platen authored
* [Utils] Add deprecate function * move to deprecation utils file * assorted fixes and touch-ups
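A deprecation helper like this typically wraps a warning plus an end-of-life version check. A minimal sketch of the idea; the name `deprecate` matches the commit, but the signature and behavior here are assumptions, not the actual `deprecation_utils` code:

```python
import warnings


def deprecate(name: str, removed_in: str, message: str, current_version: str = "0.4.0"):
    """Warn that `name` is deprecated; raise once `current_version` reaches `removed_in`.

    Hypothetical sketch: the real diffusers helper may differ in signature and behavior.
    """
    if current_version >= removed_in:  # naive string compare; real code would parse versions
        raise ValueError(f"`{name}` was removed in version {removed_in}. {message}")
    warnings.warn(
        f"`{name}` is deprecated and will be removed in version {removed_in}. {message}",
        FutureWarning,
    )


# Illustrative usage only:
deprecate("tensor_format", removed_in="0.5.0", message="Schedulers are torch-only now.")
```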
-
Anton Lozhkov authored
* [CI] Localize the HF cache * pip cache * de-env * refactor matrix * fix fast cache * fewer ONNX steps * revert * revert pip cache * remove debugging trigger
-
Patrick von Platen authored
-
Pedro Cuenca authored
* Don't use `load_state_dict` if torch is not installed.
* Define `SchedulerOutput` to use torch or flax arrays.
* Don't import LMSDiscreteScheduler without torch.
* Create distinct FlaxSchedulerOutput.
* Additional changes required for FlaxSchedulerMixin.
* Do not import torch pipelines in Flax.
* Revert "Define `SchedulerOutput` to use torch or flax arrays." (reverts commit f653140134b74d9ffec46d970eb46925fe3a409d)
* Prefix Flax scheduler outputs for consistency.
* FlaxSchedulerOutput is now a dataclass.
* Don't use f-string without placeholders.
* make style; style (docstrings); add blank line.
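The distinct Flax output class keeps the torch and JAX worlds from importing each other. A sketch matching the description above; the field name is assumed and the real class may inherit from a shared base:

```python
from dataclasses import dataclass

import jax.numpy as jnp


@dataclass
class FlaxSchedulerOutput:
    # Denoised sample for the previous timestep, as a JAX array rather than a
    # torch.Tensor, so Flax schedulers never need torch installed.
    prev_sample: jnp.ndarray
```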
-
Krishna Penukonda authored
Fixed type annotations on `StableDiffusionPipeline.__call__`
-
Pedro Cuenca authored
* Flax: add shape argument to set_timesteps * style
-
- 02 Oct, 2022 1 commit
-
James R T authored
* Add callback parameters for Stable Diffusion pipelines
* Lint code with `black --preview`
* Refactor callback implementation for Stable Diffusion pipelines
* Fix missing imports
* Fix documentation format
* Add kwargs parameter to standardize with other pipelines
* Modify Stable Diffusion pipeline callback parameters
* Remove unused imports
* Change types for timestep and onnx latents
* Fix docstring style
* Return decode_latents and run_safety_checker back into __call__
* Add intermediate state tests for Stable Diffusion pipelines
* Fix intermediate state tests for Stable Diffusion pipelines

Signed-off-by: James R T <jamestiotio@gmail.com>
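The callback API added here lets you inspect intermediate latents every `callback_steps` denoising steps. Usage looks roughly like this (the model id is an assumption for illustration):

```python
import torch
from diffusers import StableDiffusionPipeline


def on_step(step: int, timestep: int, latents: torch.FloatTensor) -> None:
    # Invoked every `callback_steps` steps with the current latents.
    print(f"step={step} timestep={timestep} latents_norm={latents.norm().item():.2f}")


pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
image = pipe("a photo of an astronaut", callback=on_step, callback_steps=5).images[0]
```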
-
- 30 Sep, 2022 4 commits
-
Nouamane Tazi authored
* revert using baddbmm in attention - to fix `test_stable_diffusion_memory_chunking` test * styling
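For context, the two formulations compute the same scaled attention scores; `baddbmm` fuses the scaling into one kernel but, per the revert, interacted badly with the memory-chunking test. A standalone illustration with arbitrary shapes:

```python
import torch

query = torch.randn(2, 64, 40)   # (batch*heads, seq_len, head_dim)
key = torch.randn(2, 64, 40)
scale = 40 ** -0.5

# Two equivalent ways to compute scaled attention scores:
scores_matmul = torch.matmul(query, key.transpose(-1, -2)) * scale
scores_baddbmm = torch.baddbmm(
    torch.empty(2, 64, 64),      # ignored because beta=0
    query,
    key.transpose(-1, -2),
    beta=0,
    alpha=scale,
)
assert torch.allclose(scores_matmul, scores_baddbmm, atol=1e-5)
```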
-
Ryan Russell authored
refactor: update ldm-bert `config.json` url
Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
Josh Achiam authored
* Allow resolutions that are not multiples of 64 * ran black * fix bug * add test * more explanation * more comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
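Pipelines previously hard-required multiples of 64, but the UNet/VAE only need dimensions divisible by the total downsampling factor (8 for Stable Diffusion). A sketch of the idea, not the exact code added:

```python
def check_resolution(height: int, width: int, down_factor: int = 8) -> None:
    # Each downsampling stage halves the spatial resolution, so height and
    # width only need to be divisible by the cumulative factor (8), not by 64.
    if height % down_factor != 0 or width % down_factor != 0:
        raise ValueError(
            f"`height` and `width` must be divisible by {down_factor} "
            f"but are {height} and {width}."
        )


check_resolution(512, 520)  # 520 is not a multiple of 64, but it is valid now
```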
-
Nouamane Tazi authored
* initial commit; make UNet stream capturable; try to fix noise_pred value; remove cuda graph and keep NB
* non-blocking UNet with PNDMScheduler; make timesteps np arrays for the pndm scheduler because lists don't get formatted to tensors in `self.set_format`; make max async in pndm
* use channels-last format in the UNet; avoid moving the timesteps device in each UNet call; avoid a memcpy op in `get_timestep_embedding`
* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`, then replace it with `memory_format` for more generality; revert the channels-last changes to leave them for another PR
* remove non_blocking when moving input ids to device; remove blocking from all .to() operations at the beginning of the pipeline; fix merging
* model can run in other precisions without autocast
* attn refactoring; revert "attn refactoring" (reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb)
* use `baddbmm` instead of `matmul` in attention for better perf; later revert to using `matmul` in attention's forward
* remove all reshapes to test perf; revert (reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd)
* add shapes comments; revert (reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4)
* hardcode what's needed for jitting; revert (reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a)
* remove the restriction to run conv_norm in fp32: first reverted (cec592890c32da3d1b78d38b49e4307aedf459b9), then reinstated since no quality loss was noticed (reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213)
* add more optimization techniques to the docs
* `scheduler.timesteps` are now arrays so we don't need .to(); remove useless .type()
* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`; later revert to using max in the K-LMS inpaint pipeline test
* move scheduler timesteps to the correct device if they are tensors; add device to `set_timesteps` in the LMSD scheduler
* `self.scheduler.set_timesteps` now uses the device arg for schedulers that accept it; revert (reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899); remove kwargs from schedulers' `set_timesteps`
* move timesteps to the correct device before the loop in the SD pipeline; apply the same fix to the other SD pipelines
* UNet now accepts tensor timesteps even on the wrong device, to avoid errors: it shouldn't affect performance if the timesteps are already on the correct device, but it does slow things down if they're on the wrong device
* fix pipeline when timesteps are arrays with strides
* cleanup comments; apply suggestions; quick fix; make quality; styling
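One of the recurring themes above in miniature: move the scheduler's timesteps to the execution device once, before the denoising loop, instead of paying a host-to-device copy on every UNet call. A sketch where `unet` and `scheduler` stand in for the real pipeline components:

```python
import torch


def denoise(unet: torch.nn.Module, scheduler, latents: torch.Tensor,
            text_emb: torch.Tensor) -> torch.Tensor:
    # One device transfer up front, rather than one per step inside the loop.
    timesteps = torch.as_tensor(scheduler.timesteps, device=latents.device)
    for t in timesteps:
        noise_pred = unet(latents, t, encoder_hidden_states=text_emb).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```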
-
- 29 Sep, 2022 3 commits
-
Partho authored
renamed `x` to `hidden_states`
-
V Vishnu Anirudh authored
* correcting the beta value assignment * updating the DDIM and LMSDiscrete Flax schedulers * bringing back the changes that were lost during the main-branch merge
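For orientation, these schedulers build their beta schedules along the lines of the "scaled linear" variant, which interpolates in sqrt-space and then squares. A sketch of that formula; whether this is the exact assignment fixed here is an assumption, since the log doesn't show the diff:

```python
import numpy as np


def scaled_linear_betas(beta_start: float = 0.00085,
                        beta_end: float = 0.012,
                        num_train_timesteps: int = 1000) -> np.ndarray:
    # Interpolate in sqrt(beta) space, then square: the "scaled linear" schedule.
    return np.linspace(beta_start ** 0.5, beta_end ** 0.5, num_train_timesteps) ** 2


betas = scaled_linear_betas()
print(betas[0], betas[-1])  # ~0.00085 ... ~0.012
```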
-
Pedro Cuenca authored
Flax from_pretrained: clean up `mismatched_keys`. Originally removed in 73e0bc692c5761e55faff39c80a26d7a3cfc748c.
-
- 28 Sep, 2022 1 commit
-
Anton Lozhkov authored
* Fix the LMS pytorch regression * Copy over the changes from #637 * Fix betas test
-
- 27 Sep, 2022 10 commits
-
Pedro Cuenca authored
* Replace deprecation warning f-string with class name. When `__repr__` is invoked on the instance, serialization of `config_dict` fails because it contains `kwargs` of type `<class inspect._empty>`.
* Revert "Replace deprecation warning f-string with class name." (reverts commit 1c4eb8cb104374bd84e43865fc3865862473799c)
* Do not attempt to register `"kwargs"` as an attribute; otherwise serialization could fail. This may happen for other attributes too, so we should create a better solution.
-
Anton Lozhkov authored
fix np onnx
-
Kashif Rasul authored
* add deprecation warning for schedulers * fix format
-
Suraj Patil authored
fix add noise
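`add_noise` implements the closed-form forward diffusion step. A sketch of the standard formula, with tensor shapes simplified to 1-D (the real method reshapes for broadcasting):

```python
import torch


def add_noise(original: torch.Tensor, noise: torch.Tensor,
              timesteps: torch.Tensor, alphas_cumprod: torch.Tensor) -> torch.Tensor:
    # Standard DDPM forward process:
    #   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
    sqrt_one_minus = (1.0 - alphas_cumprod[timesteps]) ** 0.5
    return sqrt_alpha_prod * original + sqrt_one_minus * noise
```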
-
Kashif Rasul authored
* pytorch only schedulers; pytorch only ddpm
* remove match_shape; match shape instead of assuming image shapes
* remove SchedulerMixin (later: continue to use mixin)
* remove numpy from karras_ve, lms_discrete, and pndm; remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format; sigmas has to be a torch tensor
* remove set_format from the readme, docs, and pipelines
* update tests; fix types; fix typos; fix style
* fix imports; remove unused imports
* update call to add_noise; use math instead of numpy; fix t_index
* remove commented-out numpy tests
* timesteps need to be discrete; cast timesteps to int in the flax scheduler too
* fix device mismatch issue; small fix
* Update src/diffusers/schedulers/scheduling_pndm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Yih-Dar authored
* Fix SpatialTransformer
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Pedro Cuenca authored
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline
* todo comment; fix imports; add dummies; fix empty init; make pipeline work
* Allow dtype to be overridden on model load. This may be a temporary solution until #567 is addressed.
* Convert params to bfloat16 or fp16 after loading. This deals with the weights, not the model.
* Use Flax schedulers (typing, docstring)
* PNDM: replace control flow with jax functions. Otherwise jitting/parallelization don't work properly, as they don't know how to deal with traced objects. `step_prk` was temporarily removed.
* Pass latents shape to scheduler `set_timesteps()`. PNDMScheduler uses it to reserve space; other schedulers will just ignore it.
* Wrap model imports inside availability checks.
* Optionally return state in from_config. Useful for Flax schedulers.
* Do not convert model weights to dtype.
* Re-enable PRK steps with a functional implementation. Returned values are not yet verified for correctness.
* Remove leftover has_state var; remove unused comments; use zeros instead of empty; apply suggestions (list -> tuple); make style

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
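Replacing Python `if`s with `jnp.where`/`jax.lax` primitives is what makes the PNDM step jittable: under `jax.jit` the condition is a traced value, so ordinary branching fails. A toy illustration of the pattern, not the scheduler code itself:

```python
import jax
import jax.numpy as jnp


@jax.jit
def step(counter: jnp.ndarray, x: jnp.ndarray) -> jnp.ndarray:
    # `if counter < 1: ...` would raise a tracer error under jit;
    # jnp.where evaluates both branches and selects, which traces fine.
    return jnp.where(counter < 1, x * 2.0, x + 1.0)


print(step(jnp.array(0), jnp.array(3.0)))  # 6.0
print(step(jnp.array(5), jnp.array(3.0)))  # 4.0
```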
-
Pedro Cuenca authored
-
Pedro Cuenca authored
* Remove deprecated `torch_device` kwarg. * Remove unused imports.
-
Yuta Hayashibe authored
* Return encoded texts by DiffusionPipelines * Updated README to show how to use `encoded_text_input` * Reverted examples in README.md * Reverted all * Warning for long prompts * Fix bugs * Formatted
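The long-prompt warning works by tokenizing without truncation first and comparing against the model's limit. A sketch of the idea; the actual pipeline code may differ in detail:

```python
import logging

logger = logging.getLogger(__name__)


def encode_prompt(tokenizer, prompt: str, max_length: int = 77):
    # Tokenize untruncated first so we can tell the user exactly what gets cut off.
    untruncated = tokenizer(prompt).input_ids
    if len(untruncated) > max_length:
        removed = tokenizer.decode(untruncated[max_length:])
        logger.warning(
            "Input was truncated to %d tokens; removed part: %r", max_length, removed
        )
    return tokenizer(prompt, truncation=True, max_length=max_length).input_ids
```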
-
- 24 Sep, 2022 2 commits
-
Grigory Sizov authored
fix formula for noise levels in karras scheduler and tests
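For reference, the noise levels in Karras et al. (2022), eq. 5, interpolate between sigma_max and sigma_min in sigma^(1/rho) space. A sketch of that formula; whether it matches the patched scheduler line-for-line is an assumption:

```python
import numpy as np


def karras_sigmas(n: int, sigma_min: float = 0.002, sigma_max: float = 80.0,
                  rho: float = 7.0) -> np.ndarray:
    # Karras et al. (2022), eq. 5: interpolate in sigma**(1/rho) space.
    ramp = np.linspace(0, 1, n)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho


sigmas = karras_sigmas(10)
print(sigmas[0], sigmas[-1])  # ~80.0 ... ~0.002
```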
-
Ryan Russell authored
* docs: `src/diffusers` readability improvements
* docs: `make style` lint

Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
- 23 Sep, 2022 6 commits
-
Pedro Cuenca authored
Fix "ort is not defined" issue.
-
cloudhan authored
-
Ryan Russell authored
* refactor: pipelines readability improvements
* docs: remove todo comment from flax pipeline

Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
Abdullah Alfaraj authored
The result of running the pipeline is stored in `StableDiffusionPipelineOutput.images`.
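In other words, you index the `images` field of the returned output object (the model id below is an assumption for illustration):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
output = pipe("a watercolor of a lighthouse")  # StableDiffusionPipelineOutput
image = output.images[0]                       # a PIL.Image.Image
image.save("lighthouse.png")
```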
-
Younes Belkada authored
* documenting `attention_flax.py`, `embeddings_flax.py`, and `unet_blocks_flax.py`
* add new objects to the doc page
* document `vae_flax.py`
* modify `unet_2d_condition_flax.py`
* apply suggestions from code review; fix indentation and typos
* Update src/diffusers/models/vae_flax.py
* make style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
-
Ryan Russell authored
Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
- 22 Sep, 2022 4 commits
-
Jonathan Whitaker authored
* Add pred_original_sample to the SchedulerOutput of the DDPMScheduler, DDIMScheduler, LMSDiscreteScheduler, and KarrasVeScheduler step methods so we can access the predicted denoised outputs
* Give DDPMScheduler, DDIMScheduler, and LMSDiscreteScheduler their own output dataclasses, so the default SchedulerOutput in scheduling_utils does not need pred_original_sample as an optional extra
* Reorder library imports to follow the standard (didn't get the order quite right the first time)
* Fix the name of LMSDiscreteSchedulerOutput
* Add the extra libs needed for `make style` to fully work
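The per-scheduler output classes look roughly like this; a sketch matching the description above, with inheritance details omitted:

```python
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class DDPMSchedulerOutput:
    # Sample for the previous timestep (x_{t-1}); always present.
    prev_sample: torch.FloatTensor
    # The model's predicted fully-denoised sample (x_0), newly exposed here.
    pred_original_sample: Optional[torch.FloatTensor] = None
```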
-
Suraj Patil authored
* add grad ckpt to downsample blocks * make it work * don't pass gradient_checkpointing to upsample block * add tests for UNet2DConditionModel * add test_gradient_checkpointing * add gradient_checkpointing for up and down blocks * add functions to enable and disable grad ckpt * remove the forward argument * better naming * make supports_gradient_checkpointing private
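The enable/disable pair gives users a one-line switch that trades recomputation for activation memory during training. Usage roughly as follows (the model id is an assumption):

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
)
unet.enable_gradient_checkpointing()   # recompute activations in backward to save memory
# ... training step ...
unet.disable_gradient_checkpointing()  # restore the default behavior
```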
-
Mishig Davaadorj authored
-
Mishig Davaadorj authored
-
- 21 Sep, 2022 3 commits
-
Anton Lozhkov authored
* Handle the PIL.Image.Resampling deprecation * style
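`PIL.Image.LANCZOS` and friends moved under `PIL.Image.Resampling` in Pillow 9.1, with the old module-level constants deprecated. A compatibility shim along these lines handles both (a sketch, not necessarily the exact code added):

```python
import PIL.Image

if hasattr(PIL.Image, "Resampling"):  # Pillow >= 9.1
    PIL_INTERPOLATION = {
        "lanczos": PIL.Image.Resampling.LANCZOS,
        "bilinear": PIL.Image.Resampling.BILINEAR,
    }
else:  # older Pillow: module-level constants
    PIL_INTERPOLATION = {
        "lanczos": PIL.Image.LANCZOS,
        "bilinear": PIL.Image.BILINEAR,
    }
```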
-
Ryan Russell authored
Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
Anton Lozhkov authored
-