"vscode:/vscode.git/clone" did not exist on "8c1db721135e61d4a3dfbc4e2bbe05cd50cfded1"
- 30 Sep, 2022 (2 commits)
-
Josh Achiam authored
* Allow resolutions that are not multiples of 64
* ran black
* fix bug
* add test
* more explanation
* more comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
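A hedged illustration of what this change enables, not the PR's exact code: after it, the pipeline only needs `height` and `width` divisible by 8 (the VAE downsampling factor), rather than by 64. The values below are hypothetical.

    height, width = 512, 448  # 448 is not a multiple of 64, but is a multiple of 8
    if height % 8 != 0 or width % 8 != 0:
        raise ValueError(f"`height` and `width` must be divisible by 8 but are {height} and {width}.")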
-
Nouamane Tazi authored
* initial commit
* make UNet stream capturable
* try to fix noise_pred value
* remove cuda graph and keep NB
* non-blocking UNet with PNDMScheduler
* make timesteps np arrays for the PNDM scheduler, because lists don't get formatted to tensors in `self.set_format`
* make max async in pndm
* use channels_last format in the UNet
* avoid moving timesteps device in each UNet call
* avoid memcpy op in `get_timestep_embedding`
* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
* update TODO
* replace `channels_last` kwarg with `memory_format` for more generality
* revert the channels_last changes to leave them for another PR
* remove non_blocking when moving input ids to device
* remove blocking from all .to() operations at the beginning of the pipeline
* fix merging
* fix merging
* model can run in other precisions without autocast
* attn refactoring
* Revert "attn refactoring" This reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb.
* remove restriction to run conv_norm in fp32
* use `baddbmm` instead of `matmul` in attention for better perf
* removing all reshapes to test perf
* Revert "removing all reshapes to test perf" This reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd.
* add shapes comments
* hardcode what's needed for jitting
* Revert "hardcode what's needed for jitting" This reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a.
* Revert "remove restriction to run conv_norm in fp32" This reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9.
* revert using baddbmm in attention's forward
* cleanup comment
* remove restriction to run conv_norm in fp32; no quality loss was noticed. This reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213.
* add more optimization techniques to docs
* Revert "add shapes comments" This reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4.
* apply suggestions
* make quality
* apply suggestions
* styling
* `scheduler.timesteps` are now arrays, so we don't need .to()
* remove useless .type()
* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
* move scheduler timesteps to the correct device if they are tensors
* add device to `set_timesteps` in the LMSD scheduler
* `self.scheduler.set_timesteps` now uses the device arg for schedulers that accept it
* quick fix
* styling
* remove kwargs from schedulers' `set_timesteps`
* revert to using max in the K-LMS inpaint pipeline test
* Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it" This reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899.
* move timesteps to the correct device before the loop in the SD pipeline
* apply previous fix to other SD pipelines
* UNet now accepts tensor timesteps even on the wrong device, to avoid errors: it shouldn't affect performance if timesteps are already on the correct device, but it does slow things down if they're on the wrong device
* fix pipeline when timesteps are arrays with strides
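Two of the optimizations above, sketched in isolation (a minimal, hedged illustration with made-up shapes, not the pipeline code): the channels_last memory format for conv-heavy modules, and `torch.baddbmm` with `alpha` folding the attention scale into the batched matmul.

    import torch

    # channels_last (NHWC) memory format for a convolutional module:
    conv = torch.nn.Conv2d(4, 8, 3).to(memory_format=torch.channels_last)

    # Attention scores via one fused baddbmm instead of matmul + scale.
    # Hypothetical shapes: (batch * heads, seq_len, head_dim).
    query = torch.randn(2, 64, 40)
    key = torch.randn(2, 64, 40)
    scale = query.shape[-1] ** -0.5
    attn_scores = torch.baddbmm(
        torch.empty(query.shape[0], query.shape[1], key.shape[1]),
        query,
        key.transpose(1, 2),
        beta=0,       # ignore the (uninitialized) input tensor
        alpha=scale,  # fold 1/sqrt(head_dim) into the matmul
    )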
-
- 29 Sep, 2022 (6 commits)
-
Partho authored
renamed x to hidden_states
-
V Vishnu Anirudh authored
* correcting the beta value assignment
* updating DDIM and LMSDiscreteFlax schedulers
* bringing back the changes that were lost as part of main branch merge
-
Pedro Cuenca authored
Flax from_pretrained: clean up `mismatched_keys`. Originally removed in 73e0bc692c5761e55faff39c80a26d7a3cfc748c.
-
Suraj Patil authored
* lower tolerance
* put model in eval mode
-
Suraj Patil authored
update transformers version in example
-
Tanishq Abraham authored
-
- 28 Sep, 2022 (3 commits)
-
Suraj Patil authored
take the correct text embeddings
-
Isamu Isozaki authored
* Added script to save during training
* Suggested changes
-
Anton Lozhkov authored
* Fix the LMS pytorch regression
* Copy over the changes from #637
* Copy over the changes from #637
* Fix betas test
-
- 27 Sep, 2022 (16 commits)
-
Pedro Cuenca authored
* Replace deprecation warning f-string with class name. When `__repr__` is invoked on the instance, serialization of `config_dict` fails because it contains `kwargs` of type `<class inspect._empty>`.
* Revert "Replace deprecation warning f-string with class name." This reverts commit 1c4eb8cb104374bd84e43865fc3865862473799c.
* Do not attempt to register `"kwargs"` as an attribute. Otherwise serialization could fail. This may happen for other attributes, so we should create a better solution.
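A minimal sketch of the last bullet's idea, with a hypothetical helper (not the diffusers `ConfigMixin` code): skip the catch-all `kwargs` entry, and anything still bound to `inspect._empty`, before storing init arguments as attributes, so `__repr__` and serialization never hit an unserializable value.

    import inspect

    def register_as_attributes(obj, init_args):
        # Hypothetical: store init args on the instance for later serialization,
        # skipping "kwargs" and unset (inspect._empty) values.
        for name, value in init_args.items():
            if name == "kwargs" or value is inspect.Parameter.empty:
                continue
            setattr(obj, name, value)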
-
Anton Lozhkov authored
fix np onnx
-
Suraj Patil authored
remove set_format from pipeline
-
Kashif Rasul authored
* add dep. warning for schedulers
* fix format
-
Suraj Patil authored
fix add noise
-
Suraj Patil authored
update install section
-
Suraj Patil authored
don't pass tensor_format
-
Kashif Rasul authored
* pytorch only schedulers
* fix style
* remove match_shape
* pytorch only ddpm
* remove SchedulerMixin
* remove numpy from karras_ve
* fix types
* remove numpy from lms_discrete
* remove numpy from pndm
* fix typo
* remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format
* fix style
* sigmas has to be a torch tensor
* removed set_format in readme
* remove set_format from docs
* remove set_format from pipelines
* update tests
* fix typo
* continue to use mixin
* fix imports
* removed unused imports
* match shape instead of assuming image shapes
* remove import typo
* update call to add_noise
* use math instead of numpy
* fix t_index
* removed commented-out numpy tests
* timesteps need to be discrete
* cast timesteps to int in the flax scheduler too
* fix device mismatch issue
* small fix
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
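A hedged sketch of what the PyTorch-only surface looks like after this change (an illustrative toy scheduler, not the real API): timesteps live in a torch tensor from the start, and `set_timesteps` takes a device, so pipelines no longer need `set_format` or manual `.to()` calls.

    import torch

    class ToyScheduler:
        # Illustrative only: torch-native timesteps, placed on the target device.
        def set_timesteps(self, num_inference_steps, device=None):
            self.timesteps = torch.linspace(
                999, 0, num_inference_steps, device=device
            ).long()

    scheduler = ToyScheduler()
    scheduler.set_timesteps(50, device="cpu")
    print(scheduler.timesteps.device)  # already on the requested device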
-
Zhenhuan Liu authored
* Add training example for DreamBooth.
* Fix bugs.
* Update readme and default hyperparameters.
* Reformatting code with black.
* Update for multi-gpu training.
* Apply suggestions from code review
* improve sampling
* fix autocast
* improve sampling more
* fix saving
* actually fix saving
* fix saving
* improve dataset
* fix collate fn
* fix collate_fn
* fix collate fn
* fix key name
* fix dataset
* fix collate fn
* concat batch in collate fn
* add grad ckpt
* add option for 8bit adam
* do two forward passes for prior preservation
* Revert "do two forward passes for prior preservation" This reverts commit 661ca4677e6dccc4ad596c2ee6ca4baad4159e95.
* add option for prior_loss_weight
* add option for clip grad norm
* add more comments
* update readme
* update readme
* Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add docstr for dataset
* update the saving logic
* Update examples/dreambooth/README.md
* remove unused imports
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
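The prior-preservation bullets above boil down to a weighted two-term loss. A minimal sketch under assumed names (the batch is assumed to concatenate instance and class examples; `prior_loss_weight` matches the option named in the commit):

    import torch
    import torch.nn.functional as F

    def dreambooth_loss(model_pred, target, prior_loss_weight=1.0):
        # Assumption: first half of the batch = instance images,
        # second half = class ("prior") images.
        instance_pred, prior_pred = torch.chunk(model_pred, 2, dim=0)
        instance_target, prior_target = torch.chunk(target, 2, dim=0)
        loss = F.mse_loss(instance_pred, instance_target)
        prior_loss = F.mse_loss(prior_pred, prior_target)
        return loss + prior_loss_weight * prior_loss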
-
Yih-Dar authored
* Fix SpatialTransformer
* Fix SpatialTransformer
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Pedro Cuenca authored
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline
* todo comment
* Fix imports
* Fix imports
* add dummies
* Fix empty init
* make pipeline work
* up
* Allow dtype to be overridden on model load. This may be a temporary solution until #567 is addressed.
* Convert params to bfloat16 or fp16 after loading. This deals with the weights, not the model.
* Use Flax schedulers (typing, docstring)
* PNDM: replace control flow with jax functions. Otherwise jitting/parallelization don't work properly as they don't know how to deal with traced objects. I temporarily removed `step_prk`.
* Pass latents shape to scheduler set_timesteps(). PNDMScheduler uses it to reserve space, other schedulers will just ignore it.
* Wrap model imports inside availability checks.
* Optionally return state in from_config. Useful for Flax schedulers.
* Do not convert model weights to dtype.
* Re-enable PRK steps with functional implementation. Values returned still not verified for correctness.
* Remove leftover has_state var.
* make style
* Apply suggestion list -> tuple Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Apply suggestion list -> tuple Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Remove unused comments.
* Use zeros instead of empty.
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
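The "replace control flow with jax functions" bullet refers to the standard trick of swapping Python `if` statements for `jax.lax.cond`, so the step function stays jittable when its inputs are traced values. A hedged toy example (not the actual PNDM code):

    import jax
    import jax.numpy as jnp

    def step(counter, sample):
        # A Python `if counter < 3:` fails under jit when `counter` is traced;
        # jax.lax.cond selects a branch lazily instead.
        return jax.lax.cond(
            counter < 3,
            lambda s: s * 2.0,  # illustrative "warmup" branch
            lambda s: s + 1.0,  # illustrative main branch
            sample,
        )

    print(jax.jit(step)(jnp.asarray(1), jnp.ones(2)))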
-
Pedro Cuenca authored
-
Ryan Russell authored
Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
Pedro Cuenca authored
* Remove deprecated `torch_device` kwarg.
* Remove unused imports.
-
Abdullah Alfaraj authored
the link points to an old location of the train_unconditional.py file
-
Yuta Hayashibe authored
* Return encoded texts by DiffusionPipelines
* Updated README to show how to use encoded_text_input
* Reverted examples in README.md
* Reverted all
* Warning for long prompts
* Fix bugs
* Formatted
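The long-prompt warning works by comparing the untruncated token length against the tokenizer's limit. A hedged sketch of the pattern (illustrative, not the pipeline's exact code):

    from transformers import CLIPTokenizer

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    prompt = "a very detailed photo of a cat " * 20  # deliberately too long

    untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
    if untruncated_ids.shape[-1] > tokenizer.model_max_length:
        removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length:])
        print(f"Prompt will be truncated; dropped tokens: {removed_text}")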
-
- 24 Sep, 2022 (3 commits)
-
Anton Lozhkov authored
-
Grigory Sizov authored
fix formula for noise levels in karras scheduler and tests
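For context, the Karras et al. (2022) noise-level schedule this fix concerns is usually written sigma_i = (sigma_max^(1/rho) + i/(N-1) * (sigma_min^(1/rho) - sigma_max^(1/rho)))^rho. A small sketch with illustrative parameter values:

    import numpy as np

    def karras_sigmas(n, sigma_min=0.02, sigma_max=100.0, rho=7.0):
        # Interpolate between sigma_max and sigma_min in rho-warped space.
        ramp = np.linspace(0, 1, n)
        min_inv_rho = sigma_min ** (1 / rho)
        max_inv_rho = sigma_max ** (1 / rho)
        return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho

    print(karras_sigmas(5))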
-
Ryan Russell authored
* docs: `src/diffusers` readability improvements
* docs: `make style` lint
Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
- 23 Sep, 2022 (6 commits)
-
Pedro Cuenca authored
Fix "ort is not defined" issue.
-
cloudhan authored
-
Ryan Russell authored
* refactor: pipelines readability improvements
* docs: remove todo comment from flax pipeline
Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
Abdullah Alfaraj authored
the result of running the pipeline is stored in StableDiffusionPipelineOutput.images
-
Younes Belkada authored
* documenting `attention_flax.py` file
* documenting `embeddings_flax.py`
* documenting `unet_blocks_flax.py`
* Add new objs to doc page
* document `vae_flax.py`
* Apply suggestions from code review
* modify `unet_2d_condition_flax.py`
* make style
* Apply suggestions from code review
* make style
* Apply suggestions from code review
* fix indent
* fix typo
* fix indent unet
* Update src/diffusers/models/vae_flax.py
* Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Ryan Russell authored
Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
- 22 Sep, 2022 (4 commits)
-
Jonathan Whitaker authored
* Adding pred_original_sample to the SchedulerOutput of the DDPMScheduler, DDIMScheduler, LMSDiscreteScheduler, and KarrasVeScheduler step methods, so we can access the predicted denoised outputs
* Gave DDPMScheduler, DDIMScheduler and LMSDiscreteScheduler their own output dataclasses, so the default SchedulerOutput in scheduling_utils does not need pred_original_sample as an optional extra
* Reordered library imports to follow the standard
* didn't get the import order quite right apparently
* Forgot to change the name of LMSDiscreteSchedulerOutput
* Aha, needed some extra libs for make style to fully work
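A minimal sketch of the per-scheduler output dataclass this describes (field names follow the commit message; illustrative, not the exact class):

    from dataclasses import dataclass
    from typing import Optional
    import torch

    @dataclass
    class DDPMSchedulerOutput:
        # x_{t-1}: the result of the denoising step.
        prev_sample: torch.Tensor
        # The model's predicted fully denoised sample x_0, now exposed by `step`.
        pred_original_sample: Optional[torch.Tensor] = None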
-
Ryan Russell authored
Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
Suraj Patil authored
* add grad ckpt to downsample blocks
* make it work
* don't pass gradient_checkpointing to upsample block
* add tests for UNet2DConditionModel
* add test_gradient_checkpointing
* add gradient_checkpointing for up and down blocks
* add functions to enable and disable grad ckpt
* remove the forward argument
* better naming
* make supports_gradient_checkpointing private
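A hedged sketch of the enable/disable pattern (a toy block, not UNet2DConditionModel itself): a flag routes the forward pass through torch.utils.checkpoint, so activations are recomputed during backward rather than stored.

    import torch
    from torch.utils.checkpoint import checkpoint

    class ToyBlock(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.net = torch.nn.Linear(8, 8)
            self.gradient_checkpointing = False  # toggled by enable/disable helpers

        def forward(self, x):
            if self.training and self.gradient_checkpointing:
                # Trade compute for memory: recompute activations in backward.
                return checkpoint(self.net, x)
            return self.net(x)

    block = ToyBlock().train()
    block.gradient_checkpointing = True
    out = block(torch.randn(2, 8, requires_grad=True))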
-
Mishig Davaadorj authored
-