- 04 Oct, 2022 4 commits
-
-
Pi Esposito authored
* add accelerate to load models with a smaller memory footprint
* remove low_cpu_mem_usage as it is redundant
* move accelerate init weights context to modeling utils
* add test to ensure results are the same when loading with accelerate
* add tests to ensure RAM usage gets lower when using accelerate
* move accelerate logic to a single snippet under modeling utils and remove it from configuration utils
* format code to pass quality check
* fix imports with isort
* add accelerate to test extra deps
* only import accelerate if device_map is set to auto
* move accelerate availability check to diffusers import utils
* format code

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
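A minimal usage sketch of the low-memory loading path described above, assuming `device_map="auto"` is the switch that enables the accelerate-backed initialization (model id and dtype are illustrative):

```python
import torch
from diffusers import UNet2DConditionModel

# Sketch: with device_map="auto", weights can be materialized via accelerate
# instead of first allocating a randomly initialized copy of the model in RAM.
unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # illustrative model id
    subfolder="unet",
    torch_dtype=torch.float16,
    device_map="auto",
)
```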
-
Tanishq Abraham authored
* Update links in schedulers README.md
* Update src/diffusers/schedulers/README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Kashif Rasul authored
fix docstring (fixes #709)
-
Josh Achiam authored
* Conversion script * ran black * ran isort * remove unused import * use `map_location` so everything gets loaded onto CPU before conversion * ran black again * Update setup.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
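The `map_location` item above refers to loading the original checkpoint onto CPU before converting it; a hedged sketch of that pattern (the checkpoint path is a placeholder):

```python
import torch

# Load the source checkpoint onto CPU so conversion works without a GPU and
# tensors are not restored onto whatever device they were saved from.
checkpoint = torch.load("original_checkpoint.ckpt", map_location="cpu")  # placeholder path
state_dict = checkpoint.get("state_dict", checkpoint)  # some checkpoints nest weights under "state_dict"
```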
-
- 03 Oct, 2022 8 commits
-
-
Patrick von Platen authored
* [Utils] Add `deprecate` function * move it to a dedicated deprecation utils file * assorted follow-up fixes and style passes
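A rough, hypothetical sketch of what a `deprecate` helper of this kind can look like; this is not the library's actual signature, just the general pattern of emitting a removal warning for a deprecated argument:

```python
import warnings
from typing import Optional


def deprecate(name: str, removal_version: str, message: Optional[str] = None) -> None:
    """Hypothetical helper: warn that `name` is deprecated and will be removed."""
    warnings.warn(
        message or f"`{name}` is deprecated and will be removed in version {removal_version}.",
        FutureWarning,
        stacklevel=2,
    )


# Example: warn about a keyword argument slated for removal.
deprecate("tensor_format", removal_version="0.6.0", message="`tensor_format` is no longer needed.")
```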
-
Anton Lozhkov authored
* [CI] Localize the HF cache * pip cache * de-env * refactor matrix * fix fast cache * fewer ONNX steps * revert * revert pip cache * revert pip cache * remove debugging trigger
-
Patrick von Platen authored
-
Pedro Cuenca authored
* Don't use `load_state_dict` if torch is not installed.
* Define `SchedulerOutput` to use torch or flax arrays.
* Don't import LMSDiscreteScheduler without torch.
* Create distinct FlaxSchedulerOutput.
* Additional changes required for FlaxSchedulerMixin
* Do not import torch pipelines in Flax.
* Revert "Define `SchedulerOutput` to use torch or flax arrays." This reverts commit f653140134b74d9ffec46d970eb46925fe3a409d.
* Prefix Flax scheduler outputs for consistency.
* make style
* FlaxSchedulerOutput is now a dataclass.
* Don't use f-string without placeholders.
* Add blank line.
* Style (docstrings)
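A minimal sketch of the idea behind a separate Flax output class: a plain dataclass carrying JAX arrays instead of torch tensors (the field name here is illustrative, not necessarily the library's):

```python
from dataclasses import dataclass

import jax.numpy as jnp


@dataclass
class FlaxSchedulerOutput:
    """Sketch: a scheduler step output that holds a JAX array rather than a torch tensor."""

    prev_sample: jnp.ndarray
```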
-
Krishna Penukonda authored
Fixed type annotations on StableDiffusionPipeline::__call__
-
Pedro Cuenca authored
* Flax: add shape argument to set_timesteps * style
-
Patrick von Platen authored
-
Suraj Patil authored
fix applying clip grad norm
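For context, gradient-norm clipping in an accelerate-driven training loop is usually routed through the `Accelerator` so it behaves correctly under mixed precision and gradient accumulation; a hedged, self-contained sketch (the tiny model and hyperparameters are placeholders, not the example script's code):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 4)  # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

x = torch.randn(8, 4, device=accelerator.device)
loss = model(x).pow(2).mean()
accelerator.backward(loss)

# Clip through the accelerator (it handles gradient unscaling under mixed precision);
# only meaningful when gradients are actually being synchronized this step.
if accelerator.sync_gradients:
    accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)

optimizer.step()
optimizer.zero_grad()
```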
-
- 02 Oct, 2022 1 commit
-
-
James R T authored
* Add callback parameters for Stable Diffusion pipelines
* Lint code with `black --preview`
* Refactor callback implementation for Stable Diffusion pipelines
* Fix missing imports
* Fix documentation format
* Add kwargs parameter to standardize with other pipelines
* Modify Stable Diffusion pipeline callback parameters
* Remove useless imports
* Change types for timestep and ONNX latents
* Fix docstring style
* Return `decode_latents` and `run_safety_checker` back into `__call__`
* Remove unused imports
* Add intermediate state tests for Stable Diffusion pipelines
* Fix intermediate state tests for Stable Diffusion pipelines

Signed-off-by: James R T <jamestiotio@gmail.com>
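The feature being added is an optional per-step callback during denoising; a hedged usage sketch, assuming the callback receives the step index, the timestep, and the current latents (model id and interval are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline


def log_progress(step: int, timestep: int, latents: torch.FloatTensor) -> None:
    # Inspect the intermediate latents each time the callback fires.
    print(f"step={step} timestep={timestep} latents mean={latents.mean().item():.4f}")


pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")  # illustrative model id
image = pipe(
    "a photo of an astronaut riding a horse",
    callback=log_progress,
    callback_steps=5,  # invoke the callback every 5 denoising steps
).images[0]
```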
-
- 01 Oct, 2022 1 commit
-
-
Omar Sanseviero authored
* Fix BibTeX citation * Update README.md
-
- 30 Sep, 2022 7 commits
-
-
Nouamane Tazi authored
* revert using baddbmm in attention to fix the `test_stable_diffusion_memory_chunking` test * styling
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Nouamane Tazi authored
-
Ryan Russell authored
refactor: update ldm-bert `config.json` url
Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
Josh Achiam authored
* Allow resolutions that are not multiples of 64 * ran black * fix bug * add test * more explanation * more comments Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
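In practical terms, requested resolutions no longer have to be multiples of 64, though they still need to be divisible by 8 because of the VAE downsampling factor; a hedged usage sketch (model id and sizes are illustrative):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")  # illustrative model id

# 512x448 is not a multiple of 64, but each side is divisible by 8, so it is now accepted.
image = pipe("a mountain lake at sunrise", height=512, width=448).images[0]
```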
-
Nouamane Tazi authored
* initial commit
* make UNet stream capturable
* try to fix noise_pred value
* remove cuda graph and keep NB
* non-blocking unet with PNDMScheduler
* make timesteps np arrays for pndm scheduler because lists don't get formatted to tensors in `self.set_format`
* make max async in pndm
* use channels-last format in unet
* avoid moving timesteps device in each unet call
* avoid memcpy op in `get_timestep_embedding`
* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
* update TODO
* replace `channels_last` kwarg with `memory_format` for more generality
* revert the channels_last changes to leave it for another PR
* remove non_blocking when moving input ids to device
* remove blocking from all .to() operations at beginning of pipeline
* fix merging
* fix merging
* model can run in other precisions without autocast
* attn refactoring
* Revert "attn refactoring" This reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb.
* remove restriction to run conv_norm in fp32
* use `baddbmm` instead of `matmul` in attention for better perf
* removing all reshapes to test perf
* Revert "removing all reshapes to test perf" This reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd.
* add shapes comments
* hardcode what's needed for jitting
* Revert "hardcode what's needed for jitting" This reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a.
* Revert "remove restriction to run conv_norm in fp32" This reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9.
* revert using baddbmm in attention's forward
* cleanup comment
* remove restriction to run conv_norm in fp32; no quality loss was noticed. This reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213.
* add more optimization techniques to docs
* Revert "add shapes comments" This reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4.
* apply suggestions
* make quality
* apply suggestions
* styling
* `scheduler.timesteps` are now arrays so we don't need .to()
* remove useless .type()
* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
* move scheduler timesteps to correct device if tensors
* add device to `set_timesteps` in LMSD scheduler
* `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
* quick fix
* styling
* remove kwargs from schedulers `set_timesteps`
* revert to using max in K-LMS inpaint pipeline test
* Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it" This reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899.
* move timesteps to correct device before loop in SD pipeline
* apply previous fix to other SD pipelines
* UNet now accepts tensor timesteps even on the wrong device, to avoid errors; it shouldn't affect performance if timesteps are already on the correct device, but it does slow things down if they're on the wrong device
* fix pipeline when timesteps are arrays with strides
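One of the optimizations listed above replaces the attention-score `matmul` plus separate scaling with a single fused `torch.baddbmm` call; a standalone sketch of the pattern with illustrative shapes (not the pipeline's exact code):

```python
import torch

batch_heads, seq_len, dim_head = 2, 64, 40  # illustrative shapes: (batch * heads, tokens, head dim)
q = torch.randn(batch_heads, seq_len, dim_head)
k = torch.randn(batch_heads, seq_len, dim_head)
scale = dim_head ** -0.5

# baddbmm computes beta * input + alpha * (batch1 @ batch2) in one kernel;
# with beta=0 the empty input tensor only supplies the output shape.
attention_scores = torch.baddbmm(
    torch.empty(batch_heads, seq_len, seq_len, dtype=q.dtype, device=q.device),
    q,
    k.transpose(-1, -2),
    beta=0,
    alpha=scale,
)
attention_probs = attention_scores.softmax(dim=-1)
```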
-
- 29 Sep, 2022 6 commits
-
-
Partho authored
renamed x to hidden_states
-
V Vishnu Anirudh authored
* correcting the beta value assignment * updating DDIM and LMSDiscreteFlax schedulers * bringing back the changes that were lost as part of main branch merge
-
Pedro Cuenca authored
Flax from_pretrained: clean up `mismatched_keys`. Originally removed in 73e0bc692c5761e55faff39c80a26d7a3cfc748c.
-
Suraj Patil authored
* lower tolerance * put model in eval mode
-
Suraj Patil authored
update transformers version in example
-
Tanishq Abraham authored
-
- 28 Sep, 2022 3 commits
-
-
Suraj Patil authored
take the correct text embeddings
-
Isamu Isozaki authored
* Added script to save during training * Suggested changes
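A hedged sketch of the general pattern for saving during training: call `save_pretrained` on the model (or assembled pipeline) every fixed number of steps. The helper name and interval below are hypothetical, not the script's actual code:

```python
import os


def maybe_save(model, output_dir: str, global_step: int, save_steps: int = 500) -> None:
    """Hypothetical helper: save a checkpoint every `save_steps` optimizer steps."""
    if global_step > 0 and global_step % save_steps == 0:
        save_path = os.path.join(output_dir, f"checkpoint-{global_step}")
        model.save_pretrained(save_path)  # diffusers models and pipelines both expose save_pretrained
```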
-
Anton Lozhkov authored
* Fix the LMS pytorch regression * Copy over the changes from #637 * Copy over the changes from #637 * Fix betas test
-
- 27 Sep, 2022 10 commits
-
-
Pedro Cuenca authored
* Replace deprecation warning f-string with class name. When `__repr__` is invoked on the instance, serialization of `config_dict` fails because it contains `kwargs` of type `<class inspect._empty>`. * Revert "Replace deprecation warning f-string with class name." This reverts commit 1c4eb8cb104374bd84e43865fc3865862473799c. * Do not attempt to register `"kwargs"` as an attribute. Otherwise serialization could fail. This may happen for other attributes, so we should create a better solution.
-
Anton Lozhkov authored
fix np onnx
-
Suraj Patil authored
remove set_format from pipeline
-
Kashif Rasul authored
* add deprecation warning for schedulers * fix format
-
Suraj Patil authored
fix `add_noise`
-
Suraj Patil authored
update install section
-
Suraj Patil authored
don't pass tensor_format
-
Kashif Rasul authored
* pytorch only schedulers
* fix style
* remove match_shape
* pytorch only ddpm
* remove SchedulerMixin
* remove numpy from karras_ve
* fix types
* remove numpy from lms_discrete
* remove numpy from pndm
* fix typo
* remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format
* fix style
* sigmas has to be a torch tensor
* removed `set_format` in readme
* remove `set_format` from docs
* remove `set_format` from pipelines
* update tests
* fix typo
* continue to use mixin
* fix imports
* removed unused imports
* match shape instead of assuming image shapes
* remove import typo
* update call to add_noise
* use math instead of numpy
* fix t_index
* removed commented out numpy tests
* timesteps need to be discrete
* cast timesteps to int in flax scheduler too
* fix device mismatch issue
* small fix
* Update src/diffusers/schedulers/scheduling_pndm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
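After this change the schedulers are torch-native and the old `tensor_format`/`set_format` plumbing is gone; a hedged sketch of scheduler usage in the torch-only style (the scheduler choice and shapes are illustrative):

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)  # no tensor_format argument anymore

clean_images = torch.randn(4, 3, 64, 64)
noise = torch.randn_like(clean_images)
# Timesteps stay discrete integers.
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (4,), dtype=torch.long)

# add_noise now operates directly on torch tensors.
noisy_images = scheduler.add_noise(clean_images, noise, timesteps)
```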
-
Zhenhuan Liu authored
* Add training example for DreamBooth.
* Fix bugs.
* Update readme and default hyperparameters.
* Reformatting code with black.
* Update for multi-GPU training.
* Apply suggestions from code review
* improve sampling
* fix autocast
* improve sampling more
* fix saving
* actually fix saving
* fix saving
* improve dataset
* fix collate fn
* fix collate_fn
* fix collate fn
* fix key name
* fix dataset
* fix collate fn
* concat batch in collate fn
* add grad ckpt
* add option for 8bit adam
* do two forward passes for prior preservation
* Revert "do two forward passes for prior preservation" This reverts commit 661ca4677e6dccc4ad596c2ee6ca4baad4159e95.
* add option for prior_loss_weight
* add option for clip grad norm
* add more comments
* update readme
* update readme
* Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add docstr for dataset
* update the saving logic
* Update examples/dreambooth/README.md
* remove unused imports

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
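One detail worth spelling out from the list above: after reverting the two-forward-pass approach, prior preservation is handled by concatenating the instance and class batches, running the model once, and splitting the prediction again before computing the weighted loss. A hedged sketch of that loss computation (shapes and the weight are illustrative, not the script's exact code):

```python
import torch
import torch.nn.functional as F

prior_loss_weight = 1.0  # illustrative value

# Stand-ins for the model prediction and target of a concatenated batch:
# first half = instance images, second half = class (prior-preservation) images.
model_pred = torch.randn(8, 4, 64, 64)
target = torch.randn(8, 4, 64, 64)

# Split the concatenated batch back into instance and prior halves.
model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
target, target_prior = torch.chunk(target, 2, dim=0)

# Instance loss plus weighted prior-preservation loss.
loss = F.mse_loss(model_pred, target) + prior_loss_weight * F.mse_loss(model_pred_prior, target_prior)
```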
-
Yih-Dar authored
* Fix SpatialTransformer * Fix SpatialTransformer Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-