- 07 Feb, 2023 1 commit
Patrick von Platen authored
* before running make style
* remove leftovers from flake8
* finish
* make fix-copies
* final fix
* more fixes
- 27 Jan, 2023 2 commits
Patrick von Platen authored
Patrick von Platen authored
- 26 Jan, 2023 1 commit
Suraj Patil authored
* make scaling factor a config arg of vae
* fix
* make flake happy
* fix ldm
* fix upscaler
* quality
* Apply suggestions from code review
* solve conflicts, address some comments
* examples
* examples min version
* doc
* fix type
* typo
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
* remove duplicate line
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
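A minimal sketch of what the config-driven scaling looks like from user code; the checkpoint name is illustrative, and the attribute follows the Stable Diffusion VAE config:

```python
import torch
from diffusers import AutoencoderKL

# Illustrative checkpoint; any VAE whose config defines scaling_factor works.
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")

image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed image batch
with torch.no_grad():
    # Scale latents by the config value instead of a hard-coded 0.18215.
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    # Undo the scaling before decoding back to pixel space.
    reconstruction = vae.decode(latents / vae.config.scaling_factor).sample
```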
- 25 Jan, 2023 3 commits
Patrick von Platen authored
* [Bump version] 0.13
* Bump model up
* up
Patrick von Platen authored
patil-suraj authored
- 20 Jan, 2023 1 commit
Lucain authored
* Create repo before cloning in examples
* code quality
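A hedged sketch of the pattern the examples switched to; repo and path names are placeholders. huggingface_hub's create_repo is idempotent with exist_ok=True (Repository has since been deprecated in newer huggingface_hub releases):

```python
from huggingface_hub import Repository, create_repo

# Create the remote repo up front (exist_ok avoids failing on reruns),
# then clone it locally; "user/my-model" and "./my-model" are placeholders.
repo_url = create_repo("user/my-model", exist_ok=True)
repo = Repository("./my-model", clone_from=repo_url)
```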
- 16 Jan, 2023 1 commit
Patrick von Platen authored
- 04 Jan, 2023 1 commit
Yasyf Mohamedali authored
* Support training SD V2 with Flax
  Mostly involves supporting a v_prediction scheduler. The implementation in #1777 doesn't take into account a recent refactor of `scheduling_utils_flax`, so this should be used instead.
* Add to other top-level files.
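For context, a v_prediction scheduler trains the model to predict the "velocity" rather than the noise. A minimal JAX sketch of the training target (function and argument names are illustrative, not the library's API):

```python
import jax.numpy as jnp

def velocity_target(alphas_cumprod, x0, noise, timesteps):
    # v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x0,
    # with the per-sample coefficients broadcast over image dimensions.
    a = jnp.sqrt(alphas_cumprod[timesteps])[:, None, None, None]
    s = jnp.sqrt(1.0 - alphas_cumprod[timesteps])[:, None, None, None]
    return a * noise - s * x0
```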
- 03 Jan, 2023 1 commit
Patrick von Platen authored
* [Deterministic torch randn] Allow tensors to be generated on CPU
* fix more
* up
* fix more
* up
* Update src/diffusers/utils/torch_utils.py
* Apply suggestions from code review
* up
* up
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
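A sketch of the resulting usage: noise can be drawn on a CPU generator for reproducibility and still land on the accelerator. Shape is illustrative, and the helper's import path has moved between diffusers versions:

```python
import torch
from diffusers.utils.torch_utils import randn_tensor

# Sampling happens on the generator's device (CPU here) and the tensor is
# then moved, so seeded results match across CPU-only and CUDA runs.
generator = torch.Generator(device="cpu").manual_seed(0)
latents = randn_tensor(
    (1, 4, 64, 64),
    generator=generator,
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
```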
- 20 Dec, 2022 1 commit
Simon Kirsten authored
* [Flax] Stateless schedulers, fixes and refactors
* Remove scheduling_common_flax and some renames
* Update src/diffusers/schedulers/scheduling_pndm_flax.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
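A hedged sketch of the stateless pattern: all mutable buffers live in an explicit state object that every call takes and returns. The checkpoint name and the zero latents standing in for a UNet prediction are placeholders:

```python
import jax.numpy as jnp
from diffusers import FlaxPNDMScheduler

# Flax schedulers return both the scheduler and its initial state.
scheduler, state = FlaxPNDMScheduler.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="scheduler"
)

latents = jnp.zeros((1, 4, 64, 64))
state = scheduler.set_timesteps(state, num_inference_steps=50, shape=latents.shape)
for t in state.timesteps:
    model_output = latents  # placeholder for the UNet prediction
    out = scheduler.step(state, model_output, t, latents)
    latents, state = out.prev_sample, out.state
```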
- 06 Dec, 2022 1 commit
Suraj Patil authored
* add check_min_version for examples
* move __version__ to the top
* Apply suggestions from code review
* fix comment
* fix error_message
* adapt the install message
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
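The guard at the top of each example looks roughly like this; the version string tracks whatever release the example was written against:

```python
# Placed right after the imports in each example script.
from diffusers.utils import check_min_version

# Raises an informative error, with an install hint, if the installed
# diffusers is older than the example expects ("0.13.0.dev0" is illustrative).
check_min_version("0.13.0.dev0")
```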
- 17 Nov, 2022 1 commit
Patrick von Platen authored
- 16 Nov, 2022 2 commits
Pedro Cuenca authored
* Temporary local test for PIL_INTERPOLATION
* Fix examples too.
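For context, PIL 9.1.0 moved the resampling constants into Image.Resampling and deprecated the module-level names. The mapping under test looks roughly like this (keys shown are a subset):

```python
import PIL
from PIL import Image
from packaging import version

if version.parse(PIL.__version__) >= version.parse("9.1.0"):
    PIL_INTERPOLATION = {
        "bilinear": Image.Resampling.BILINEAR,
        "lanczos": Image.Resampling.LANCZOS,
        "nearest": Image.Resampling.NEAREST,
    }
else:
    # Older Pillow: the constants live directly on the Image module.
    PIL_INTERPOLATION = {
        "bilinear": Image.BILINEAR,
        "lanczos": Image.LANCZOS,
        "nearest": Image.NEAREST,
    }
```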
Patrick von Platen authored
* Better error message for transformers dummy
* [PIL] Better deprecation functionality
* up
- 07 Nov, 2022 1 commit
Duong A. Nguyen authored
load text encoder from subfolder
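In other words, the examples now pull the encoder straight out of the pipeline repo (the model id is illustrative):

```python
from transformers import CLIPTextModel

# Load from the "text_encoder" subfolder of a full Stable Diffusion repo
# instead of requiring a separately published text-encoder checkpoint.
text_encoder = CLIPTextModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="text_encoder"
)
```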
- 26 Oct, 2022 1 commit
Duong A. Nguyen authored
* add textual inversion flax
* make style
* make style
* replicate vae and unet params
* make style
* minor
* save after end of training
* style
* Temporary fix
* Add Flax instruction
Co-authored-by: Suraj Patil <surajp815@gmail.com>
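Replicating the frozen vae/unet params means broadcasting one copy to every local device for the pmapped train step; a short sketch, with the checkpoint name as a placeholder:

```python
import jax
from flax.jax_utils import replicate
from diffusers import FlaxAutoencoderKL

vae, vae_params = FlaxAutoencoderKL.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="vae"
)
# One copy of the frozen parameters per local device, as pmap expects.
vae_params = replicate(vae_params)
```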
- 07 Oct, 2022 1 commit
YaYaB authored
* Fix push_to_hub for dreambooth and textual_inversion
* Use repo.push_to_hub instead of push_to_hub
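The fix routes uploads through the cloned Repository object, which pushes the whole working tree rather than just the model weights. A sketch with placeholder names (Repository has since been deprecated in newer huggingface_hub releases):

```python
from huggingface_hub import Repository

# Clone (or reuse) the output repo, write checkpoints into it during
# training, then push the working tree in one commit.
repo = Repository("output_dir", clone_from="user/my-model")
repo.push_to_hub(commit_message="End of training", blocking=False)
```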
- 05 Oct, 2022 1 commit
Suraj Patil authored
remove use_auth_token
- 28 Sep, 2022 1 commit
Isamu Isozaki authored
* Added script to save during training
* Suggested changes
- 27 Sep, 2022 1 commit
Kashif Rasul authored
* pytorch only schedulers
* fix style
* remove match_shape
* pytorch only ddpm
* remove SchedulerMixin
* remove numpy from karras_ve
* fix types
* remove numpy from lms_discrete
* remove numpy from pndm
* fix typo
* remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format
* fix style
* sigmas has to be torch tensor
* removed set_format in readme
* remove set format from docs
* remove set_format from pipelines
* update tests
* fix typo
* continue to use mixin
* fix imports
* removed unused imports
* match shape instead of assuming image shapes
* remove import typo
* update call to add_noise
* use math instead of numpy
* fix t_index
* removed commented out numpy tests
* timesteps needs to be discrete
* cast timesteps to int in flax scheduler too
* fix device mismatch issue
* small fix
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
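After this refactor the schedulers speak plain torch tensors end to end. A sketch of the training-side call the commit updates (shapes are illustrative):

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

clean = torch.randn(4, 3, 32, 32)  # stand-in for a training batch
noise = torch.randn_like(clean)
# Timesteps are discrete integer tensors, with no numpy or
# tensor_format conversions in between.
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (4,), dtype=torch.long)
noisy = scheduler.add_noise(clean, noise, timesteps)
```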
- 16 Sep, 2022 1 commit
Yuta Hayashibe authored
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to a manual typo check for now
  Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
- 15 Sep, 2022 1 commit
Kashif Rasul authored
* beta never changes, removed from state
* fix typos in docs
* removed unused var
* initial ddim flax scheduler
* import
* added dummy objects
* fix style
* fix typo
* docs
* fix typo in comment
* set return type
* added flax ddpm
* fix style
* remake
* pass PRNG key as argument and split before use
* fix doc string
* use config
* added flax Karras VE scheduler
* make style
* fix dummy
* fix ndarray type annotation
* replace returns a new state
* added lms_discrete scheduler
* use self.config
* add_noise needs state
* use config
* use config
* docstring
* added flax score sde ve
* fix imports
* fix typos
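"Split before use" is the JAX discipline of never reusing a PRNG key: each draw consumes a fresh subkey and the new key is handed back to the caller. A minimal sketch with illustrative names:

```python
import jax

def add_noise_step(key, sample):
    # Split the incoming key; use the subkey for this draw and return the
    # new key so no key is ever used twice.
    key, subkey = jax.random.split(key)
    noise = jax.random.normal(subkey, sample.shape, dtype=sample.dtype)
    return key, sample + noise
```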
- 08 Sep, 2022 1 commit
Patrick von Platen authored
* Update black
* update table
- 07 Sep, 2022 1 commit
Suraj Patil authored
fix saving embeds
- 05 Sep, 2022 2 commits
Patrick von Platen authored
* add outputs for models
* add for pipelines
* finish schedulers
* better naming
* adapt tests as well
* replace dict access with . access
* make schedulers work
* finish
* correct readme
* make bcp compatible
* up
* small fix
* finish
* more fixes
* more fixes
* Apply suggestions from code review
* Update src/diffusers/models/vae.py
* Adapt model outputs
* Apply more suggestions
* finish examples
* correct
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
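The new output classes are dataclass-like and keep dict-style access for backwards compatibility, so both spellings below return the same tensor (a sketch; shapes are illustrative):

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, 32, 32)
model_output = torch.randn_like(sample)
t = scheduler.timesteps[0]

out = scheduler.step(model_output, t, sample)
prev = out.prev_sample      # new attribute access
same = out["prev_sample"]   # old dict-style access, kept for compatibility
```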
Suraj Patil authored
use add_tokens
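That is, the placeholder concept is registered through the tokenizer's standard API and the text encoder's embedding matrix is resized to match (model id and token are illustrative):

```python
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder")

# Register the placeholder as a real token, then grow the embedding table.
tokenizer.add_tokens("<my-concept>")
text_encoder.resize_token_embeddings(len(tokenizer))
placeholder_token_id = tokenizer.convert_tokens_to_ids("<my-concept>")
```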
- 02 Sep, 2022 1 commit
Suraj Patil authored
* add textual inversion script
* make the loop work
* make coarse_loss optional
* save pipeline after training
* add arg pretrained_model_name_or_path
* fix saving
* fix gradient_accumulation_steps
* style
* fix progress bar steps
* scale lr
* add argument to accept style
* remove unused args
* scale lr using num gpus
* load tokenizer using args
* add checks when converting init token to id
* improve comments and style
* document args
* more cleanup
* fix default adamw args
* TextualInversionWrapper -> CLIPTextualInversionWrapper
* fix tokenizer loading
* Use the CLIPTextModel instead of wrapper
* clean dataset
* remove commented code
* fix accessing grads for multi-gpu
* more cleanup
* fix saving on multi-GPU
* init_placeholder_token_embeds
* add seed
* fix flip
* fix multi-gpu
* add utility methods in wrapper
* remove ipynb
* don't use wrapper
* don't pass vae and unet to accelerate prepare
* bring back accelerator.accumulate
* scale latents
* use only one progress bar for steps
* push_to_hub at the end of training
* remove unused args
* log some important stats
* store args in tensorboard
* pretty comments
* save the trained embeddings
* move the script up
* add requirements file
* more cleanup
* fix typo
* begin readme
* style -> learnable_property
* keep vae and unet in eval mode
* address review comments
* address more comments
* removed unused args
* add train command in readme
* update readme
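The heart of the script is that only the new token's embedding row is learned. A hedged sketch of the usual pattern (the stand-in loss and all names are illustrative; the real loss comes from the diffusion training loop):

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder")
tokenizer.add_tokens("<my-concept>")
text_encoder.resize_token_embeddings(len(tokenizer))
placeholder_token_id = tokenizer.convert_tokens_to_ids("<my-concept>")

token_embeds = text_encoder.get_input_embeddings().weight
orig_embeds = token_embeds.detach().clone()
optimizer = torch.optim.AdamW(text_encoder.get_input_embeddings().parameters(), lr=5e-4)

# Stand-in for the real diffusion loss computed in the training loop.
input_ids = tokenizer("a photo of <my-concept>", return_tensors="pt").input_ids
loss = text_encoder(input_ids).last_hidden_state.pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()

# Restore every embedding row except the placeholder's, so only it trains.
keep = torch.arange(len(tokenizer)) != placeholder_token_id
with torch.no_grad():
    token_embeds[keep] = orig_embeds[keep]
```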