- 18 Jan, 2023 1 commit
Patrick von Platen authored
* [Lora] first upload * add first lora version * upload * more * first training * up * correct * improve * finish loaders and inference * up * up * fix more * up * finish more * finish more * up * up * change year * revert year change * Change lines * Add cloneofsimo as co-author. Co-authored-by: Simo Ryu <cloneofsimo@gmail.com> * finish * fix docs * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> Co-authored-by: Suraj Patil <surajp815@gmail.com> * upload * finish
Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
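For context, LoRA weights produced by the training example added in this PR can be loaded back into a pipeline for inference roughly as follows; a minimal sketch, where the checkpoint path and model ID are placeholders:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# Load the LoRA attention processors saved by the training script into the UNet.
pipe.unet.load_attn_procs("path/to/lora_checkpoint")

image = pipe("a photo of sks dog", num_inference_steps=25).images[0]
```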
-
- 16 Jan, 2023 1 commit
Patrick von Platen authored
-
- 04 Jan, 2023 2 commits
Alex Redden authored
Fix default lr-scaling cli argument
-
Yasyf Mohamedali authored
* Support training SD V2 with Flax. Mostly involves supporting a v_prediction scheduler. The implementation in #1777 doesn't take into account a recent refactor of `scheduling_utils_flax`, so this should be used instead. * Add to other top-level files.
-
- 03 Jan, 2023 1 commit
Patrick von Platen authored
* [Deterministic torch randn] Allow tensors to be generated on CPU * fix more * up * fix more * up * Update src/diffusers/utils/torch_utils.py Co-authored-by: Anton Lozhkov <anton@huggingface.co> * Apply suggestions from code review * up * up * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
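The idea behind CPU generation, sketched with plain torch rather than the library's own helper (whose exact name and signature may differ): sample the initial noise from a seeded CPU generator so results reproduce across GPU types, then move the tensor to the compute device.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Seed a CPU generator so the same latents are produced regardless of the GPU.
generator = torch.Generator(device="cpu").manual_seed(0)

# Sample on CPU first, then move to the target device for computation.
latents = torch.randn((1, 4, 64, 64), generator=generator, device="cpu").to(device)
```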
-
- 02 Jan, 2023 2 commits
Pedro Cuenca authored
Fixes to the help for report_to in training scripts.
-
Suraj Patil authored
* misc fixes * more comments * Update examples/textual_inversion/textual_inversion.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * set transformers verbosity to warning
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
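"set transformers verbosity to warning" refers to quieting library logging in the example scripts; a minimal sketch of the call involved:

```python
import transformers

# Keep warnings and errors from transformers, but drop its INFO-level chatter.
transformers.utils.logging.set_verbosity_warning()
```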
-
- 30 Dec, 2022 1 commit
Suraj Patil authored
update loss computation
-
- 29 Dec, 2022 1 commit
Suraj Patil authored
* update TI script * make flake happy * fix typo
Co-authored-by: Henrik Forstén <henrik.forsten@gmail.com>
-
- 27 Dec, 2022 1 commit
Katsuya authored
* Make xformers optional even if it is available * Raise exception if xformers is used but not available * Rename use_xformers to enable_xformers_memory_efficient_attention * Add a note about xformers in README * Reformat code style
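After this change, memory-efficient attention is opt-in; a minimal sketch of enabling it, assuming xformers is installed (the model ID is just an example):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# Must be called explicitly now; raises an exception if xformers is not actually available.
pipe.enable_xformers_memory_efficient_attention()
```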
-
- 23 Dec, 2022 2 commits
Suraj Patil authored
* unwrap_model text encoder before accessing weights * fix another call * fix the right call
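The point of unwrapping: `accelerator.prepare()` may wrap a model (e.g. in DistributedDataParallel), so weights should be accessed through the unwrapped module. A minimal sketch, with an illustrative model ID and output path:

```python
from accelerate import Accelerator
from transformers import CLIPTextModel

accelerator = Accelerator()
text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="text_encoder"
)
text_encoder = accelerator.prepare(text_encoder)

# Unwrap before reading weights or saving, otherwise you may hit the DDP wrapper.
accelerator.unwrap_model(text_encoder).save_pretrained("output/text_encoder")
```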
-
Patrick von Platen authored
* Remove hardcoded names from PT scripts * Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
- 20 Dec, 2022 1 commit
Simon Kirsten authored
* [Flax] Stateless schedulers, fixes and refactors * Remove scheduling_common_flax and some renames * Update src/diffusers/schedulers/scheduling_pndm_flax.py Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 15 Dec, 2022 2 commits
jiqing-feng authored
* add conf.yaml * enable bf16: enable amp bf16 for unet forward, fix style, fix readme, remove useless file * change amp to full bf16 * align * make style * fix format
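For reference, full-bf16 mixed precision in these scripts goes through Accelerate; a minimal sketch (requires hardware with bfloat16 support):

```python
from accelerate import Accelerator

# Run the training forward/backward passes in bfloat16 mixed precision.
accelerator = Accelerator(mixed_precision="bf16")
```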
-
Pedro Cuenca authored
* Add state checkpointing to other training scripts * Fix first_epoch * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update Dreambooth checkpoint help message. * Dreambooth docs: checkpoints, inference from a checkpoint. * make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
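State checkpointing here means snapshotting model, optimizer and RNG state via Accelerate so a run can resume; a minimal sketch (the directory layout is just an example):

```python
import os
from accelerate import Accelerator

accelerator = Accelerator()
output_dir = "sd-training"

# Save a resumable snapshot of everything passed through accelerator.prepare().
accelerator.save_state(os.path.join(output_dir, "checkpoint-500"))

# Later, restore it and continue training from that step.
accelerator.load_state(os.path.join(output_dir, "checkpoint-500"))
```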
-
- 12 Dec, 2022 1 commit
Patrick von Platen authored
-
- 10 Dec, 2022 1 commit
Pedro Cuenca authored
Remove spurious arg in training scripts.
-
- 09 Dec, 2022 2 commits
Patrick von Platen authored
* do not automatically enable xformers * up
-
Haofan Wang authored
* Update requirements.txt * Update requirements_flax.txt * Update requirements.txt * Update requirements_flax.txt * Update requirements.txt * Update requirements_flax.txt
-
- 06 Dec, 2022 1 commit
Suraj Patil authored
* add check_min_version for examples * move __version__ to the top * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * fix comment * fix error_message * adapt the install message
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
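check_min_version guards the examples against running with an older diffusers install; a minimal sketch (the version string is illustrative):

```python
from diffusers.utils import check_min_version

# Raises with an install hint if the local diffusers version is too old for this example.
check_min_version("0.12.0.dev0")
```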
-
- 05 Dec, 2022 2 commits
Suraj Patil authored
use from_pretrained to load scheduler
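Loading the scheduler with from_pretrained pulls its configuration from the model repo's scheduler subfolder; a minimal sketch (the model ID is an example):

```python
from diffusers import DDPMScheduler

noise_scheduler = DDPMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)
```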
-
allo- authored
[textual_inversion] Add an option to only save embeddings. Adds a command line option --only_save_embeds to the example script, for not saving the full model. Then only the learned embeddings are saved, which can be added to the original model at runtime in a similar way to how they are created in the training script. Saving the full model is forced when --push_to_hub is used. (Implements #759)
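A rough sketch of how embeddings saved this way can be added back to the original model at runtime; the file name, dict layout, and model ID below are assumptions, not the script's exact output:

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

learned_embeds = torch.load("learned_embeds.bin")  # assumed layout: {placeholder_token: tensor}
token, embedding = next(iter(learned_embeds.items()))

tokenizer = CLIPTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="text_encoder")

# Register the placeholder token and copy the learned embedding into the text encoder.
tokenizer.add_tokens(token)
text_encoder.resize_token_embeddings(len(tokenizer))
token_id = tokenizer.convert_tokens_to_ids(token)
text_encoder.get_input_embeddings().weight.data[token_id] = embedding
```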
-
- 28 Nov, 2022 1 commit
Suraj Patil authored
* add get_velocity * add v prediction for training * fix saving * add revision arg * fix saving * save checkpoints dreambooth * fix saving embeds * add instruction in readme * quality * noise_pred -> model_pred
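The target selection that v-prediction training adds looks roughly like this inside the training loop; a sketch only, with variable names following the usual loop convention rather than the script verbatim:

```python
import torch.nn.functional as F

# Pick the regression target according to the scheduler's prediction type.
if noise_scheduler.config.prediction_type == "epsilon":
    target = noise
elif noise_scheduler.config.prediction_type == "v_prediction":
    target = noise_scheduler.get_velocity(latents, noise, timesteps)

loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
```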
-
- 18 Nov, 2022 1 commit
Patrick von Platen authored
* [Examples] Correct path * uP
-
- 17 Nov, 2022 1 commit
Patrick von Platen authored
-
- 16 Nov, 2022 2 commits
Pedro Cuenca authored
* Temporary local test for PIL_INTERPOLATION * Fix examples too.
-
Patrick von Platen authored
* Better error message for transformers dummy * [PIL] Better deprecation functionality * up
-
- 07 Nov, 2022 1 commit
Duong A. Nguyen authored
load text encoder from subfolder
-
- 02 Nov, 2022 1 commit
Suraj Patil authored
Update README.md
-
- 31 Oct, 2022 1 commit
Patrick von Platen authored
* [Better scheduler docs] Improve usage examples of schedulers * finish * fix warnings and add test * finish * more replacements * adapt fast tests hf token * correct more * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Integrate compatibility with euler
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
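The scheduler-compatibility pattern the docs were updated to showcase, as a minimal sketch (the model ID is an example):

```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Swap in a compatible scheduler, reusing the existing scheduler's configuration.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
```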
-
- 27 Oct, 2022 1 commit
Suraj Patil authored
-
- 26 Oct, 2022 1 commit
Duong A. Nguyen authored
* add textual inversion flax * make style * make style * replicate vae and unet params * make style * minor * save after end of training * style * Temporary fix Co-authored-by: Suraj Patil <surajp815@gmail.com> * Add Flax instruction
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
- 24 Oct, 2022 1 commit
apolinario authored
* Update README.md Additionally add FLAX so the model card can be slimmer and point to this page * Find and replace all * v-1-5 -> v1-5 * revert test changes * Update README.md Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update docs/source/quicktour.mdx Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Update README.md Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Update docs/source/quicktour.mdx Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Update README.md Co-authored-by: Suraj Patil <surajp815@gmail.com> * Revert certain references to v1-5 * Docs changes * Apply suggestions from code review Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
- 07 Oct, 2022 1 commit
YaYaB authored
* Fix push_to_hub for dreambooth and textual_inversion * Use repo.push_to_hub instead of push_to_hub
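The fix switches the examples to pushing through a local clone; a rough sketch of that flow with huggingface_hub, where the repo name and paths are placeholders:

```python
from huggingface_hub import Repository

# Clone (or reuse) the target Hub repo locally, write training outputs into it, then push.
repo = Repository("sd-output", clone_from="your-username/sd-output")
# ... save the trained pipeline into "sd-output" here ...
repo.push_to_hub(commit_message="End of training")
```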
-
- 05 Oct, 2022 2 commits
Patrick von Platen authored
up
-
Suraj Patil authored
remove use_auth_token
-
- 29 Sep, 2022 1 commit
Suraj Patil authored
update transformers version in example
-
- 28 Sep, 2022 1 commit
Isamu Isozaki authored
* Added script to save during training * Suggested changes
-
- 27 Sep, 2022 1 commit
Kashif Rasul authored
* pytorch only schedulers * fix style * remove match_shape * pytorch only ddpm * remove SchedulerMixin * remove numpy from karras_ve * fix types * remove numpy from lms_discrete * remove numpy from pndm * fix typo * remove mixin and numpy from sde_vp and ve * remove remaining tensor_format * fix style * sigmas has to be torch tensor * removed set_format in readme * remove set format from docs * remove set_format from pipelines * update tests * fix typo * continue to use mixin * fix imports * removed unused imports * match shape instead of assuming image shapes * remove import typo * update call to add_noise * use math instead of numpy * fix t_index * removed commented out numpy tests * timesteps need to be discrete * cast timesteps to int in flax scheduler too * fix device mismatch issue * small fix * Update src/diffusers/schedulers/scheduling_pndm.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
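"update call to add_noise" and "timesteps need to be discrete" refer to the forward-diffusion step used during training; a minimal sketch of that call with integer timesteps:

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

clean_latents = torch.randn(1, 4, 64, 64)
noise = torch.randn_like(clean_latents)
# Timesteps are discrete integer indices into the noise schedule.
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (1,), dtype=torch.long)

noisy_latents = scheduler.add_noise(clean_latents, noise, timesteps)
```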
-
- 16 Sep, 2022 1 commit
Yuta Hayashibe authored
* Fix typos * Add a typo check action * Fix a bug * Changed to manual typo check currently Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010 Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * Removed a confusing message * Renamed "nin_shortcut" to "in_shortcut" * Add memo about NIN
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
-