- 17 Feb, 2023 4 commits
  - Patrick von Platen authored
  - Will Berman authored
    * add xformers 0.0.16 warning message
    * fix version check to check the whole version string
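The point of the version-check fix above is that comparing only a prefix would let "0.0.16rc1" or "0.0.160" slip past a check meant for exactly "0.0.16". A stdlib-only sketch of the idea follows; the function names and warning text are illustrative, not the actual helpers in diffusers:

```python
import warnings

def is_exact_version(version: str, target: str) -> bool:
    # Compare the whole version string, not a prefix:
    # "0.0.16" must not match "0.0.16rc1" or "0.0.160".
    return version == target

def warn_if_xformers_0_0_16(xformers_version: str) -> bool:
    """Warn when exactly xformers 0.0.16 is installed (illustrative)."""
    if is_exact_version(xformers_version, "0.0.16"):
        warnings.warn(
            "xformers 0.0.16 is known to cause issues during training; "
            "consider upgrading.",
            UserWarning,
        )
        return True
    return False
```

A naive `version.startswith("0.0.16")` check would wrongly match "0.0.16rc1", which is exactly what checking the whole string avoids.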
  - Will Berman authored
    This reverts commit 024c4376.
  - Patrick von Platen authored
- 16 Feb, 2023 2 commits
  - Will Berman authored
  - Will Berman authored
    * add total number of checkpoints to training scripts
    * Update examples/dreambooth/train_dreambooth.py
    Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 08 Feb, 2023 1 commit
  - Isamu Isozaki authored
    * Quality check and adding tokenizer
    * Adapted stable diffusion to mixed precision + finished up style fixes
    * Fixed based on Patrick's review
    * Fixed OOM from number of validation images
    * Removed unnecessary np.array conversion
    Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 07 Feb, 2023 4 commits
  - Patrick von Platen authored
    * before running make style
    * remove leftovers from flake8
    * finish
    * make fix-copies
    * final fix
    * more fixes
  - Patrick von Platen authored
    * [Examples] Remove datasets import that is not needed
    * remove from lora as well
  - Patrick von Platen authored
  - chavinlo authored
    * Create convert_vae_pt_to_diffusers.py: a simple script to convert VAE .pt files to diffusers format. Tested with: https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/VAEs/orangemix.vae.pt
    * Update convert_vae_pt_to_diffusers.py (forgot to add the function call)
    * make style
    Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    Co-authored-by: chavinlo <example@example.com>
- 03 Feb, 2023 1 commit
  - Isamu Isozaki authored
- 27 Jan, 2023 2 commits
  - Patrick von Platen authored
  - Patrick von Platen authored
- 26 Jan, 2023 2 commits
  - Suraj Patil authored
    * make scaling factor a config arg of the VAE
    * fix
    * make flake happy
    * fix ldm
    * fix upscaler
    * quality
    * Apply suggestions from code review
    * solve conflicts, address some comments
    * examples
    * examples min version
    * doc
    * fix type
    * typo
    * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
    * remove duplicate line
    * Apply suggestions from code review
    Co-authored-by: Anton Lozhkov <anton@huggingface.co>
    Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
    Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
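The change above turns the latent scaling factor (historically the hard-coded 0.18215 for Stable Diffusion's VAE) into a config attribute, so pipelines read it from the model instead of assuming it. A stdlib-only sketch of that convention, with a toy stand-in for the VAE (all names here are illustrative, not the diffusers API):

```python
class TinyVAE:
    """Toy VAE stand-in whose latent scale lives in its config (illustrative)."""
    def __init__(self, scaling_factor: float = 0.18215):
        self.config = {"scaling_factor": scaling_factor}

    def encode(self, x: float) -> float:
        return x * 0.5  # toy "encoder"

    def decode(self, z: float) -> float:
        return z / 0.5  # toy "decoder", exact inverse of encode

def to_latents(vae: TinyVAE, x: float) -> float:
    # Scale by the configured factor instead of a hard-coded constant.
    return vae.encode(x) * vae.config["scaling_factor"]

def from_latents(vae: TinyVAE, z: float) -> float:
    # Undo the configured scaling before decoding.
    return vae.decode(z / vae.config["scaling_factor"])
```

Because encode/decode and the scaling are exact inverses, `from_latents(vae, to_latents(vae, x))` round-trips back to `x`, regardless of which `scaling_factor` the config carries.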
  - Pedro Cuenca authored
    * [textual inversion] Allow validation images.
    * Change key to `validation`
    * Specify format instead of transposing, as discussed with @sayakpaul.
    * Style
    Co-authored-by: isamu-isozaki <isamu.website@gmail.com>
- 25 Jan, 2023 3 commits
  - Patrick von Platen authored
    * [Bump version] 0.13
    * Bump model up
    * up
  - Patrick von Platen authored
  - patil-suraj authored
- 24 Jan, 2023 1 commit
  - Pedro Cuenca authored
    * Fix resuming state when using gradient checkpointing. Also, allow --resume_from_checkpoint to be used when the checkpoint does not yet exist (a normal training run will be started).
    * style
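The fallback behavior described above (start a fresh run instead of failing when the named checkpoint directory is missing) can be sketched like this; `resolve_resume_step` is a hypothetical helper, not the actual code in the training scripts:

```python
import os
from typing import Optional

def resolve_resume_step(output_dir: str, resume_from: Optional[str]) -> int:
    """Return the global step to resume from, or 0 for a fresh run (illustrative)."""
    if resume_from is None:
        return 0
    path = os.path.join(output_dir, resume_from)
    if not os.path.isdir(path):
        # Checkpoint does not exist yet: start a normal training run
        # rather than raising, as the commit above describes.
        print(f"Checkpoint '{resume_from}' not found; starting a new run.")
        return 0
    # Checkpoint dirs are conventionally named "checkpoint-<step>";
    # recover the step number from the directory name.
    return int(resume_from.split("-")[-1])
```

The training loop would then skip forward to the returned step when it is nonzero, and behave exactly like a fresh run when it is 0.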
- 23 Jan, 2023 1 commit
  - Suraj Patil authored
    add --dataloader_num_workers argument
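Wiring the new flag into a training script's argument parser looks roughly like this sketch (the exact help text and default in the real scripts may differ); the value is then passed through to `torch.utils.data.DataLoader(num_workers=...)`:

```python
import argparse

def make_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="training script (sketch)")
    parser.add_argument(
        "--dataloader_num_workers",
        type=int,
        default=0,  # 0 means data loading happens in the main process
        help="Number of subprocesses to use for data loading.",
    )
    return parser
```

Usage: `args = make_parser().parse_args(["--dataloader_num_workers", "4"])` yields `args.dataloader_num_workers == 4`.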
- 20 Jan, 2023 1 commit
  - Lucain authored
    * Create repo before cloning in examples
    * code quality
- 18 Jan, 2023 1 commit
  - Patrick von Platen authored
    * [Lora] first upload
    * add first lora version
    * upload
    * more
    * first training
    * up
    * correct
    * improve
    * finish loaders and inference
    * up
    * up
    * fix more
    * up
    * finish more
    * finish more
    * up
    * up
    * change year
    * revert year change
    * Change lines
    * Add cloneofsimo as co-author.
    * finish
    * fix docs
    * Apply suggestions from code review
    * upload
    * finish
    Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
    Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
    Co-authored-by: Suraj Patil <surajp815@gmail.com>
- 16 Jan, 2023 1 commit
  - Patrick von Platen authored
- 04 Jan, 2023 1 commit
  - Alex Redden authored
    Fix default lr-scaling CLI argument
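The lr-scaling convention these training scripts use, when the scale flag is passed, is to multiply the base learning rate by gradient-accumulation steps, per-device batch size, and process count (the effective batch size). A sketch, where `scale_learning_rate` is a hypothetical helper rather than the scripts' actual code:

```python
def scale_learning_rate(
    base_lr: float,
    scale_lr: bool,
    gradient_accumulation_steps: int,
    train_batch_size: int,
    num_processes: int,
) -> float:
    """Scale the learning rate by the effective batch size (illustrative)."""
    if not scale_lr:
        return base_lr
    return base_lr * gradient_accumulation_steps * train_batch_size * num_processes
```

For example, a base rate of 1e-4 with 2 accumulation steps, batch size 4, and 2 processes scales to 1.6e-3 when scaling is enabled, and stays 1e-4 otherwise.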
- 03 Jan, 2023 1 commit
  - Patrick von Platen authored
    * [Deterministic torch randn] Allow tensors to be generated on CPU
    * fix more
    * up
    * fix more
    * up
    * Update src/diffusers/utils/torch_utils.py
    * Apply suggestions from code review
    * up
    * up
    * Apply suggestions from code review
    Co-authored-by: Anton Lozhkov <anton@huggingface.co>
    Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
- 02 Jan, 2023 2 commits
  - Pedro Cuenca authored
    Fixes to the help for report_to in training scripts.
  - Suraj Patil authored
    * misc fixes
    * more comments
    * Update examples/textual_inversion/textual_inversion.py
    * set transformers verbosity to warning
    Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 30 Dec, 2022 1 commit
  - Suraj Patil authored
    update loss computation
- 29 Dec, 2022 1 commit
  - Suraj Patil authored
    * update TI script
    * make flake happy
    * fix typo
    Co-authored-by: Henrik Forstén <henrik.forsten@gmail.com>
- 27 Dec, 2022 1 commit
  - Katsuya authored
    * Make xformers optional even if it is available
    * Raise exception if xformers is used but not available
    * Rename use_xformers to enable_xformers_memory_efficient_attention
    * Add a note about xformers in README
    * Reformat code style
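The opt-in behavior above (xformers stays off unless explicitly enabled, and enabling it without the package installed raises) can be sketched in plain Python. This is a standalone illustration, not the diffusers implementation; the `_module` parameter exists only so the sketch can be exercised without xformers installed:

```python
import importlib.util

class AttentionConfig:
    """Tracks whether memory-efficient attention is enabled (illustrative)."""
    def __init__(self) -> None:
        # Off by default, even when xformers happens to be installed.
        self.use_memory_efficient_attention = False

    def enable_xformers_memory_efficient_attention(
        self, _module: str = "xformers"
    ) -> None:
        # Raise if the requested backend is not importable, instead of
        # silently falling back.
        if importlib.util.find_spec(_module) is None:
            raise ModuleNotFoundError(
                f"{_module} is not installed; install it to use "
                "memory-efficient attention."
            )
        self.use_memory_efficient_attention = True
```

The design point is the same as in the commit: availability alone never flips the switch; the user must call the enable method, and a missing package is a loud error rather than a silent no-op.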
- 23 Dec, 2022 2 commits
  - Suraj Patil authored
    * unwrap_model text encoder before accessing weights
    * fix another call
    * fix the right call
  - Patrick von Platen authored
    * Remove hardcoded names from PT scripts
    * Apply suggestions from code review
    Co-authored-by: Suraj Patil <surajp815@gmail.com>
- 15 Dec, 2022 2 commits
  - jiqing-feng authored
    * add conf.yaml
    * enable bf16 (enable amp bf16 for unet forward; fix style; fix readme; remove useless file)
    * change amp to full bf16
    * align
    * make style
    * fix format
  - Pedro Cuenca authored
    * Add state checkpointing to other training scripts
    * Fix first_epoch
    * Apply suggestions from code review
    * Update Dreambooth checkpoint help message.
    * Dreambooth docs: checkpoints, inference from a checkpoint.
    * make style
    Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 12 Dec, 2022 1 commit
  - Patrick von Platen authored
- 10 Dec, 2022 1 commit
  - Pedro Cuenca authored
    Remove spurious arg in training scripts.
- 09 Dec, 2022 1 commit
  - Patrick von Platen authored
    * do not automatically enable xformers
    * up
- 06 Dec, 2022 1 commit
  - Suraj Patil authored
    * add check_min_version for examples
    * move __version__ to the top
    * Apply suggestions from code review
    * fix comment
    * fix error_message
    * adapt the install message
    Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
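`check_min_version` lets an example script fail fast, with an install message, when run against a diffusers install older than the examples expect. A minimal stdlib sketch of the idea (the real helper lives in `diffusers.utils` and handles dev/pre-release versions more carefully than this):

```python
def parse_release(version: str) -> tuple:
    # Keep only the leading numeric release segments:
    # "0.13.0.dev0" -> (0, 13, 0).
    parts = []
    for piece in version.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)

def check_min_version(installed: str, minimum: str) -> None:
    """Raise ImportError if installed is older than minimum (illustrative)."""
    if parse_release(installed) < parse_release(minimum):
        raise ImportError(
            f"This example requires a version >= {minimum}, "
            f"but version {installed} is installed. Please upgrade."
        )
```

Moving `__version__` to the top of the package (the second bullet) is what makes this check cheap: the version is readable without importing heavy submodules first.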
- 05 Dec, 2022 1 commit
  - Suraj Patil authored
    use from_pretrained to load scheduler