- 30 Dec, 2022 1 commit
-
-
Patrick von Platen authored
* move files a bit
* more refactors
* fix more
* more fixes
* fix more onnx
* make style
* upload
* fix
* up
* fix more
* up again
* up
* small fix
* Update src/diffusers/__init__.py
* correct
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 29 Dec, 2022 1 commit
-
-
Suraj Patil authored
* update TI script
* make flake happy
* fix typo
Co-authored-by: Henrik Forstén <henrik.forsten@gmail.com>
-
- 28 Dec, 2022 1 commit
-
-
Partho authored
* initial
* type hints
* update scheduler type hint
* add to README
* add example generation to README
* v -> mix_factor
* load scheduler from pretrained
-
- 27 Dec, 2022 3 commits
-
-
kabachuha authored
* allow selecting precision to make DB class images, addresses #1831
* add prior_generation_precision argument
* correct prior_generation_precision's description
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
Katsuya authored
* Make xformers optional even if it is available
* Raise exception if xformers is used but not available
* Rename use_xformers to enable_xformers_memory_efficient_attention
* Add a note about xformers in README
* Reformat code style
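The opt-in pattern this commit describes can be sketched in plain Python: check whether the `xformers` package is importable before toggling the attention path, and raise a clear error if the user asks for it when it is missing. The model method used below is a hypothetical stand-in, not the actual diffusers internals.

```python
import importlib.util


def enable_xformers_memory_efficient_attention(model) -> None:
    """Opt-in sketch: only enable xformers attention when the package is
    importable; raise a clear error otherwise.

    `set_use_memory_efficient_attention_xformers` is assumed here for
    illustration -- substitute your model's own toggle.
    """
    if importlib.util.find_spec("xformers") is None:
        raise ModuleNotFoundError(
            "xformers is not installed; install it to use memory-efficient "
            "attention, or skip this call."
        )
    model.set_use_memory_efficient_attention_xformers(True)
```

The point of the rename in the commit is the same as the guard above: the feature is never switched on silently, and a missing dependency fails loudly instead of at attention time.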
-
Christopher Friesen authored
-
- 23 Dec, 2022 2 commits
-
-
Suraj Patil authored
* unwrap_model text encoder before accessing weights
* fix another call
* fix the right call
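The fix above matters because training frameworks wrap models (e.g. in a DDP container), so attribute access must go through the unwrapped module. A minimal sketch of the idea, with toy classes standing in for accelerate's wrappers:

```python
class Wrapper:
    """Stand-in for a distributed wrapper (e.g. DDP) around a module."""
    def __init__(self, module):
        self.module = module


def unwrap_model(model):
    """Peel wrapper layers until the bare module is reached -- the same
    idea as accelerate's unwrap_model."""
    while hasattr(model, "module"):
        model = model.module
    return model


class TextEncoder:
    """Toy model whose weights live on the inner object, not the wrapper."""
    def __init__(self):
        self.weight = [1.0, 2.0]


wrapped = Wrapper(Wrapper(TextEncoder()))
bare = unwrap_model(wrapped)  # accessing bare.weight is now safe
```

Accessing `wrapped.weight` directly would raise an AttributeError; unwrapping first is what the commit's "before accessing weights" refers to.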
-
Patrick von Platen authored
* Remove hardcoded names from PT scripts
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
- 22 Dec, 2022 1 commit
-
-
Prathik Rao authored
* reorder model wrap
* bug fix
Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
-
- 20 Dec, 2022 3 commits
-
-
Pedro Cuenca authored
* Section header for in-painting, inference from checkpoint.
* Inference: link to section to perform inference from checkpoint.
* Move Dreambooth in-painting instructions to the proper place.
-
Simon Kirsten authored
* [Flax] Stateless schedulers, fixes and refactors
* Remove scheduling_common_flax and some renames
* Update src/diffusers/schedulers/scheduling_pndm_flax.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Emil Bogomolov authored
* expose polynomial:power and cosine_with_restarts:num_cycles using the get_scheduler func, add it to train_dreambooth.py
* fix formatting
* fix style
* Update src/diffusers/optimization.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
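The two schedule parameters this commit exposes can be illustrated as pure learning-rate multiplier functions. This is a simplified sketch (no warmup phase, unlike the real `get_scheduler` in diffusers); the function names are my own.

```python
import math


def cosine_with_restarts_lambda(step, num_training_steps, num_cycles=1):
    """LR multiplier for cosine decay with hard restarts: `num_cycles`
    controls how many times the cosine curve restarts from 1.0."""
    progress = step / max(1, num_training_steps)
    if progress >= 1.0:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * ((num_cycles * progress) % 1.0)))


def polynomial_lambda(step, num_training_steps, power=1.0):
    """LR multiplier for polynomial decay: `power` sets the curve shape
    (1.0 is linear decay, larger values decay faster early on)."""
    progress = min(step, num_training_steps) / max(1, num_training_steps)
    return (1.0 - progress) ** power
```

With `num_cycles=2` the multiplier sweeps from 1.0 down toward 0 twice over training; with `power=2` the polynomial schedule decays quadratically instead of linearly.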
-
- 19 Dec, 2022 6 commits
-
-
Prathik Rao authored
* reflect changes
* run make style
Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
-
Pedro Cuenca authored
* Fail if there are fewer images than the effective batch size.
* Remove lr-scheduler arg as it's currently ignored.
* Make guidance_scale work for batch_size > 1.
-
Anton Lozhkov authored
-
Nan Liu authored
* update composable diffusion for an updated diffusers library
* fix style/quality for code
* Revert "fix style/quality for code" (this reverts commit 71f23497639fe69de00d93cf91edc31b08dcd7a4)
* update style
* reduce memory usage by computing the score sequentially
-
Anish Shah authored
Update train_unconditional.py: add a logger flag to choose between tensorboard and wandb
-
Patrick von Platen authored
-
- 18 Dec, 2022 1 commit
-
-
Patrick von Platen authored
-
- 15 Dec, 2022 3 commits
-
-
Haihao Shen authored
* Add examples with Intel optimizations (BF16 fine-tuning and inference)
* Remove unused package
* Add README for intel_opts and refine the description for research projects
* Add notes on intel_opts for diffusers
-
jiqing-feng authored
* add conf.yaml
* enable bf16: enable amp bf16 for unet forward; fix style; fix readme; remove useless file
* change amp to full bf16
* align
* make style
* fix format
-
Pedro Cuenca authored
* Add state checkpointing to other training scripts
* Fix first_epoch
* Apply suggestions from code review
* Update Dreambooth checkpoint help message.
* Dreambooth docs: checkpoints, inference from a checkpoint.
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 14 Dec, 2022 1 commit
-
-
Prathik Rao authored
* manually update train_unconditional_ort
* formatting
Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
-
- 13 Dec, 2022 5 commits
-
-
Pedro Cuenca authored
Use warnings instead of logger in parse_args(), since logger requires an `Accelerator`.
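The rationale above is that `warnings.warn` works before any framework object exists, whereas the accelerate logger needs a configured `Accelerator`. A minimal sketch of warning during argument validation (the divisibility check is an invented example, not the script's actual rule):

```python
import warnings


def check_resolution(resolution: int) -> int:
    """Validate a parsed argument without needing a logger: emit a
    standard-library warning instead. The %-8 rule is illustrative."""
    if resolution % 8 != 0:
        warnings.warn(
            f"resolution {resolution} is not divisible by 8; "
            "downstream components may produce artifacts.",
            UserWarning,
        )
    return resolution
```

Because `warnings` is stdlib, this is safe to call from `parse_args()` before any training state is constructed.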
-
Patrick von Platen authored
-
Patrick von Platen authored
Change the one-step dummy pipeline for testing
-
Pedro Cuenca authored
* Dreambooth: save / restore training state.
* make style
* Rename vars for clarity.
* Remove unused import
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Suvaditya Mukherjee authored
* Added Community pipeline for comparing Stable Diffusion v1.1-4
* Made changes to provide support for current iteration of from_pretrained and added example
* updated a small spelling error
* added pipeline entry to table
Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
-
- 12 Dec, 2022 2 commits
-
-
Patrick von Platen authored
-
Prathik Rao authored
bug fix
Co-authored-by: Prathik Rao <prathikrao@microsoft.com>
Co-authored-by: anton- <anton@huggingface.co>
-
- 10 Dec, 2022 2 commits
-
-
Tim Hinderliter authored
dreambooth: fix #1566: maintain fp32 wrapper when saving a checkpoint to avoid crash when running fp16 (#1618)
* dreambooth: fix #1566: maintain fp32 wrapper when saving a checkpoint to avoid crash when running fp16
* dreambooth: guard against passing keep_fp32_wrapper arg to older versions of accelerate (part of fix for #1566)
* Apply suggestions from code review
* Update examples/dreambooth/train_dreambooth.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
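The version guard mentioned in this commit (only pass `keep_fp32_wrapper` when the installed accelerate accepts it) can be sketched with `inspect.signature`. The helper and the two stand-in unwrap functions below are illustrative, not the actual accelerate API surface:

```python
import inspect


def unwrap_compatible(unwrap_fn, model, keep_fp32_wrapper=True):
    """Pass keep_fp32_wrapper only when the installed unwrap function
    accepts it, so older library versions keep working."""
    params = inspect.signature(unwrap_fn).parameters
    if "keep_fp32_wrapper" in params:
        return unwrap_fn(model, keep_fp32_wrapper=keep_fp32_wrapper)
    return unwrap_fn(model)


# Stand-ins for the old and new API shapes:
def old_unwrap(model):
    return ("old", model)


def new_unwrap(model, keep_fp32_wrapper=False):
    return ("new", model, keep_fp32_wrapper)
```

Feature-detecting the keyword via the signature is more robust than comparing version strings, since it keys on the actual capability rather than a release number.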
-
Pedro Cuenca authored
Remove spurious arg in training scripts.
-
- 09 Dec, 2022 3 commits
-
-
Patrick von Platen authored
* do not automatically enable xformers
* uP
-
Haofan Wang authored
* Update requirements.txt
* Update requirements_flax.txt
* Update requirements.txt
* Update requirements_flax.txt
* Update requirements.txt
* Update requirements_flax.txt
-
SkyTNT authored
-
- 08 Dec, 2022 1 commit
-
-
Patrick von Platen authored
* uP
* uP
-
- 07 Dec, 2022 3 commits
-
-
Ben Sherman authored
Easy fix for an undefined name in train_dreambooth.py: import_model_class_from_model_name_or_path loads a pretrained model and refers to args.revision in a context where args is undefined. I modified the function to take revision as an argument and modified the invocation to pass in the revision from args. This seems to have been caused by a copy-and-paste.
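The shape of that fix (make the dependency an explicit parameter rather than a closed-over global) can be sketched as follows; the function body is a placeholder that only echoes its inputs, since the real function inspects a model config:

```python
def import_model_class_sketch(model_name, revision=None):
    """Sketch of the fix: `revision` is an explicit argument, so the
    function no longer depends on a module-level `args` existing.
    The returned dict is a placeholder for the real class lookup."""
    return {"model": model_name, "revision": revision}


# Call site: pass the value from args explicitly instead of relying
# on the function reaching into global scope.
result = import_model_class_sketch("some-model", revision="fp16")
```

Explicit parameters make the function usable from any caller, including ones where no `args` namespace exists, which is exactly how the original bug arose.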
-
Nathan Lambert authored
* init docs update
* style
* fix bad colab formatting, add pipeline comment
* update todo
-
SkyTNT authored
* fix lpw_stable_diffusion
* rollback preprocess_mask resample
-
- 06 Dec, 2022 1 commit
-
-
Suraj Patil authored
make collate_fn global
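Making `collate_fn` global matters because a function nested inside `main()` cannot be pickled, which breaks multi-process data loading. A minimal sketch of a module-level collate function (the field names are illustrative):

```python
def collate_fn(examples):
    """Module-level collate function: defined at top level (not nested
    inside main()), so it is picklable and can be shipped to DataLoader
    worker processes. Batches a list of per-example dicts into one dict
    of lists."""
    return {
        "pixel_values": [e["pixel_values"] for e in examples],
        "input_ids": [e["input_ids"] for e in examples],
    }
```

A real training script would stack tensors here instead of building lists, but the structural point is the same: top-level definition, pure function of its inputs.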
-