"vscode:/vscode.git/clone" did not exist on "cff002b82447a5bed197be1a39ca3e338cd6aa19"
- 18 Jan, 2023 1 commit
-
-
Patrick von Platen authored
* [Lora] first upload
* add first lora version
* upload
* more
* first training
* up
* correct
* improve
* finish loaders and inference
* up
* up
* fix more
* up
* finish more
* finish more
* up
* up
* change year
* revert year change
* Change lines
* Add cloneofsimo as co-author.
* finish
* fix docs
* Apply suggestions from code review
* upload
* finish

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
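The "finish loaders and inference" item in the commit above covers loading trained LoRA attention weights back into a pipeline for generation. A minimal sketch of that inference path, assuming a Stable Diffusion v1-5 base model and a placeholder weights path (the exact loader API may differ across diffusers versions):

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model the LoRA weights were trained against (placeholder model id).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA attention-processor weights saved by the training script;
# the path is a placeholder for wherever training wrote them.
pipe.unet.load_attn_procs("path/to/lora_weights")

# The LoRA contribution can typically be scaled at inference time.
image = pipe(
    "a photo of sks dog in a bucket",
    num_inference_steps=25,
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image.save("lora_sample.png")
```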
-
- 16 Jan, 2023 2 commits
-
-
Pedro Cuenca authored
Fix a couple typos in Dreambooth readme.
-
Sayak Paul authored
* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 27 Dec, 2022 1 commit
-
-
Katsuya authored
* Make xformers optional even if it is available
* Raise exception if xformers is used but not available
* Rename use_xformers to enable_xformers_memory_efficient_attention
* Add a note about xformers in README
* Reformat code style
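After the rename described above, memory-efficient attention is opt-in on the pipeline object, and diffusers raises an error if xformers is requested but not installed. A hedged sketch of the intended usage (model id is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Opt in explicitly; this is no longer enabled just because xformers is importable.
try:
    pipe.enable_xformers_memory_efficient_attention()
except Exception as err:
    # If xformers is missing or unusable, diffusers raises instead of silently ignoring it.
    print(f"xformers unavailable, falling back to default attention: {err}")
```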
-
- 20 Dec, 2022 1 commit
-
-
Pedro Cuenca authored
* Section header for in-painting, inference from checkpoint.
* Inference: link to section to perform inference from checkpoint.
* Move Dreambooth in-painting instructions to the proper place.
-
- 18 Dec, 2022 1 commit
-
-
Patrick von Platen authored
-
- 06 Dec, 2022 2 commits
-
-
Suraj Patil authored
* add check_min_version for examples
* move __version__ to the top
* Apply suggestions from code review
* fix comment
* fix error_message
* adapt the install message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
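check_min_version guards an example script against being run with an older installed diffusers than the script expects. A minimal sketch; the version string is a placeholder for whatever the example requires:

```python
# At the top of an example script such as train_dreambooth.py.
from diffusers.utils import check_min_version

# Raises an error with an install hint if the installed diffusers is older
# than the example expects; the exact version string here is a placeholder.
check_min_version("0.10.0.dev0")
```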
-
Will Berman authored
-
- 05 Dec, 2022 1 commit
-
-
Patrick von Platen authored
* Research folder
* Update examples/research_projects/README.md
* up
-
- 02 Dec, 2022 3 commits
-
-
Adalberto authored
* Create train_dreambooth_inpaint.py: train_dreambooth.py adapted to work with the inpaint model, generating random masks during the training
* Update train_dreambooth_inpaint.py: refactored train_dreambooth_inpaint with black
* Update train_dreambooth_inpaint.py
* Update train_dreambooth_inpaint.py
* Update train_dreambooth_inpaint.py: fix prior preservation
* add instructions to readme, fix SD2 compatibility
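The inpaint variant needs a mask per training image, and the commit above generates them randomly during training rather than reading them from disk. A purely illustrative sketch of one way to build such a mask (this is a hypothetical helper, not the script's actual function):

```python
import torch

def random_rect_mask(height: int, width: int) -> torch.Tensor:
    """Return a (1, H, W) float mask with a random rectangle set to 1 (hypothetical helper)."""
    mask = torch.zeros(1, height, width)
    # Pick a random box covering roughly 10-50% of each dimension.
    box_h = torch.randint(height // 10, height // 2, (1,)).item()
    box_w = torch.randint(width // 10, width // 2, (1,)).item()
    top = torch.randint(0, height - box_h, (1,)).item()
    left = torch.randint(0, width - box_w, (1,)).item()
    mask[:, top:top + box_h, left:left + box_w] = 1.0
    return mask

mask = random_rect_mask(512, 512)  # paired with the image in the training batch
```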
-
Will Berman authored
-
Will Berman authored
-
- 30 Nov, 2022 1 commit
-
-
Patrick von Platen authored
* [Dreambooth] Make compatible with alt diffusion
* make style
* add example
-
- 28 Nov, 2022 1 commit
-
-
Suraj Patil authored
* add get_velocity
* add v prediction for training
* fix saving
* add revision arg
* fix saving
* save checkpoints dreambooth
* fix saving embeds
* add instruction in readme
* quality
* noise_pred -> model_pred
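With v-prediction support, the training target depends on the scheduler's prediction_type, and the model output is renamed model_pred because it is no longer always predicted noise. A hedged sketch of that target selection, following the pattern of the example scripts rather than quoting them:

```python
import torch.nn.functional as F

def dreambooth_loss(noise_scheduler, unet, latents, noise, timesteps, encoder_hidden_states):
    """Compute the training loss for either epsilon- or v-prediction schedulers (sketch)."""
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    if noise_scheduler.config.prediction_type == "epsilon":
        target = noise
    elif noise_scheduler.config.prediction_type == "v_prediction":
        # Velocity target, computed with the newly added scheduler.get_velocity().
        target = noise_scheduler.get_velocity(latents, noise, timesteps)
    else:
        raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")

    # Called model_pred because it may be predicted noise or predicted velocity.
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
    return F.mse_loss(model_pred.float(), target.float(), reduction="mean")
```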
-
- 22 Nov, 2022 1 commit
-
-
Suraj Patil authored
* use accelerator to check mixed_precision
* default `mixed_precision` to `None`
* pass mixed_precision to accelerate launch
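After this change, the script asks the Accelerator which mixed-precision mode is actually in effect, whether it was set via the CLI flag or via accelerate launch, instead of trusting its own argument. A hedged sketch of the pattern:

```python
import argparse
import torch
from accelerate import Accelerator

parser = argparse.ArgumentParser()
# Defaulting to None lets a value configured through
# `accelerate launch --mixed_precision=...` take effect.
parser.add_argument("--mixed_precision", choices=["no", "fp16", "bf16"], default=None)
args = parser.parse_args()

accelerator = Accelerator(mixed_precision=args.mixed_precision)

# Query the accelerator, not the raw argument, to pick the weight dtype.
weight_dtype = torch.float32
if accelerator.mixed_precision == "fp16":
    weight_dtype = torch.float16
elif accelerator.mixed_precision == "bf16":
    weight_dtype = torch.bfloat16
```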
-
- 15 Nov, 2022 1 commit
-
-
Glenn 'devalias' Grant authored
* add 'conda install cudatoolkit' to dreambooth 'training on 16GB' example (fixes https://github.com/huggingface/diffusers/issues/1207)
* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
- 02 Nov, 2022 1 commit
-
-
Jonathan Rahn authored
Update README.md: fixed typo
-
- 27 Oct, 2022 2 commits
-
-
Suraj Patil authored
-
Duong A. Nguyen authored
* [Flax] Add DreamBooth
* fix sample rng
* style
* not reuse rng
* add dtype for mixed precision training
* Add Flax example
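The "not reuse rng" fix above is about splitting the JAX PRNG key so sampling never sees the same randomness twice. An illustrative sketch of that pattern (shapes and loop are made up for the example, not taken from the script):

```python
import jax

rng = jax.random.PRNGKey(0)

for step in range(3):
    # Split instead of reusing: each step gets a fresh, independent key.
    rng, sample_rng = jax.random.split(rng)
    noise = jax.random.normal(sample_rng, (4, 64, 64, 4))
```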
-
- 20 Oct, 2022 2 commits
-
-
Suraj Patil authored
don't use safety check when generating prior images
-
Hanusz Leszek authored
Add --sample_batch_size=1 to the 8 GB dreambooth script
-
- 18 Oct, 2022 1 commit
-
-
Suraj Patil authored
* allow fine-tuning text encoder
* fix a few things
* update readme
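When text-encoder fine-tuning is enabled, the text encoder's parameters join the optimizer alongside the UNet's; otherwise it stays frozen. A hedged sketch mirroring the pattern used in the example scripts, not the verbatim code:

```python
import itertools

def trainable_params(unet, text_encoder, train_text_encoder: bool):
    """Select which parameters the optimizer should update (sketch)."""
    if train_text_encoder:
        # Fine-tune both networks jointly.
        return itertools.chain(unet.parameters(), text_encoder.parameters())
    # Otherwise keep the text encoder frozen.
    text_encoder.requires_grad_(False)
    return unet.parameters()

# optimizer = torch.optim.AdamW(trainable_params(unet, text_encoder, True), lr=5e-6)
```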
-
- 14 Oct, 2022 1 commit
-
-
Omar Sanseviero authored
-
- 10 Oct, 2022 1 commit
-
-
Henrik Forstén authored
* Support deepspeed
* Dreambooth DeepSpeed documentation
* Remove unnecessary casts, documentation: due to recent commits, some casts to half precision are not necessary anymore. Mention that DeepSpeed's version of Adam is about 2x faster.
* Review comments
-
- 05 Oct, 2022 2 commits
-
-
Patrick von Platen authored
up
-
Suraj Patil authored
remove use_auth_token
-
- 04 Oct, 2022 1 commit
-
-
Yuta Hayashibe authored
* Fix typos
* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 29 Sep, 2022 1 commit
-
-
Suraj Patil authored
update transformers version in example
-
- 27 Sep, 2022 2 commits
-
-
Suraj Patil authored
update install section
-
Zhenhuan Liu authored
* Add training example for DreamBooth.
* Fix bugs.
* Update readme and default hyperparameters.
* Reformatting code with black.
* Update for multi-gpu training.
* Apply suggestions from code review
* improve sampling
* fix autocast
* improve sampling more
* fix saving
* actually fix saving
* fix saving
* improve dataset
* fix collate_fn
* fix collate_fn
* fix collate_fn
* fix key name
* fix dataset
* fix collate_fn
* concat batch in collate_fn
* add grad ckpt
* add option for 8bit adam
* do two forward passes for prior preservation
* Revert "do two forward passes for prior preservation" (reverts commit 661ca4677e6dccc4ad596c2ee6ca4baad4159e95)
* add option for prior_loss_weight
* add option for clip grad norm
* add more comments
* update readme
* update readme
* Apply suggestions from code review
* add docstr for dataset
* update the saving logic
* Update examples/dreambooth/README.md
* remove unused imports

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
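The prior-preservation items above (the reverted double forward pass and the prior_loss_weight option) come down to running instance and class images through the model in a single concatenated batch and weighting the class-image loss separately. A hedged sketch of that loss, following the pattern in the example script rather than quoting it:

```python
import torch
import torch.nn.functional as F

def prior_preservation_loss(model_pred, target, prior_loss_weight: float = 1.0):
    """Instance loss plus weighted prior loss (sketch).

    The batch is assumed to be [instance images, class images] concatenated
    along dim 0, so a single forward pass covers both halves.
    """
    model_pred, prior_pred = torch.chunk(model_pred, 2, dim=0)
    target, prior_target = torch.chunk(target, 2, dim=0)

    instance_loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
    prior_loss = F.mse_loss(prior_pred.float(), prior_target.float(), reduction="mean")
    return instance_loss + prior_loss_weight * prior_loss
```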
-