- 30 Nov, 2022 1 commit

Patrick von Platen authored
* [Dreambooth] Make compatible with alt diffusion
* make style
* add example

- 28 Nov, 2022 1 commit

Suraj Patil authored
* add get_velocity
* add v prediction for training
* fix saving
* add revision arg
* fix saving
* save checkpoints dreambooth
* fix saving embeds
* add instruction in readme
* quality
* noise_pred -> model_pred
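
A minimal sketch of the v-prediction training target this commit adds, assuming the standard formulation v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x_0; this is a plain-Python stand-in for illustration, not the diffusers `scheduler.get_velocity` implementation itself:

```python
import math

def get_velocity(sample, noise, alpha_prod_t):
    """Velocity target for v-prediction: v = sqrt(a_t) * eps - sqrt(1 - a_t) * x_0.

    `sample` is the clean latent x_0 and `noise` the sampled epsilon, both as
    flat lists here for simplicity (tensors in the real training loop).
    """
    sqrt_alpha = math.sqrt(alpha_prod_t)
    sqrt_one_minus = math.sqrt(1.0 - alpha_prod_t)
    return [sqrt_alpha * n - sqrt_one_minus * x for x, n in zip(sample, noise)]

# With v prediction, training regresses the model output onto this target
# instead of onto the raw noise (hence the noise_pred -> model_pred rename).
velocity = get_velocity(sample=[1.0, 0.0], noise=[0.5, 0.5], alpha_prod_t=0.64)
```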

- 22 Nov, 2022 1 commit

Suraj Patil authored
* use accelerator to check mixed_precision
* default `mixed_precision` to `None`
* pass mixed_precision to accelerate launch
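
The gist of defaulting `mixed_precision` to `None` is that the flag, when absent, should defer to whatever the Accelerator environment was configured with (e.g. via `accelerate config`). A small sketch; `resolve_mixed_precision` is a hypothetical helper name, not a function from the script:

```python
def resolve_mixed_precision(cli_value, accelerator_value):
    """Fall back to the accelerate-configured precision when the CLI flag
    is omitted (None); an explicit CLI value always wins."""
    return accelerator_value if cli_value is None else cli_value

# CLI flag omitted -> inherit "fp16" from the accelerate environment.
inherited = resolve_mixed_precision(None, "fp16")
# CLI flag set -> it overrides the environment.
overridden = resolve_mixed_precision("bf16", "fp16")
```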

- 18 Nov, 2022 1 commit

Patrick von Platen authored
* [Examples] Correct path
* up

- 08 Nov, 2022 1 commit

Yuta Hayashibe authored
* Raise errors for invalid options without "--with_prior_preservation"
* Make --instance_prompt required
* Removed needless check because --instance_data_dir is marked as required
* Updated messages
* Use logger.warning instead of raising errors
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
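
A sketch of the validation this commit describes: flag names mirror the DreamBooth script's prior-preservation options, but the `check_args` helper and the exact messages are illustrative, and the final behavior (per the last bullet) warns rather than raises:

```python
import argparse
import warnings

def check_args(args):
    """Warn when prior-preservation options are supplied but the feature
    itself is off, instead of silently ignoring them."""
    if not args.with_prior_preservation:
        if args.class_data_dir is not None:
            warnings.warn("--class_data_dir is ignored without --with_prior_preservation")
        if args.class_prompt is not None:
            warnings.warn("--class_prompt is ignored without --with_prior_preservation")

parser = argparse.ArgumentParser()
parser.add_argument("--instance_prompt", required=True)  # now mandatory
parser.add_argument("--with_prior_preservation", action="store_true")
parser.add_argument("--class_data_dir", default=None)
parser.add_argument("--class_prompt", default=None)

args = parser.parse_args(["--instance_prompt", "a photo of sks dog",
                          "--class_prompt", "a dog"])
check_args(args)  # --class_prompt without prior preservation -> warning
```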

- 02 Nov, 2022 1 commit

Yuta Hayashibe authored

- 31 Oct, 2022 1 commit

Patrick von Platen authored
* [Better scheduler docs] Improve usage examples of schedulers
* finish
* fix warnings and add test
* finish
* more replacements
* adapt fast tests hf token
* correct more
* Apply suggestions from code review
* Integrate compatibility with euler
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

- 27 Oct, 2022 3 commits

Suraj Patil authored

Duong A. Nguyen authored
Set train mode for text encoder

Suraj Patil authored
make input_args optional

- 26 Oct, 2022 2 commits

Brian Whicheloe authored
* Make training code usable by external scripts: add parameter inputs to the training and argument-parsing functions so the script can be called externally
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
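
The usual pattern for making a CLI training script importable, as this commit and the later "make input_args optional" commit describe, is an optional argument list that defaults to `sys.argv`. A minimal sketch (the single `--learning_rate` flag here is illustrative):

```python
import argparse

def parse_args(input_args=None):
    """With input_args=None argparse reads sys.argv as usual; an external
    script can instead pass a list of strings, so the trainer can be
    imported and driven programmatically."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--learning_rate", type=float, default=5e-6)
    return parser.parse_args(input_args)

# An external caller builds the argument list itself:
args = parse_args(["--learning_rate", "1e-5"])
```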

Simon Kirsten authored

- 25 Oct, 2022 1 commit

Yuta Hayashibe authored
* Add --pretrained_model_name_revision option to train_dreambooth.py
* Renamed --pretrained_model_name_revision to --revision

- 20 Oct, 2022 2 commits

Hanusz Leszek authored
* Add an underscore to filename if it already exists
* Use sha1sum hash instead of adding underscores
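
Hashing the image content into the filename makes class-image names collision-free across re-runs, which is why the sha1 approach replaced the underscore-suffix one. A sketch with a hypothetical `class_image_name` helper (the real script's naming scheme may differ in detail):

```python
import hashlib

def class_image_name(index, image_bytes):
    """Derive a stable, collision-free filename from the image content:
    identical content always maps to the same name, different content
    never clashes on an underscore-suffix race."""
    digest = hashlib.sha1(image_bytes).hexdigest()
    return f"{index}-{digest}.jpg"

name = class_image_name(0, b"fake image bytes")
```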

Suraj Patil authored
don't use safety check when generating prior images

- 18 Oct, 2022 1 commit

Suraj Patil authored
* allow fine-tuning text encoder
* fix a few things
* update readme
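
The core of "allow fine-tuning text encoder" is which parameters the optimizer sees: only the UNet by default, the UNet plus the text encoder when the flag is on. A toy sketch with list stand-ins for parameter iterables (`trainable_params` is a hypothetical helper name):

```python
import itertools

def trainable_params(unet_params, text_encoder_params, train_text_encoder):
    """Chain the text-encoder parameters into the optimizer's parameter
    list only when --train_text_encoder is set."""
    if train_text_encoder:
        return list(itertools.chain(unet_params, text_encoder_params))
    return list(unet_params)

both = trainable_params(["unet.w1"], ["te.w1"], train_text_encoder=True)
unet_only = trainable_params(["unet.w1"], ["te.w1"], train_text_encoder=False)
```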

- 13 Oct, 2022 1 commit

Anton Lozhkov authored
Fix dreambooth loss type with prior preservation
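
With prior preservation the batch is the instance and class examples concatenated, so the loss must split the prediction in two and weight the prior half separately. A plain-Python sketch of that combined loss, assuming an even instance/class split; the real script works on tensors:

```python
def mse(pred, target):
    """Mean squared error over flat lists (tensor stand-in)."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def dreambooth_loss(model_pred, target, prior_loss_weight, with_prior_preservation):
    """Batch layout is [instance | class]; the class (prior) half gets its
    own MSE term, scaled by prior_loss_weight."""
    if not with_prior_preservation:
        return mse(model_pred, target)
    half = len(model_pred) // 2
    instance_loss = mse(model_pred[:half], target[:half])
    prior_loss = mse(model_pred[half:], target[half:])
    return instance_loss + prior_loss_weight * prior_loss

loss = dreambooth_loss([1.0, 0.0, 2.0, 2.0], [0.0, 0.0, 0.0, 0.0],
                       prior_loss_weight=1.0, with_prior_preservation=True)
```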

- 11 Oct, 2022 1 commit

spezialspezial authored

- 10 Oct, 2022 1 commit

Henrik Forstén authored
* Support deepspeed
* Dreambooth DeepSpeed documentation
* Remove unnecessary casts, documentation. Due to recent commits some casts to half precision are no longer necessary. Mention that DeepSpeed's version of Adam is about 2x faster.
* Review comments
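
For context, DeepSpeed integration is typically driven by a JSON config; an illustrative fragment (not the one shipped with this commit) enabling ZeRO stage 2 with CPU optimizer offload and fp16, the kind of setup that lets DreamBooth fit on smaller GPUs, might look like:

```json
{
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" }
  },
  "fp16": { "enabled": true },
  "train_micro_batch_size_per_gpu": 1
}
```

With accelerate, such a config is usually wired in via `accelerate config` rather than passed to the script directly.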

- 07 Oct, 2022 1 commit

YaYaB authored
* Fix push_to_hub for dreambooth and textual_inversion
* Use repo.push_to_hub instead of push_to_hub

- 05 Oct, 2022 3 commits

Patrick von Platen authored
up

Suraj Patil authored
remove use_auth_token

Pierre LeMoine authored
using already created `Path` in dataset

- 04 Oct, 2022 1 commit

Yuta Hayashibe authored
* Fix typos
* Update examples/dreambooth/train_dreambooth.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

- 03 Oct, 2022 1 commit

Suraj Patil authored
fix applying clip grad norm
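
Gradient clipping by global norm, the operation this fix concerns, rescales all gradients together when their combined L2 norm exceeds the threshold (in the script it runs after backward and before the optimizer step, via the accelerator). A scalar-list sketch of the math:

```python
import math

def clip_grad_norm(grads, max_norm):
    """Scale every gradient by max_norm / total_norm when the global L2
    norm exceeds max_norm; leave them untouched otherwise."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

clipped = clip_grad_norm([3.0, 4.0], max_norm=1.0)  # global norm 5.0 -> rescaled
```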

- 27 Sep, 2022 2 commits

Suraj Patil authored
don't pass tensor_format

Zhenhuan Liu authored
* Add training example for DreamBooth.
* Fix bugs.
* Update readme and default hyperparameters.
* Reformatting code with black.
* Update for multi-gpu training.
* Apply suggestions from code review
* improve sampling
* fix autocast
* improve sampling more
* fix saving
* actually fix saving
* fix saving
* improve dataset
* fix collate fn
* fix collate_fn
* fix collate fn
* fix key name
* fix dataset
* fix collate fn
* concat batch in collate fn
* add grad ckpt
* add option for 8bit adam
* do two forward passes for prior preservation
* Revert "do two forward passes for prior preservation" (reverts commit 661ca4677e6dccc4ad596c2ee6ca4baad4159e95)
* add option for prior_loss_weight
* add option for clip grad norm
* add more comments
* update readme
* update readme
* add docstr for dataset
* update the saving logic
* Update examples/dreambooth/README.md
* remove unused imports
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
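
The "concat batch in collate fn" bullet (which replaced the reverted two-forward-pass approach) means the collate function appends the class examples after the instance examples, so one forward pass covers both. A simplified sketch; the key names mirror the DreamBooth dataset but the values are toy stand-ins for image tensors and token ids:

```python
def collate_fn(examples, with_prior_preservation):
    """Stack class (prior) examples after the instance examples so the
    model processes both halves in a single forward pass."""
    pixel_values = [e["instance_images"] for e in examples]
    input_ids = [e["instance_prompt_ids"] for e in examples]
    if with_prior_preservation:
        pixel_values += [e["class_images"] for e in examples]
        input_ids += [e["class_prompt_ids"] for e in examples]
    return {"pixel_values": pixel_values, "input_ids": input_ids}

batch = collate_fn(
    [{"instance_images": "img_a", "instance_prompt_ids": [1],
      "class_images": "img_b", "class_prompt_ids": [2]}],
    with_prior_preservation=True,
)
```

Downstream, the loss then splits the prediction back into instance and prior halves.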

- 16 Sep, 2022 1 commit

Yuta Hayashibe authored
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to manual typo check currently (ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010)
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

- 15 Sep, 2022 1 commit

Kashif Rasul authored
* beta never changes, removed from state
* fix typos in docs
* removed unused var
* initial ddim flax scheduler
* import
* added dummy objects
* fix style
* fix typo
* docs
* fix typo in comment
* set return type
* added flax ddpm
* fix style
* remake
* pass PRNG key as argument and split before use
* fix doc string
* use config
* added flax Karras VE scheduler
* make style
* fix dummy
* fix ndarray type annotation
* replace returns a new state
* added lms_discrete scheduler
* use self.config
* add_noise needs state
* use config
* use config
* docstring
* added flax score sde ve
* fix imports
* fix typos
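
The recurring theme in these Flax scheduler commits ("add_noise needs state", "replace returns a new state") is JAX's functional style: all mutable values live in an explicit state object that methods take and return instead of mutating. A dependency-free sketch of that pattern (names are illustrative, not the Flax scheduler API):

```python
from typing import NamedTuple

class SchedulerState(NamedTuple):
    """Immutable scheduler state: everything that would be instance
    attributes in the PyTorch schedulers lives here instead."""
    step_index: int
    timesteps: tuple

def set_timesteps(num_steps):
    """Build a fresh state for a given number of inference steps."""
    return SchedulerState(step_index=0, timesteps=tuple(range(num_steps - 1, -1, -1)))

def step(state):
    """Return a *new* state advanced by one step; the input is untouched."""
    return state._replace(step_index=state.step_index + 1)

state = set_timesteps(4)
new_state = step(state)
```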

- 08 Sep, 2022 1 commit

Patrick von Platen authored
* Update black
* update table

- 07 Sep, 2022 1 commit

Suraj Patil authored
fix saving embeds

- 05 Sep, 2022 2 commits

Patrick von Platen authored
* add outputs for models
* add for pipelines
* finish schedulers
* better naming
* adapt tests as well
* replace dict access with . access
* make schedulers work
* finish
* correct readme
* make bcp compatible
* up
* small fix
* finish
* more fixes
* more fixes
* Apply suggestions from code review
* Update src/diffusers/models/vae.py
* Adapt model outputs
* Apply more suggestions
* finish examples
* correct
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
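
"Replace dict access with . access" refers to returning typed output objects with named fields instead of raw dicts. A minimal sketch of the pattern with a hypothetical `UNetOutput` (the library's actual output classes have different names and fields):

```python
from dataclasses import dataclass

@dataclass
class UNetOutput:
    """Named-field output: callers write `out.sample` rather than
    `out["sample"]`, so typos fail loudly and fields are discoverable."""
    sample: list

def forward(x):
    # Toy stand-in for a model forward pass.
    return UNetOutput(sample=[v * 2 for v in x])

out = forward([1, 2])
```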

Suraj Patil authored
use add_tokens

- 02 Sep, 2022 1 commit

Suraj Patil authored
* add textual inversion script
* make the loop work
* make coarse_loss optional
* save pipeline after training
* add arg pretrained_model_name_or_path
* fix saving
* fix gradient_accumulation_steps
* style
* fix progress bar steps
* scale lr
* add argument to accept style
* remove unused args
* scale lr using num gpus
* load tokenizer using args
* add checks when converting init token to id
* improve comments and style
* document args
* more cleanup
* fix default adamw args
* TextualInversionWrapper -> CLIPTextualInversionWrapper
* fix tokenizer loading
* Use the CLIPTextModel instead of wrapper
* clean dataset
* remove commented code
* fix accessing grads for multi-gpu
* more cleanup
* fix saving on multi-GPU
* init_placeholder_token_embeds
* add seed
* fix flip
* fix multi-gpu
* add utility methods in wrapper
* remove ipynb
* don't use wrapper
* don't pass vae and unet to accelerate prepare
* bring back accelerator.accumulate
* scale latents
* use only one progress bar for steps
* push_to_hub at the end of training
* remove unused args
* log some important stats
* store args in tensorboard
* pretty comments
* save the trained embeddings
* move the script up
* add requirements file
* more cleanup
* fix typo
* begin readme
* style -> learnable_property
* keep vae and unet in eval mode
* address review comments
* address more comments
* removed unused args
* add train command in readme
* update readme
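
The heart of textual inversion (see "init_placeholder_token_embeds" and the grad-accessing fixes above) is that only the new placeholder token's embedding row is learned; every other row of the embedding matrix stays frozen, typically by zeroing its gradients each step. A toy sketch with nested lists standing in for the gradient tensor (the helper name is illustrative):

```python
def zero_non_placeholder_grads(grads, placeholder_token_id):
    """Keep the gradient only for the placeholder token's embedding row;
    zero every other row so the rest of the vocabulary never moves."""
    return [g if i == placeholder_token_id else [0.0 for _ in g]
            for i, g in enumerate(grads)]

# 3-token vocabulary, 2-dim embeddings; token 2 is the new placeholder.
grads = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
masked = zero_non_placeholder_grads(grads, placeholder_token_id=2)
```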