- 16 Sep, 2022 1 commit
Yuta Hayashibe authored
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to manual typo check for now (Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010)
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
- 07 Sep, 2022 1 commit
Kashif Rasul authored
* initial score_sde_ve docs
* fixed typo
* fix VE term
- 06 Sep, 2022 1 commit
apolinario authored
- 05 Sep, 2022 1 commit
Patrick von Platen authored
* add outputs for models
* add for pipelines
* finish schedulers
* better naming
* adapt tests as well
* replace dict access with . access
* make schedulers work
* finish
* correct readme
* make bcp compatible
* up
* small fix
* finish
* more fixes
* more fixes
* Apply suggestions from code review
* Update src/diffusers/models/vae.py
* Adapt model outputs
* Apply more suggestions
* finish examples
* correct

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
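The "replace dict access with . access" item above describes moving from dict-style outputs to small output classes accessed by attribute. A minimal sketch of that pattern, assuming a dataclass-based output; the names `SchedulerOutput` and `prev_sample` are illustrative assumptions here, not necessarily the library's actual API:

```python
from dataclasses import dataclass
from typing import List

# Illustrative output class: callers write `out.prev_sample`
# instead of `out["prev_sample"]`. Names are assumptions, not
# necessarily the real diffusers API.
@dataclass
class SchedulerOutput:
    prev_sample: List[float]

out = SchedulerOutput(prev_sample=[0.1, 0.2])
print(out.prev_sample)  # attribute access replaces dict lookup
```

Attribute access lets IDEs and type checkers catch typos that a dict key lookup would only surface at runtime.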
- 02 Sep, 2022 2 commits
Suraj Patil authored
Suraj Patil authored
* add textual inversion script
* make the loop work
* make coarse_loss optional
* save pipeline after training
* add arg pretrained_model_name_or_path
* fix saving
* fix gradient_accumulation_steps
* style
* fix progress bar steps
* scale lr
* add argument to accept style
* remove unused args
* scale lr using num gpus
* load tokenizer using args
* add checks when converting init token to id
* improve comments and style
* document args
* more cleanup
* fix default adamw args
* TextualInversionWrapper -> CLIPTextualInversionWrapper
* fix tokenizer loading
* Use the CLIPTextModel instead of wrapper
* clean dataset
* remove commented code
* fix accessing grads for multi-gpu
* more cleanup
* fix saving on multi-GPU
* init_placeholder_token_embeds
* add seed
* fix flip
* fix multi-gpu
* add utility methods in wrapper
* remove ipynb
* don't use wrapper
* don't pass vae and unet to accelerate prepare
* bring back accelerator.accumulate
* scale latents
* use only one progress bar for steps
* push_to_hub at the end of training
* remove unused args
* log some important stats
* store args in tensorboard
* pretty comments
* save the trained embeddings
* move the script up
* add requirements file
* more cleanup
* fix typo
* begin readme
* style -> learnable_property
* keep vae and unet in eval mode
* address review comments
* address more comments
* removed unused args
* add train command in readme
* update readme
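The "scale lr" and "scale lr using num gpus" items above refer to the common trick of scaling the base learning rate by the effective batch size when training across devices. A hedged sketch of that computation; the function name, parameters, and exact formula are assumptions for illustration, not necessarily what the training script does:

```python
# Illustrative lr scaling: multiply the base learning rate by
# gradient-accumulation steps, per-device batch size, and number of
# processes, so the lr tracks the effective global batch size.
# Names and formula are assumptions, not the script's actual code.
def scaled_lr(base_lr: float, grad_accum_steps: int,
              batch_size: int, num_processes: int) -> float:
    return base_lr * grad_accum_steps * batch_size * num_processes

print(scaled_lr(1e-4, 1, 4, 2))
```

With a base lr of 1e-4, batch size 4, and 2 GPUs, this yields an lr of 8e-4, keeping per-example update magnitude roughly constant as the global batch grows.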