- 06 Aug, 2023 1 commit
takuoko authored
* add train_text_to_image_lora_sdxl.py
* add test and minor fix
* Update examples/text_to_image/README_sdxl.md (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* fix unwrap_model rule
* add invisible-watermark in requirements
* del invisible-watermark
* Update examples/text_to_image/README_sdxl.md (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* Update examples/text_to_image/README_sdxl.md (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* Update examples/text_to_image/train_text_to_image_lora_sdxl.py (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* del comment & update readme

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
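For context, loading the LoRA weights produced by train_text_to_image_lora_sdxl.py back into an SDXL pipeline looks roughly like the sketch below; the base checkpoint, output path, and prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base SDXL checkpoint plus the LoRA weights written by the training script.
# "path/to/sdxl-lora" is a placeholder for the script's --output_dir (or a Hub repo id).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("path/to/sdxl-lora")
pipe.to("cuda")

image = pipe("a prompt matching the fine-tuning dataset", num_inference_steps=30).images[0]
image.save("sample.png")
```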
- 20 Jun, 2023 1 commit
Sayak Paul authored
* refactor: readme serialized from the example when push_to_hub is True.
* fix: batch size arg.
* a bit better formatting
* minor fixes.
* add note on env.
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* condition wandb info better
* make mixed_precision assignment in cli args explicit.
* separate inference block for sample images.
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* address more comments.
* autocast mode.
* correct None image type problem.
* fix: list assignment.
* minor fix.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
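The separate inference block for sample images mentioned above amounts to something like this sketch; the checkpoint path, prompt, and image count are placeholders.

```python
import torch
import wandb
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint path and prompt, purely for illustration.
pipeline = StableDiffusionPipeline.from_pretrained("path/to/output_dir", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

images = []
with torch.autocast("cuda"):
    for _ in range(4):
        images.append(pipeline("a validation prompt", num_inference_steps=30).images[0])

# Log the samples to the active wandb run.
wandb.log({"validation": [wandb.Image(img, caption="a validation prompt") for img in images]})
```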
- 16 Jun, 2023 1 commit
Will Berman authored
add note on loading from a checkpoint
- 28 Apr, 2023 1 commit
Sayak Paul authored
* 👽 qol improvements for LoRA.
* better function name?
* fix: LoRA weight loading with the new format.
* address Patrick's comments.
* Apply suggestions from code review (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* change wording around encouraging the use of load_lora_weights().
* fix: function name.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
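The load_lora_weights() entry point that this commit points users toward is used roughly as follows; the base checkpoint id and LoRA path are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# One call loads the serialized LoRA layers into the pipeline,
# instead of attaching attention processors to the UNet by hand.
pipe.load_lora_weights("path/to/lora-checkpoint")  # placeholder path or Hub repo id
pipe.to("cuda")

image = pipe("a prompt in the fine-tuned style", num_inference_steps=30).images[0]
```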
- 18 Apr, 2023 1 commit
Sayak Paul authored
* feat: verification of multi-GPU support for select examples.
* add: multi-GPU training sections to the relevant doc pages.
- 06 Apr, 2023 1 commit
Sayak Paul authored
* improve stable unclip doc.
* feat: support for applying min-snr weighting for faster convergence.
* add: support for validation logging with wandb
* make not a required arg.
* fix: arg name.
* fix: cli args.
* fix: tracker config.
* fix: loss calculation.
* fix: validation logging.
* fix: unwrap call.
* fix: validation logging.
* fix: interval.
* fix: checkpointing push to hub.
* fix: https://github.com/huggingface/diffusers/commit/c8a2856c6d5e45577bf4c24dee06b1a4a2f5c050#commitcomment-106913193
* fix: norm group test for UNet3D.
* address PR comments.
* remove unneeded code.
* add: entry in the readme and docs.
* Apply suggestions from code review (Co-authored-by: Suraj Patil <surajp815@gmail.com>)

Co-authored-by: Suraj Patil <surajp815@gmail.com>
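A rough sketch of the min-SNR weighting added here, for the epsilon-prediction case; the gamma value and helper name are illustrative, and the SNR is computed directly from the scheduler's alphas_cumprod rather than via a library helper.

```python
import torch
import torch.nn.functional as F

def min_snr_mse_loss(model_pred, target, noise_scheduler, timesteps, snr_gamma=5.0):
    """Illustrative min-SNR-gamma weighted MSE loss for noise (epsilon) prediction."""
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t) for DDPM-style schedulers.
    alphas_cumprod = noise_scheduler.alphas_cumprod.to(timesteps.device)[timesteps]
    snr = alphas_cumprod / (1.0 - alphas_cumprod)
    # Per-timestep weight becomes min(SNR, gamma) / SNR instead of a uniform 1.
    weights = torch.minimum(snr, torch.full_like(snr, snr_gamma)) / snr
    loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
    loss = loss.mean(dim=list(range(1, loss.ndim))) * weights
    return loss.mean()
```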
- 23 Mar, 2023 1 commit
Mishig authored
- 07 Mar, 2023 1 commit
zxypro authored
- 06 Feb, 2023 1 commit
Pedro Cuenca authored
- 31 Jan, 2023 1 commit
Sayak Paul authored
* Update README.md
* Update README.md
- 25 Jan, 2023 1 commit
Sayak Paul authored
* add: a doc on LoRA support in diffusers.
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* apply PR suggestions.
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* remove visually incoherent elements.

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
- 23 Jan, 2023 1 commit
Sayak Paul authored
* example on fine-tuning with LoRA.
* apply make quality.
* fix: pipeline loading.
* Apply suggestions from code review (Co-authored-by: Suraj Patil <surajp815@gmail.com>, Patrick von Platen <patrick.v.platen@gmail.com>)
* apply suggestions for PR review. (Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>)
* apply make style and make quality.
* chore: remove mention of dreambooth from text2image.
* add: weight path and wandb run link.
* Apply suggestions from code review
* apply make style.
* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
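Inference with the trained LoRA weights went through the UNet's attention processors in this era; a sketch, with the base model, weight path, and prompt as placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

model_base = "runwayml/stable-diffusion-v1-5"
lora_weights = "path/to/lora/output_dir"  # placeholder for the training --output_dir or a Hub repo

pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16)
# Attach the trained LoRA attention processors to the UNet.
pipe.unet.load_attn_procs(lora_weights)
pipe.to("cuda")

image = pipe("A pokemon with blue eyes.", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("pokemon.png")
```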
- 27 Dec, 2022 1 commit
Katsuya authored
* Make xformers optional even if it is available
* Raise exception if xformers is used but not available
* Rename use_xformers to enable_xformers_memory_efficient_attention
* Add a note about xformers in README
* Reformat code style
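With this change the memory-efficient attention path is opt-in; when xformers is installed, the pipeline-side switch is enabled explicitly, for example (the checkpoint id is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Opt-in switch; raises if xformers is requested but not actually installed.
pipe.enable_xformers_memory_efficient_attention()
```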
- 06 Dec, 2022 1 commit
Suraj Patil authored
* add check_min_version for examples
* move __version__ to the top
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* fix comment
* fix error_message
* adapt the install message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
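The guard added to each example script is a one-liner along these lines; the version string below is illustrative.

```python
# At the top of an example script, right after the imports.
from diffusers.utils import check_min_version

# Raises an error with an install hint if the local diffusers install
# is older than what the example was written against.
check_min_version("0.10.0.dev0")  # illustrative version string
```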
- 02 Dec, 2022 1 commit
Pedro Gabriel Gengo Lourenço authored
Fixed doc on installing the training packages
- 28 Nov, 2022 1 commit
Suraj Patil authored
* add get_velocity
* add v prediction for training
* fix saving
* add revision arg
* fix saving
* save checkpoints dreambooth
* fix saving embeds
* add instruction in readme
* quality
* noise_pred -> model_pred
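The v-prediction support amounts to picking the regression target from the scheduler's prediction_type; a condensed sketch, where the wrapper function is illustrative and get_velocity() is the scheduler method added here:

```python
def compute_target(noise_scheduler, latents, noise, timesteps):
    """Pick the training target based on the scheduler's prediction_type (illustrative helper)."""
    if noise_scheduler.config.prediction_type == "epsilon":
        return noise
    if noise_scheduler.config.prediction_type == "v_prediction":
        # v = sqrt(alpha_bar_t) * noise - sqrt(1 - alpha_bar_t) * latents
        return noise_scheduler.get_velocity(latents, noise, timesteps)
    raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
```

The model output is then regressed against this target with the usual MSE loss, hence the noise_pred -> model_pred rename.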
- 22 Nov, 2022 1 commit
Suraj Patil authored
* use accelerator to check mixed_precision
* default `mixed_precision` to `None`
* pass mixed_precision to accelerate launch
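After this change, the scripts take the effective precision from the Accelerator itself rather than hard-coding a CLI default; roughly:

```python
import torch
from accelerate import Accelerator

# mixed_precision=None means "defer to `accelerate launch --mixed_precision=...`
# or `accelerate config`"; passing "fp16"/"bf16" explicitly would override it.
accelerator = Accelerator(mixed_precision=None)

# Read the precision actually in effect back from the accelerator.
weight_dtype = torch.float32
if accelerator.mixed_precision == "fp16":
    weight_dtype = torch.float16
elif accelerator.mixed_precision == "bf16":
    weight_dtype = torch.bfloat16
```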
- 28 Oct, 2022 1 commit
Pedro Cuenca authored
* Update training and fine-tuning docs.
* Update examples README.
* Update README.
* Add Flax fine-tuning section.
* Accept suggestion (Co-authored-by: Anton Lozhkov <anton@huggingface.co>)
* Accept suggestion (Co-authored-by: Anton Lozhkov <anton@huggingface.co>)

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
- 27 Oct, 2022 2 commits
Suraj Patil authored
Duong A. Nguyen authored
* [Flax] Add finetune Stable Diffusion
* temporary fix
* drop_last and seed
* add dtype for mixed precision training
* style
* Add Flax example
- 11 Oct, 2022 1 commit
Suraj Patil authored
* begin text2image script
* loading the datasets, preprocessing & transforms
* handle input features correctly
* add gradient checkpointing support
* fix output names
* run unet in train mode not text encoder
* use no_grad instead of freezing params
* default max steps None
* pad to longest
* don't pad when tokenizing
* fix encode on multi gpu
* fix stupid bug
* add random flip
* add ema
* fix ema
* put ema on cpu
* improve EMA model
* contiguous_format
* don't wrap vae and text encoder in accelerate
* remove no_grad
* use randn_like
* fix resize
* improve few things
* log epoch loss
* set log level
* don't log each step
* remove max_length from collate
* style
* add report_to option
* make scale_lr false by default
* add grad clipping
* add an option to use 8bit adam
* fix logging in multi-gpu, log every step
* more comments
* remove eval for now
* address review comments
* add requirements file
* begin readme
* fix typo
* fix push to hub
* populate readme
* update readme
* remove use_auth_token from the script
* address some review comments
* better mixed precision support
* remove redundant to
* create ema model early
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* better description for train_data_dir
* add diffusers in requirements
* update dataset_name_mapping
* update readme
* add inference example

Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
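The inference example added to the README boils down to reloading the fine-tuned weights into a pipeline, roughly as follows; the model path and prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

model_path = "path/to/output_dir"  # placeholder: the script's --output_dir or a Hub repo id
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a prompt matching the fine-tuning dataset", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sample.png")
```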