- 23 Jan, 2023 1 commit
Sayak Paul authored
* example on fine-tuning with LoRA.
* apply make quality.
* fix: pipeline loading.
* Apply suggestions from code review
* apply suggestions for PR review.
* apply make style and make quality.
* chore: remove mention of dreambooth from text2image.
* add: weight path and wandb run link.
* make style

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
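For context, a minimal inference sketch for weights produced by this example. `load_attn_procs` is the LoRA loading entry point from this era of diffusers; the base model id and the output directory name are assumptions for illustration:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the frozen base model, then layer the trained LoRA attention weights on top.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed base model
)
pipe.unet.load_attn_procs("sd-pokemon-model-lora")  # hypothetical --output_dir
pipe.to("cuda")

image = pipe("a cute green pokemon with big eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```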
- 20 Jan, 2023 2 commits
- 19 Jan, 2023 1 commit
Anton Lozhkov authored
* improve EMA
* style
* one EMA model
* quality
* fix tests
* fix test
* Apply suggestions from code review
* reorganise the unconditional script
* backwards compatibility
* default to init values for some args
* fix ort script
* issubclass => isinstance
* update state_dict
* docstr
* doc
* use .to if device is passed
* deprecate device
* make flake happy
* fix typo

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
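A sketch of the consolidated `EMAModel` API (the "one EMA model" above), assuming the `diffusers.training_utils` location; the Linear module stands in for a real UNet:

```python
import torch
from diffusers.training_utils import EMAModel

model = torch.nn.Linear(4, 4)  # stand-in for the UNet
ema = EMAModel(model.parameters(), decay=0.9999)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(10):
    loss = model(torch.randn(2, 4)).pow(2).mean()
    loss.backward()
    opt.step()
    opt.zero_grad()
    ema.step(model.parameters())  # update the shadow parameters after each optimizer step

ema.copy_to(model.parameters())  # load the averaged weights for eval/saving
```

The refactor also kept `ema.to(device)` as the way to move the shadow copy (the old `device` argument is deprecated per the bullets above).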
- 18 Jan, 2023 1 commit
Patrick von Platen authored
* [Lora] first upload
* add first lora version
* upload
* more
* first training
* up
* correct
* improve
* finish loaders and inference
* fix more
* finish more
* change year
* revert year change
* Change lines
* Add cloneofsimo as co-author.
* finish
* fix docs
* Apply suggestions from code review

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
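Roughly what the training side of these loaders does: attach a trainable low-rank processor to every attention module while the UNet itself stays frozen. A sketch using the 0.12-era module paths (they have since moved); the model id is an assumption:

```python
from diffusers import UNet2DConditionModel
from diffusers.loaders import AttnProcsLayers
from diffusers.models.cross_attention import LoRACrossAttnProcessor

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # assumed base model
)

lora_attn_procs = {}
for name in unet.attn_processors.keys():
    # self-attention (attn1) has no text conditioning, so no cross_attention_dim
    cross_attention_dim = (
        None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    )
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    else:  # down_blocks
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]
    lora_attn_procs[name] = LoRACrossAttnProcessor(
        hidden_size=hidden_size, cross_attention_dim=cross_attention_dim
    )

unet.set_attn_processor(lora_attn_procs)
lora_layers = AttnProcsLayers(unet.attn_processors)  # only these parameters are trained
```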
- 04 Jan, 2023 2 commits
Yasyf Mohamedali authored
* Support training SD V2 with Flax. Mostly involves supporting a v_prediction scheduler. The implementation in #1777 doesn't take into account a recent refactor of `scheduling_utils_flax`, so this should be used instead.
* Add to other top-level files.
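The core of v_prediction support is the velocity target. A self-contained sketch of the formula; the function name and signature here are illustrative, not the exact Flax scheduler API:

```python
import jax.numpy as jnp

def get_velocity(sample, noise, alphas_cumprod, timesteps):
    # v-prediction target: v_t = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x_0
    sqrt_alpha_bar = jnp.sqrt(alphas_cumprod[timesteps])[:, None, None, None]
    sqrt_one_minus = jnp.sqrt(1.0 - alphas_cumprod[timesteps])[:, None, None, None]
    return sqrt_alpha_bar * noise - sqrt_one_minus * sample
```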
Manfred Lindmark authored
Fix resume step in the train_text_to_image example.
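The bookkeeping in question, simplified (it ignores gradient accumulation); checkpoint directories follow the scripts' `checkpoint-<global_step>` convention and the numbers are illustrative:

```python
# Map the saved global step back to an epoch index and a number of batches to skip.
path = "checkpoint-1500"  # value passed via --resume_from_checkpoint
global_step = int(path.split("-")[1])

num_update_steps_per_epoch = 100  # derived from the dataloader length in the script
first_epoch = global_step // num_update_steps_per_epoch
resume_step = global_step % num_update_steps_per_epoch  # batches to skip in that epoch
```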
- 02 Jan, 2023 2 commits
Pedro Cuenca authored
Fixes to the help text for `report_to` in the training scripts.
Suraj Patil authored
* misc fixes
* more comments
* Update examples/textual_inversion/textual_inversion.py
* set transformers verbosity to warning

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
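The verbosity change amounts to one call; a sketch:

```python
import transformers

# Keep example-script logs readable: only warnings and errors from transformers.
transformers.utils.logging.set_verbosity_warning()
```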
- 30 Dec, 2022 2 commits
Suraj Patil authored
* allow using non-EMA weights for training
* Apply suggestions from code review
* address more review comments
* reorganise a few lines
* always pad text to max_length to match original training
* fix collate_fn
* remove unused code
* don't prepare ema_unet, don't register lr scheduler
* style
* assert => ValueError
* add allow_tf32
* set log level
* fix comment

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
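A sketch of the two knobs this touched. The `non-ema` revision name is an assumption (some Stable Diffusion repos publish such a branch); check the repo you actually use:

```python
import torch
from diffusers import UNet2DConditionModel

# What --allow_tf32 toggles: faster matmuls on Ampere+ GPUs at slightly lower precision.
torch.backends.cuda.matmul.allow_tf32 = True

# Train from the non-EMA weights (assumed branch name "non-ema").
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", revision="non-ema"
)
```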
Pedro Cuenca authored
* Fix EMA decay and clarify nomenclature.
* Rename var.
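For reference, the decay warm-up commonly used by EMA implementations, including the one in these scripts; a sketch with the usual constants:

```python
def ema_decay(step: int, max_decay: float = 0.9999) -> float:
    # Warm up: early steps average quickly, then the decay approaches max_decay.
    return min(max_decay, (1 + step) / (10 + step))

# e.g. ema_decay(0) == 0.1, ema_decay(90) == 0.91, ema_decay(10**6) == 0.9999
```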
- 27 Dec, 2022 2 commits
Katsuya authored
* Make xformers optional even if it is available
* Raise an exception if xformers is used but not available
* Rename use_xformers to enable_xformers_memory_efficient_attention
* Add a note about xformers in the README
* Reformat code style
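After the rename, memory-efficient attention is an explicit opt-in; a sketch (model id assumed):

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import is_xformers_available

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed model
).to("cuda")

if is_xformers_available():
    # No longer automatic; raises if xformers is missing, hence the guard.
    pipe.enable_xformers_memory_efficient_attention()
```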
Christopher Friesen authored
- 20 Dec, 2022 1 commit
Simon Kirsten authored
* [Flax] Stateless schedulers, fixes and refactors
* Remove scheduling_common_flax and some renames
* Update src/diffusers/schedulers/scheduling_pndm_flax.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
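Stateless here means the scheduler never mutates itself: `from_pretrained` returns a scheduler plus a state object, and every call threads that state through. A sketch with a dummy sample and the UNet call elided; the repo id is an assumption:

```python
import jax.numpy as jnp
from diffusers import FlaxPNDMScheduler

scheduler, state = FlaxPNDMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"  # assumed repo
)

sample = jnp.zeros((1, 4, 64, 64))  # dummy latents
state = scheduler.set_timesteps(state, num_inference_steps=50, shape=sample.shape)

for t in state.timesteps:
    model_output = sample  # stand-in for the UNet prediction
    output = scheduler.step(state, model_output, t, sample)
    sample, state = output.prev_sample, output.state  # nothing is mutated in place
```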
- 15 Dec, 2022 1 commit
Pedro Cuenca authored
* Add state checkpointing to other training scripts
* Fix first_epoch
* Apply suggestions from code review
* Update Dreambooth checkpoint help message.
* Dreambooth docs: checkpoints, inference from a checkpoint.
* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
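The checkpointing pattern the scripts share, sketched with illustrative values; Accelerate's `save_state`/`load_state` capture model, optimizer, LR scheduler and RNG state:

```python
import os
from accelerate import Accelerator

accelerator = Accelerator()
output_dir, checkpointing_steps = "sd-model", 500  # illustrative values

# inside the training loop:
global_step = 500
if global_step % checkpointing_steps == 0 and accelerator.is_main_process:
    save_path = os.path.join(output_dir, f"checkpoint-{global_step}")
    accelerator.save_state(save_path)

# to resume later:
accelerator.load_state(os.path.join(output_dir, "checkpoint-500"))
```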
- 10 Dec, 2022 1 commit
Pedro Cuenca authored
Remove spurious arg in training scripts.
- 09 Dec, 2022 2 commits
Patrick von Platen authored
* do not automatically enable xformers
* up
Haofan Wang authored
* Update requirements.txt
* Update requirements_flax.txt
- 06 Dec, 2022 1 commit
Suraj Patil authored
* add check_min_version for examples
* move __version__ to the top
* Apply suggestions from code review
* fix comment
* fix error_message
* adapt the install message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
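What the guard looks like at the top of an example script; the version string is illustrative:

```python
from diffusers.utils import check_min_version

# Fail fast with an install hint if the local diffusers is older than
# the version this example was written against.
check_min_version("0.10.0.dev0")
```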
- 05 Dec, 2022 1 commit
Suraj Patil authored
Use from_pretrained to load the scheduler.
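That is, load the scheduler from the model repo instead of constructing it by hand; a sketch (repo id assumed):

```python
from diffusers import DDPMScheduler

noise_scheduler = DDPMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)
```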
- 02 Dec, 2022 1 commit
Pedro Gabriel Gengo Lourenço authored
Fixed the docs for installing the training packages.
- 28 Nov, 2022 1 commit
Suraj Patil authored
* add get_velocity
* add v prediction for training
* fix saving
* add revision arg
* save checkpoints dreambooth
* fix saving embeds
* add instruction in readme
* quality
* noise_pred -> model_pred
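The `get_velocity` addition lets the training loop pick its target from the scheduler's configured prediction type. A sketch of the loop body; `latents`, `noise`, `timesteps` and `model_pred` come from the surrounding training step:

```python
import torch.nn.functional as F

if noise_scheduler.config.prediction_type == "epsilon":
    target = noise  # predict the added noise (classic DDPM objective)
elif noise_scheduler.config.prediction_type == "v_prediction":
    target = noise_scheduler.get_velocity(latents, noise, timesteps)
else:
    raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")

loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
```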
- 22 Nov, 2022 1 commit
Suraj Patil authored
* use accelerator to check mixed_precision
* default `mixed_precision` to `None`
* pass mixed_precision to accelerate launch
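A sketch of asking Accelerate, rather than a script flag, which precision is actually in effect:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision=None)  # None defers to the accelerate config

weight_dtype = torch.float32
if accelerator.mixed_precision == "fp16":
    weight_dtype = torch.float16
elif accelerator.mixed_precision == "bf16":
    weight_dtype = torch.bfloat16
```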
- 18 Nov, 2022 1 commit
Patrick von Platen authored
* [Examples] Correct path
* up
- 07 Nov, 2022 1 commit
Duong A. Nguyen authored
Load the text encoder from its subfolder.
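Stable Diffusion repos keep each component in its own subfolder, so the text encoder loads like this (repo id assumed):

```python
from transformers import CLIPTextModel, CLIPTokenizer

text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="text_encoder"
)
tokenizer = CLIPTokenizer.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="tokenizer"
)
```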
- 31 Oct, 2022 1 commit
Patrick von Platen authored
* [Better scheduler docs] Improve usage examples of schedulers
* finish
* fix warnings and add test
* more replacements
* adapt fast tests hf token
* correct more
* Apply suggestions from code review
* Integrate compatibility with Euler

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
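The Euler compatibility means a pipeline's scheduler can be swapped by reusing its config; a sketch (model id assumed):

```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# pipe.scheduler.compatibles lists the interchangeable scheduler classes.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
```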
- 28 Oct, 2022 2 commits
Pedro Cuenca authored
* Update training and fine-tuning docs.
* Update examples README.
* Update README.
* Add Flax fine-tuning section.
* Accept suggestion

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Duong A. Nguyen authored
Fix jnp dtype.
- 27 Oct, 2022 2 commits
Suraj Patil authored
Duong A. Nguyen authored
* [Flax] Add finetune Stable Diffusion
* temporary fix
* drop_last and seed
* add dtype for mixed precision training
* style
* Add Flax example
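The dtype flag maps to how the Flax modules are instantiated; a sketch, assuming bf16 on TPU and the usual (model, params) return of Flax `from_pretrained`:

```python
import jax.numpy as jnp
from diffusers import FlaxUNet2DConditionModel

# bf16 computation for mixed-precision fine-tuning (params are returned separately).
unet, unet_params = FlaxUNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", dtype=jnp.bfloat16
)
```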
- 26 Oct, 2022 1 commit
Hu Ye authored
remove tensor_format in the new version
- 12 Oct, 2022 2 commits
pink-red authored
Suraj Patil authored
* fix ema
* style
* add comment about copy
* quality
- 11 Oct, 2022 1 commit
Suraj Patil authored
* begin text2image script
* loading the datasets, preprocessing & transforms
* handle input features correctly
* add gradient checkpointing support
* fix output names
* run unet in train mode, not text encoder
* use no_grad instead of freezing params
* default max steps to None
* pad to longest
* don't pad when tokenizing
* fix encode on multi-GPU
* fix stupid bug
* add random flip
* add ema
* fix ema
* put ema on cpu
* improve EMA model
* contiguous_format
* don't wrap vae and text encoder in accelerate
* remove no_grad
* use randn_like
* fix resize
* improve a few things
* log epoch loss
* set log level
* don't log each step
* remove max_length from collate
* style
* add report_to option
* make scale_lr false by default
* add grad clipping
* add an option to use 8-bit Adam
* fix logging in multi-GPU, log every step
* more comments
* remove eval for now
* address review comments
* add requirements file
* begin readme
* fix typo
* fix push to hub
* populate readme
* update readme
* remove use_auth_token from the script
* address some review comments
* better mixed precision support
* remove redundant to
* create ema model early
* Apply suggestions from code review
* better description for train_data_dir
* add diffusers in requirements
* update dataset_name_mapping
* add inference example

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: anton-l <anton@huggingface.co>
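And the inference example the README ends with, roughly; the output directory name and prompt are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# "sd-pokemon-model" is a hypothetical --output_dir from train_text_to_image.py.
pipe = StableDiffusionPipeline.from_pretrained("sd-pokemon-model", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(prompt="yoda").images[0]
image.save("yoda-pokemon.png")
```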