- 28 Sep, 2024 1 commit
Sayak Paul authored
* fix: retain memory utility
* fix
* quality
* free_memory
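The `free_memory` entry above refers to a small helper that releases cached accelerator memory between training phases. A minimal sketch of such a utility, assuming only that torch may or may not be installed (the guarded import is our addition, not part of the actual diffusers implementation):

```python
import gc


def free_memory():
    """Run the garbage collector and, when a CUDA device is
    available, release PyTorch's cached allocator blocks."""
    gc.collect()
    try:
        import torch  # optional dependency in this sketch

        if torch.cuda.is_available():
            torch.cuda.empty_cache()
            torch.cuda.ipc_collect()
    except ImportError:
        pass
```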
-
- 03 Sep, 2024 1 commit
Sayak Paul authored
-
- 27 Aug, 2024 1 commit
Marçal Comajoan Cara authored
Avoid "FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations."

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 24 Jun, 2024 1 commit
drhead authored
Add extra performance features for EMAModel, torch._foreach operations and better support for non-blocking CPU offloading (#7685)
* Add support for _foreach operations and non-blocking to EMAModel
* default foreach to false
* add non-blocking EMA offloading to SD1.5 T2I example script
* fix whitespace
* move foreach to cli argument
* linting
* Update README.md re: EMA weight training
* correct args.foreach_ema
* add tests for foreach ema
* code quality
* add foreach to from_pretrained
* default foreach false
* fix linting

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: drhead <a@a.a>
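The per-parameter EMA update that the `_foreach` variant batches is just a linear interpolation, `shadow = decay * shadow + (1 - decay) * param`. A plain-Python sketch of that math (the PR itself fuses this across all tensors with `torch._foreach_*` ops and can keep the shadow copy on CPU via non-blocking transfers; the function name here is illustrative):

```python
def ema_update(shadow, params, decay=0.999):
    """One EMA step over parallel lists of scalar parameter values."""
    return [decay * s + (1.0 - decay) * p for s, p in zip(shadow, params)]
```

With real tensors, the same lerp can be applied to a whole parameter list in one fused call instead of a Python loop, which is where the `foreach` speedup comes from.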
-
- 18 Jun, 2024 1 commit
Sayak Paul authored
refactor the density and weighting utilities.
-
- 01 Jun, 2024 1 commit
Co-authored-by: Jimmy <39@🇺🇸.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 27 Apr, 2024 1 commit
A new function, compute_dream_and_update_latents, has been added to the training utilities; it enables DREAM rectified training in line with the paper https://arxiv.org/abs/2312.00210. The method can be used via an extra argument in the train_text_to_image.py script.

Co-authored-by: Jimmy <39@🇺🇸.com>
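In scalar form, DREAM folds the model's own prediction error back into both the noisy input and the regression target, weighted by the noise level. A rough sketch of the idea under the epsilon-prediction parameterization (the variable names and exact scaling here are our reading of the paper, not the library's implementation):

```python
def dream_adjust(noisy_latent, noise, pred_noise,
                 sqrt_one_minus_alpha_cumprod, detail_preservation=1.0):
    """Blend the model's prediction error back into the input/target pair."""
    # lam controls how strongly the model's own error is folded back in
    lam = sqrt_one_minus_alpha_cumprod ** detail_preservation
    delta = lam * (noise - pred_noise)
    new_input = noisy_latent + sqrt_one_minus_alpha_cumprod * delta
    new_target = noise + delta
    return new_input, new_target
```

When the prediction is already perfect, `delta` is zero and the pair is unchanged, which is the sanity check the formulation must satisfy.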
-
- 09 Mar, 2024 1 commit
Mengqing Cao authored
* Add npu support
* fix for code quality check
* fix for code quality check
-
- 01 Feb, 2024 1 commit
YiYi Xu authored
* add
* remove transformer

Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
- 31 Jan, 2024 1 commit
Patrick von Platen authored
-
- 15 Jan, 2024 1 commit
Sayak Paul authored
create a utility for casting the lora params during training.
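Casting only the trainable (e.g. LoRA) parameters to full precision, while the frozen base model stays in half precision, avoids fp16 gradient-underflow problems at little memory cost. A framework-free sketch of the selection logic, using plain dicts in place of real tensors (the function name and dict layout are hypothetical):

```python
def cast_trainable_to_fp32(params):
    """Upcast only parameters that require gradients; leave frozen ones alone."""
    for p in params:
        if p["requires_grad"] and p["dtype"] == "float16":
            p["dtype"] = "float32"
    return params
```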
-
- 12 Jan, 2024 1 commit
Sayak Paul authored
* fix: training resume from fp16
* add: comment
* remove residue from another branch
* remove more residues
* thanks to Younes; no hacks
* style
* clean things a bit and modularize _set_state_dict_into_text_encoder
* add comment about the fix detailed
-
- 05 Jan, 2024 1 commit
dg845 authored
* Make WDS pipeline interpolation type configurable.
* Make the VAE encoding batch size configurable.
* Make lora_alpha and lora_dropout configurable for LCM LoRA scripts.
* Generalize scalings_for_boundary_conditions function and make the timestep scaling configurable.
* Make LoRA target modules configurable for LCM-LoRA scripts.
* Move resolve_interpolation_mode to src/diffusers/training_utils.py and make interpolation type configurable in non-WDS script.
* apply suggestions from review
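The `scalings_for_boundary_conditions` mentioned above implements the consistency-model boundary condition: the skip and output coefficients must give c_skip = 1 and c_out = 0 at t = 0, so the model reduces to the identity at the data boundary. A sketch of the usual formula, where `sigma_data` and `timestep_scaling` are the configurable knobs the entry refers to (the defaults here are assumptions):

```python
def scalings_for_boundary_conditions(timestep, sigma_data=0.5, timestep_scaling=10.0):
    """Skip/output coefficients satisfying the consistency-model boundary condition."""
    scaled_t = timestep_scaling * timestep
    c_skip = sigma_data**2 / (scaled_t**2 + sigma_data**2)
    c_out = scaled_t / (scaled_t**2 + sigma_data**2) ** 0.5
    return c_skip, c_out
```

As the timestep grows, c_skip decays toward 0 and c_out approaches 1, smoothly handing the output over to the network.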
-
- 02 Dec, 2023 1 commit
Sayak Paul authored
* fix: duplicate unet prefix problem
* Update src/diffusers/loaders/lora.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 11 Oct, 2023 1 commit
Sayak Paul authored
* use loralinear instead of deprecated lora attn procs
* fix parameters()
* fix saving
* add back support for add kv proj
* fix: param accumulation
* propagate the changes
-
- 27 Sep, 2023 1 commit
Sayak Paul authored
add compute_snr() to training utils.
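For a DDPM-style scheduler, the signal-to-noise ratio at a timestep follows directly from the cumulative alpha product: SNR(t) = alpha_bar_t / (1 - alpha_bar_t). A scalar sketch of that computation, together with the min-SNR-gamma loss weight it is typically used for (the weighting scheme comes from the min-SNR paper, not from this commit; function names are illustrative):

```python
def snr(alpha_cumprod):
    """Signal-to-noise ratio implied by the cumulative alpha product."""
    return alpha_cumprod / (1.0 - alpha_cumprod)


def min_snr_weight(alpha_cumprod, gamma=5.0):
    """Min-SNR-gamma loss weight: clamp the SNR at gamma, then normalize by it."""
    s = snr(alpha_cumprod)
    return min(s, gamma) / s
```

Low-noise timesteps have very large SNR, so clamping at gamma stops them from dominating the loss.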
-
- 22 Jun, 2023 1 commit
Sayak Paul authored
-
- 22 May, 2023 1 commit
Patrick von Platen authored
* up
* fix more
* Apply suggestions from code review
* fix more
* fix more
* Check it
* Remove 16:8
* fix more
* fix more
* fix more
* up
* up
* Test only stable diffusion
* Test only two files
* up
* Try out spinning up processes that can be killed
* up
* Apply suggestions from code review
* up
* up
-
- 11 May, 2023 1 commit
Stas Bekman authored
* [deepspeed] partial ZeRO-3 support
* cleanup
* improve deepspeed fixes
* Improve
* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 06 Mar, 2023 1 commit
Will Berman authored
-
- 16 Feb, 2023 1 commit
Sayak Paul authored
* add store and restore() methods to EMAModel
* Update src/diffusers/training_utils.py
* make style with doc builder
* remove explicit listing
* Apply suggestions from code review
* chore: better variable naming
* better treatment of temp_stored_params
* make style
* remove temporary params from earth 🌎
* make fix-copies

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: patil-suraj <surajp815@gmail.com>
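`store()`/`restore()` let a training loop temporarily swap the EMA weights in for evaluation and then put the original training weights back. A minimal pure-Python model of those semantics, with scalars standing in for tensors (this class is illustrative, not the EMAModel API):

```python
class ShadowParams:
    """Toy model of EMAModel's store()/restore() round trip."""

    def __init__(self):
        self._stored = None

    def store(self, params):
        """Keep a copy of the current parameter values."""
        self._stored = list(params)

    def restore(self, params):
        """Write the stored values back in place and drop the copy."""
        for i, v in enumerate(self._stored):
            params[i] = v
        self._stored = None
```

Typical usage: `store(model_params)`, copy the EMA weights into the model, evaluate, then `restore(model_params)`.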
-
- 08 Feb, 2023 1 commit
Chenguo Lin authored
* EMA: fix `state_dict()` & add `cur_decay_value`
* EMA: fix a bug in `load_state_dict()`: 'float' object (`state_dict["power"]`) has no attribute 'get'
* del train_unconditional_ort.py
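The `load_state_dict()` bug above is the classic mix-up between dictionary lookup and attribute access: `state_dict["power"]` is already a float, so calling `.get` on it raises `AttributeError`. A tiny sketch of the corrected pattern (keys and the fallback value are illustrative):

```python
state_dict = {"decay": 0.9999, "power": 0.75}

# Buggy: state_dict["power"].get("power", 2 / 3)
#   -> AttributeError: 'float' object has no attribute 'get'
# Fixed: call .get on the dict itself, with a default for older checkpoints.
power = state_dict.get("power", 2 / 3)
```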
-
- 07 Feb, 2023 1 commit
Patrick von Platen authored
* better accelerated saving
* up
* finish
* finish
* uP
* up
* up
* fix
* Apply suggestions from code review
* correct ema
* Remove @
* up
* Apply suggestions from code review
* Update docs/source/en/training/dreambooth.mdx

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 29 Jan, 2023 1 commit
- 19 Jan, 2023 1 commit
Anton Lozhkov authored
* improve EMA
* style
* one EMA model
* quality
* fix tests
* fix test
* Apply suggestions from code review
* re organise the unconditional script
* backwards compatibility
* default to init values for some args
* fix ort script
* issubclass => isinstance
* update state_dict
* docstr
* doc
* use .to if device is passed
* deprecate device
* make flake happy
* fix typo

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 02 Jan, 2023 1 commit
YiYi Xu authored
* add a doc page for each pipeline under api/pipelines/stable_diffusion
* add pipeline examples to docstrings
* updated stable_diffusion_2 page
* updated default markdown syntax to list methods based on https://github.com/huggingface/diffusers/pull/1870
* add function decorator

Co-authored-by: yiyixuxu <yixu@Yis-MacBook-Pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 28 Jul, 2022 1 commit
Patrick von Platen authored
* [Vae and AutoencoderKL clean]
* save intermediate finished work
* more progress
* more progress
* finish modeling code
* save intermediate
* finish
* Correct tests
-
- 27 Jul, 2022 1 commit
Anton Lozhkov authored
* Add torch_device to the VE pipeline
* Mark the training test with slow
-
- 04 Jul, 2022 2 commits
Anton Lozhkov authored
* Catch unused params in DDP
* Fix proj_out, add test
-
Tanishq Abraham authored
ema model stepping done automatically now
-
- 27 Jun, 2022 2 commits
Patrick von Platen authored
-
anton-l authored
-