"tests/distributed/test_mp_dataloader.py" did not exist on "40950629b2b8f639d46a7efea0f40557108fbdcf"
- 25 Jul, 2024 3 commits
-
asfiyab-nvidia authored
* Update TensorRT img2img pipeline
* Update TensorRT version installed
* make style and quality
* Update examples/community/stable_diffusion_tensorrt_img2img.py
* Update examples/community/README.md
* Apply style and quality using ruff 0.1.5
Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: Tolga Cangöz <46008593+tolgacangoz@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
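For context, community pipelines like this one are loaded by file name through `custom_pipeline`. The sketch below is a hedged illustration of that loading path only; the base checkpoint, image URL, and generation arguments are assumptions, and the TensorRT/ONNX installation and engine-build steps the pipeline requires are not shown.

```python
# Hedged sketch: loading the community TensorRT img2img pipeline by its file name.
# Base checkpoint and image URL are placeholders; TensorRT setup is not shown.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",                    # assumed base checkpoint
    custom_pipeline="stable_diffusion_tensorrt_img2img",   # community file name stem
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("https://example.com/input.png")   # placeholder URL
image = pipe(
    prompt="a photo of an astronaut riding a horse",
    image=init_image,
).images[0]
image.save("out.png")
```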
-
Sayak Paul authored
* introduce `LoraBaseMixin` to promote reusability
* add more tests
* remove comments
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* rewrite fuse_lora a bit
* copy over load_lora_into_text_encoder
* address Dhruv's feedback
* fix-copies
* fix issubclass
* num_fused_loras
* remove mapping
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
* handle wuerstchen
* move lora to lora_pipeline.py
* fix documentation
* comment set_adapters()
* fix set_adapters() at the model level
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
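The refactor above centers on the pipeline-level LoRA helpers (`load_lora_weights`, `set_adapters`, `fuse_lora`/`unfuse_lora`). A minimal usage sketch follows; the base checkpoint and LoRA repo ids are placeholders and not part of the commit.

```python
# Hedged sketch of the pipeline-level LoRA helpers touched by this refactor.
# Checkpoint and LoRA repo ids are placeholders.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "some-org/some-sd-checkpoint", torch_dtype=torch.float16  # placeholder base model
).to("cuda")

# Load two LoRAs under named adapters and blend them.
pipe.load_lora_weights("some-user/toy-lora", adapter_name="toy")      # placeholder repo
pipe.load_lora_weights("some-user/pixel-lora", adapter_name="pixel")  # placeholder repo
pipe.set_adapters(["toy", "pixel"], adapter_weights=[1.0, 0.5])

# Fuse the active LoRAs into the base weights for faster inference,
# then unfuse to restore the original weights.
pipe.fuse_lora(lora_scale=0.8)
image = pipe("a robot, pixel art style").images[0]
pipe.unfuse_lora()
```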
- 23 Jul, 2024 2 commits
-
Dhruv Nair authored
update
-
akbaig authored
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
- 21 Jul, 2024 1 commit
-
Sayak Paul authored
* SD3 training fixes
* rewrite noise addition part to respect the eqn.
* styler
* Update examples/dreambooth/README_sd3.md
Co-authored-by: bghira <59658056+bghira@users.noreply.github.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
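The "rewrite noise addition part to respect the eqn." bullet refers to the rectified-flow formulation used for SD3, where the noisy latent is a linear interpolation between data and noise. A minimal sketch under assumed shapes and a simplified sigma draw; the script's actual sigma schedule and loss preconditioning may differ.

```python
# Hedged sketch of the rectified-flow noising step used in SD3-style training.
# Shapes and the sigma sampling here are simplified assumptions.
import torch

batch, channels, height, width = 2, 16, 64, 64
model_input = torch.randn(batch, channels, height, width)   # clean latents
noise = torch.randn_like(model_input)

# One sigma in [0, 1] per sample, broadcast over the latent dimensions.
sigmas = torch.rand(batch).view(-1, 1, 1, 1)

# z_t = (1 - sigma) * x + sigma * eps
noisy_model_input = (1.0 - sigmas) * model_input + sigmas * noise

# One common flow-matching target: the velocity pointing from data to noise.
target = noise - model_input
```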
-
- 18 Jul, 2024 1 commit
-
Sayak Paul authored
* remove resume_download
* fix: _fetch_index_file call
* remove resume_download from docs
-
- 17 Jul, 2024 1 commit
-
Tolga Cangöz authored
* Fix multi-GPU case
* Prefer the previously created `unwrap_model()` function for `torch.compile()` generalizability
* chore: update `unwrap_model()` to use `accelerator.unwrap_model()`
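A sketch of an `unwrap_model()` helper along the lines described above, combining `accelerator.unwrap_model()` with handling for the wrapper added by `torch.compile()`; the exact helper in the script may differ.

```python
# Hedged sketch of an unwrap_model() helper: first strip Accelerate's wrappers,
# then the OptimizedModule wrapper that torch.compile() adds. Not copied from the commit.
import torch
from accelerate import Accelerator

accelerator = Accelerator()

def unwrap_model(model: torch.nn.Module) -> torch.nn.Module:
    model = accelerator.unwrap_model(model)      # undo DDP / mixed-precision wrapping
    if hasattr(model, "_orig_mod"):              # undo torch.compile() wrapping
        model = model._orig_mod
    return model
```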
-
- 12 Jul, 2024 1 commit
-
ustcuna authored
* add animatediff_ipex community pipeline
* address the 1st round review comments
-
- 08 Jul, 2024 1 commit
-
Tolga Cangöz authored
* Remove unused line
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 05 Jul, 2024 2 commits
-
apolinário authored
* Improve trainer model cards
* Update train_dreambooth_sd3.py
* Update train_dreambooth_lora_sd3.py
* add link to adapters loading doc
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
Dhruv Nair authored
* update * update * update
-
- 04 Jul, 2024 1 commit
-
Thomas Eding authored
* Add vae_roundtrip.py example
* Add CUDA support to vae_roundtrip
* Move vae_roundtrip.py into research_projects/vae
* Fix channel scaling in VAE roundtrip and also support TAESD
* Apply ruff --fix for CI gatekeep check
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
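A minimal sketch of what a VAE roundtrip looks like with the standard `AutoencoderKL` API, including the latent scaling factor that the channel-scaling fix concerns; the checkpoint id, image URL, device, and pre/post-processing are illustrative rather than taken from the research script.

```python
# Hedged sketch of a VAE roundtrip: encode an image to latents, scale, unscale,
# decode back to pixels. Checkpoint id, URL, and device are illustrative.
import torch
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor
from diffusers.utils import load_image

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to("cuda").eval()
processor = VaeImageProcessor(vae_scale_factor=8)

image = load_image("https://example.com/input.png")      # placeholder URL
pixels = processor.preprocess(image).to("cuda")

with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample()
    latents = latents * vae.config.scaling_factor         # scale into latent space
    decoded = vae.decode(latents / vae.config.scaling_factor).sample

roundtrip = processor.postprocess(decoded)[0]
roundtrip.save("roundtrip.png")
```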
-
- 03 Jul, 2024 5 commits
-
Linoy Tsaban authored
* add clip_skip
* style
* smol fix
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
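A generic, hedged sketch of the clip-skip idea: take an earlier hidden state from the CLIP text encoder and re-apply its final layer norm. The indexing convention and wiring in the actual script may differ; the repo id is a placeholder.

```python
# Hedged, generic sketch of clip_skip; not the exact indexing used in the script.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

repo = "some-org/some-sd-checkpoint"   # placeholder repo id
tokenizer = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")

def encode_prompt(prompt: str, clip_skip: int = 0) -> torch.Tensor:
    ids = tokenizer(
        prompt, padding="max_length", max_length=tokenizer.model_max_length,
        truncation=True, return_tensors="pt",
    ).input_ids
    out = text_encoder(ids, output_hidden_states=True)
    if clip_skip == 0:
        return out.last_hidden_state
    # Skip the last `clip_skip` layers, then re-apply the final layer norm.
    hidden = out.hidden_states[-(clip_skip + 1)]
    return text_encoder.text_model.final_layer_norm(hidden)
```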
-
Sayak Paul authored
-
Sayak Paul authored
* add experimental scripts to train SD3 transformer lora on colab
* add readme
* add colab
* Apply suggestions from code review
* fix link in the notebook
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
Sayak Paul authored
Revert "[LoRA] introduce `LoraBaseMixin` to promote reusability. (#8670)" This reverts commit a2071a18.
-
Sayak Paul authored
* introduce `LoraBaseMixin` to promote reusability
* add more tests
* remove comments
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
-
- 02 Jul, 2024 3 commits
-
Dhruv Nair authored
update
-
Sayak Paul authored
* add a test suite for SD3 DreamBooth
* lora suite
* style
* add checkpointing tests for LoRA
* add test to cover train_text_encoder
-
Álvaro Somoza authored
* fix
* fix things
* remove patch
* apply suggestions
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <linoy.tsaban@gmail.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
-
- 01 Jul, 2024 2 commits
-
WenheLI authored
* update training
* update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
Bhavay Malhotra authored
[train_controlnet_sdxl.py] Fix the LR schedulers when num_train_epochs is passed in a distributed training env (#8476)
* Create diffusers.yml
* num_train_epochs
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
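For reference, the usual shape of this fix: when `max_train_steps` is derived from `num_train_epochs`, the dataloader length is taken after sharding across processes and the scheduler's step counts are scaled by `accelerator.num_processes`, since Accelerate steps the scheduler once per process. A hedged sketch with placeholder values; the exact code in train_controlnet_sdxl.py may differ.

```python
# Hedged sketch of the num_train_epochs / LR-scheduler fix in distributed runs.
# Variable names mirror typical diffusers training scripts; values are placeholders.
import math
import torch
from accelerate import Accelerator
from diffusers.optimization import get_scheduler

accelerator = Accelerator()

num_train_epochs = 3
gradient_accumulation_steps = 1
lr_warmup_steps = 500
dataloader_len = 1000   # placeholder; normally len(train_dataloader)
optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-4)

# Account for dataloader sharding before deriving max_train_steps from num_train_epochs.
len_after_sharding = math.ceil(dataloader_len / accelerator.num_processes)
num_update_steps_per_epoch = math.ceil(len_after_sharding / gradient_accumulation_steps)
max_train_steps = num_train_epochs * num_update_steps_per_epoch

# The scheduler is stepped on every process, so scale its step counts accordingly.
lr_scheduler = get_scheduler(
    "cosine",
    optimizer=optimizer,
    num_warmup_steps=lr_warmup_steps * accelerator.num_processes,
    num_training_steps=max_train_steps * accelerator.num_processes,
)
```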
-
- 29 Jun, 2024 1 commit
-
Álvaro Somoza authored
* new pipeline
-
- 27 Jun, 2024 1 commit
-
Linoy Tsaban authored
* minor changes
* fix
* aligning with blora script
* remove prints
* style
* default val
* license
* move save_model_card to outside push_to_hub
* Update train_dreambooth_lora_sdxl_advanced.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 25 Jun, 2024 2 commits
-
Linoy Tsaban authored
* add clip text-encoder training
* no dora
* text encoder training fixes
* add text_encoder layers to save_lora
* style
* fix imports
* fix text encoder
* review changes
* minor change
* add lora tag
* add readme notes
* add tests for clip encoders
* typo
* fixes
* Update tests/lora/test_lora_layers_sd3.py
* Update examples/dreambooth/README_sd3.md
* minor readme change
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Hammond Liu authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 24 Jun, 2024 7 commits
-
Tolga Cangöz authored
* Class methods are supposed to use `cls` conventionally
* `make style && make quality`
* An empty commit
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Tolga Cangöz authored
* Fix typos
* Fix typos & up style
* chore: Update numbers
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Tolga Cangöz authored
* Discourage using `revision`
* `make style && make quality`
* Refactor code to use `variant` instead of `revision`
* `revision="bf16"` -> `variant="bf16"`
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
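A small sketch of the distinction being enforced: `revision` selects a git branch/tag/commit, while `variant` selects an alternative weight serialization (such as bf16/fp16) within the same revision. The repo id below is a placeholder, and not every repository ships a bf16 variant.

```python
# Hedged sketch: prefer `variant` (weight-file suffix like "bf16"/"fp16") over
# `revision` (a git ref) when selecting lower-precision checkpoints.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "some-org/some-model",      # placeholder repo id
    variant="bf16",
    torch_dtype=torch.bfloat16,
)
```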
-
Tolga Cangöz authored
* Trim all the trailing white space in the whole repo
* Remove unnecessary empty places
* make style && make quality
* Trim trailing white space
* trim
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Tolga Cangöz authored
* Fix typos & improve contributing page
* `make style && make quality`
* fix typos
* Fix typo
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Vinh H. Pham authored
[train_lcm_distill_lora_sdxl.py] Fix the LR schedulers when num_train_epochs is passed in a distributed training env (#8446)
* fix num_train_epochs
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
drhead authored
Add extra performance features for EMAModel, torch._foreach operations and better support for non-blocking CPU offloading (#7685)
* Add support for _foreach operations and non-blocking to EMAModel
* default foreach to false
* add non-blocking EMA offloading to SD1.5 T2I example script
* move foreach to cli argument
* Update README.md re: EMA weight training
* correct args.foreach_ema
* add tests for foreach ema
* add foreach to from_pretrained
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: drhead <a@a.a>
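A concept sketch of the `_foreach`-style EMA update and non-blocking offload described above: all shadow parameters are updated in fused calls instead of a Python loop. This illustrates the idea only and is not the `EMAModel` implementation.

```python
# Hedged concept sketch of a _foreach-based EMA update (the idea behind the
# EMAModel `foreach` option), not the diffusers implementation itself.
import torch

def ema_update_foreach(shadow_params, model_params, decay: float = 0.999):
    # shadow <- decay * shadow + (1 - decay) * param, applied across all tensors at once.
    with torch.no_grad():
        torch._foreach_mul_(shadow_params, decay)
        torch._foreach_add_(shadow_params, model_params, alpha=1.0 - decay)

model = torch.nn.Linear(8, 8)
shadow = [p.detach().clone() for p in model.parameters()]
ema_update_foreach(shadow, list(model.parameters()))

# Non-blocking CPU offload of the shadow copy (can overlap with GPU work).
shadow_cpu = [p.to("cpu", non_blocking=True) for p in shadow]
```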
-
- 21 Jun, 2024 1 commit
-
Sayak Paul authored
* get rid of the legacy lora remnants and make our codebase lighter
* fix deprecated lora argument
* fix
* empty commit to trigger ci
* remove print
-
- 20 Jun, 2024 1 commit
-
satani99 authored
* Update train_dreambooth_lora_sd3.py
* Update train_dreambooth_sd3.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 19 Jun, 2024 1 commit
-
Sayak Paul authored
* change to logit_normal as the weighting scheme
* sensible default note
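A hedged sketch of logit-normal timestep sampling for flow-matching training: draw from a normal distribution, squash through a sigmoid, and map to discrete timesteps. Parameter names and defaults are illustrative; the script's density/weighting utility may differ.

```python
# Hedged sketch of logit-normal timestep sampling:
# u ~ Normal(logit_mean, logit_std), t = sigmoid(u). Values are illustrative.
import torch

def sample_logit_normal_timesteps(batch_size: int, num_train_timesteps: int = 1000,
                                  logit_mean: float = 0.0, logit_std: float = 1.0):
    u = torch.normal(mean=logit_mean, std=logit_std, size=(batch_size,))
    u = torch.sigmoid(u)   # density concentrated around the middle of the schedule
    indices = (u * num_train_timesteps).long().clamp(0, num_train_timesteps - 1)
    return u, indices

weights, timestep_indices = sample_logit_normal_timesteps(batch_size=4)
```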
-
- 18 Jun, 2024 3 commits
-
Sayak Paul authored
fix the position of param casting when loading them
-
Sayak Paul authored
refactor the density and weighting utilities.
-
Bagheera authored
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
-