1. 07 Jan, 2025 1 commit
  2. 24 Dec, 2024 1 commit
  3. 23 Dec, 2024 1 commit
  4. 15 Dec, 2024 1 commit
  5. 12 Dec, 2024 1 commit
    • [WIP][Training] Flux Control LoRA training script (#10130) · 8170dc36
      Sayak Paul authored
      
      
      * update
      
      * add
      
      * update
      
      * add control-lora conversion script; make flux loader handle norms; fix rank calculation assumption (per-module rank inference is sketched after this message)
      
      * control lora updates
      
      * remove copied-from
      
      * create separate pipelines for flux control
      
      * make fix-copies
      
      * update docs
      
      * add tests
      
      * fix
      
      * Apply suggestions from code review
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * remove control lora changes
      
      * apply suggestions from review
      
      * Revert "remove control lora changes"
      
      This reverts commit 73cfc519c9b99b7dc3251cc6a90a5db3056c4819.
      
      * update
      
      * update
      
      * improve log messages
      
      * updates.
      
      * updates
      
      * support register_config.
      
      * fix
      
      * fix
      
      * fix
      
      * updates
      
      * updates
      
      * updates
      
      * fix-copies
      
      * fix
      
      * apply suggestions from review
      
      * add tests
      
      * remove conversion script; enable on-the-fly conversion
      
      * bias -> lora_bias.
      
      * fix-copies
      
      * peft.py
      
      * fix lora conversion
      
      * changes
      Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
      
      * fix-copies
      
      * updates for tests
      
      * fix
      
      * alpha_pattern.
      
      * add a test for varied lora ranks and alphas.
      
      * revert changes in num_channels_latents = self.transformer.config.in_channels // 8
      
      * revert moe
      
      * add a sanity check on unexpected keys when loading norm layers.
      
      * control lora.
      
      * fixes
      
      * fixes
      
      * fixes
      
      * tests
      
      * reviewer feedback
      
      * fix
      
      * proper peft version for lora_bias (see the version-gate sketch after this message)
      
      * fix-copies
      
      * updates
      
      * updates
      
      * updates
      
      * remove debug code
      
      * update docs
      
      * integration tests
      
      * nits
      
      * fuse and unload.
      
      * fix
      
      * add slices.
      
      * more updates.
      
      * button up readme
      
      * train()
      
      * add full fine-tuning version.
      
      * fixes
      
      * Apply suggestions from code review
      Co-authored-by: Aryan <aryan@huggingface.co>
      
      * remove set_grads_to_none.
      
      * readme
      
      ---------
      Co-authored-by: Aryan <aryan@huggingface.co>
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
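
      A minimal sketch of the rank inference behind "fix rank calculation assumption" above: read per-module ranks straight off the LoRA state dict instead of assuming one global rank (the same idea feeds the alpha_pattern test). Key names and shapes are illustrative, not the script's exact layout.

          import torch

          # Toy LoRA state dict; real keys come from the saved lora weights.
          lora_state_dict = {
              "single_blocks.0.attn.to_q.lora_A.weight": torch.zeros(16, 3072),
              "single_blocks.0.attn.to_q.lora_B.weight": torch.zeros(3072, 16),
              "single_blocks.1.attn.to_q.lora_A.weight": torch.zeros(8, 3072),
              "single_blocks.1.attn.to_q.lora_B.weight": torch.zeros(3072, 8),
          }

          rank_pattern = {}
          for key, weight in lora_state_dict.items():
              if key.endswith("lora_A.weight"):
                  # For lora_A the rank is the row count; lora_B would use columns.
                  rank_pattern[key.removesuffix(".lora_A.weight")] = weight.shape[0]

          print(rank_pattern)
          # {'single_blocks.0.attn.to_q': 16, 'single_blocks.1.attn.to_q': 8}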
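
      And a sketch of the "proper peft version for lora_bias" gate; the 0.14.0 floor is an assumption about when peft shipped lora_bias, not a quote from the script.

          from packaging import version

          import peft

          # Only pass lora_bias through when the installed peft supports it
          # (assumed to be >= 0.14.0 here).
          supports_lora_bias = version.parse(peft.__version__) >= version.parse("0.14.0")
          extra_lora_kwargs = {"lora_bias": True} if supports_lora_bias else {}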
  6. 01 Nov, 2024 1 commit
  7. 25 Oct, 2024 1 commit
  8. 22 Oct, 2024 1 commit
  9. 28 Sep, 2024 1 commit
  10. 27 Sep, 2024 1 commit
    • [examples] add train flux-controlnet scripts in example. (#9324) · 534848c3
      PromeAI authored
      
      
      * add train flux-controlnet scripts in example.
      
      * fix error
      
      * fix subfolder error
      
      * fix preprocess error
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix readme
      
      * fix note error
      
      * add a tutorial for DeepSpeed
      
      * fix some formatting errors
      
      * add dataset_path example
      
      * remove print statements, add a guidance_scale CLI flag, make apply more readable
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * update: push_to_hub, save_weight_dtype, static method, clear_objs_and_retain_memory, report_to=wandb
      
      * add push to hub in readme
      
      * apply weighting schemes (see the weighting sketch after this message)
      
      * add note
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * make code style and quality
      
      * fix some unnoticed errors
      
      * make code style and quality
      
      * add example controlnet in readme
      
      * add test controlnet
      
      * remove duplicate notes
      
      * Fix formatting errors
      
      * add new control image
      
      * add model cpu offload
      
      * update help for adafactor
      
      * make quality & style
      
      * make quality and style
      
      * rename flux_controlnet_model_name_or_path
      
      * fix back src/diffusers/pipelines/flux/pipeline_flux_controlnet.py
      
      * fix dtype error by precomputing the text embeddings (see the embedding sketch after this message)
      
      * rm image save
      
      * quality fix
      
      * fix test
      
      * fix tiny flux train error
      
      * change report_to to tensorboard
      
      * fix save name error when testing
      
      * Fix shrinking errors
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
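
      A sketch of "fix dtype error by precomputing text embeddings": encode prompts once outside the training loop and cast only the cached tensors, so the text encoders never run under the training weight dtype. The encoder below is a stand-in, not the script's CLIP/T5 stack.

          import torch

          weight_dtype = torch.bfloat16
          text_encoder = torch.nn.Embedding(1000, 64)  # stand-in for CLIP/T5

          # 1) Precompute once, in the encoder's own dtype, with no grads.
          token_ids = torch.randint(0, 1000, (4, 16))
          with torch.no_grad():
              cached_embeds = text_encoder(token_ids)

          # 2) Per batch, cast only the cached tensors to the training dtype.
          batch_embeds = cached_embeds.to(dtype=weight_dtype)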
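
      And a sketch of "apply weighting schemes", assuming the helpers diffusers exposes in training_utils; deriving sigmas from the noise scheduler is elided with a stand-in.

          from diffusers.training_utils import (
              compute_density_for_timestep_sampling,
              compute_loss_weighting_for_sd3,
          )

          u = compute_density_for_timestep_sampling(
              weighting_scheme="logit_normal",
              batch_size=4,
              logit_mean=0.0,
              logit_std=1.0,
              mode_scale=1.29,
          )
          sigmas = u  # stand-in; the script maps u onto the scheduler's sigmas
          loss_weighting = compute_loss_weighting_for_sd3(
              weighting_scheme="logit_normal", sigmas=sigmas
          )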
  11. 14 Aug, 2024 1 commit
  12. 01 Jul, 2024 1 commit
  13. 13 Jun, 2024 1 commit
  14. 29 May, 2024 1 commit
  15. 03 May, 2024 1 commit
  16. 02 Apr, 2024 1 commit
    • 7529 do not disable autocast for cuda devices (#7530) · 8e963d1c
      Bagheera authored
      
      
      * 7529 do not disable autocast for cuda devices
      
      * Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
      
      * add autocast fix to other training examples
      
      * disable native_amp for dreambooth (sdxl)
      
      * disable native_amp for pix2pix (sdxl)
      
      * remove tests from remaining files
      
      * disable native_amp on huggingface accelerator for every training example that uses it
      
      * convert more usages of autocast to nullcontext, make style fixes (see the sketch after this message)
      
      * make style fixes
      
      * style.
      
      * Empty-Commit
      
      ---------
      Co-authored-by: bghira <bghira@users.github.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
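
      A sketch of the pattern this PR rolls out: enter torch.autocast only where it is genuinely wanted (CUDA here) and use contextlib.nullcontext everywhere else, instead of disabling autocast outright. The helper name is illustrative.

          import contextlib

          import torch

          def maybe_autocast(device_type: str, enabled: bool):
              # Real autocast on CUDA when requested; a no-op context otherwise
              # (e.g. on MPS, or when accelerate already manages mixed precision).
              if enabled and device_type == "cuda":
                  return torch.autocast(device_type="cuda")
              return contextlib.nullcontext()

          device_type = "cuda" if torch.cuda.is_available() else "cpu"
          with maybe_autocast(device_type, enabled=True):
              image = torch.randn(1, 3, 64, 64)  # validation inference would go here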
  17. 28 Mar, 2024 1 commit
  18. 18 Mar, 2024 1 commit
  19. 13 Mar, 2024 1 commit
  20. 04 Mar, 2024 1 commit
    • [training scripts] add tags of diffusers-training (#7206) · 8da360aa
      Linoy Tsaban authored
      * add tags for diffusers training
      
      * add tags for diffusers training
      
      * add tags for diffusers training
      
      * add tags for diffusers training
      
      * add tags for diffusers training
      
      * add tags for diffusers training
      
      * add dora tags for dreambooth lora scripts (tagging is sketched after this message)
      
      * style
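
      A sketch of the tagging these commits add, using huggingface_hub's ModelCard API; the exact tag list and card body are illustrative.

          from huggingface_hub import ModelCard, ModelCardData

          card_data = ModelCardData(
              tags=["diffusers", "diffusers-training", "lora", "dora"],
          )
          content = f"---\n{card_data.to_yaml()}\n---\n# My DreamBooth LoRA\n"
          ModelCard(content).save("README.md")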
  21. 27 Feb, 2024 1 commit
  22. 09 Feb, 2024 3 commits
  23. 08 Feb, 2024 1 commit
  24. 12 Jan, 2024 1 commit
  25. 05 Jan, 2024 1 commit
  26. 01 Dec, 2023 1 commit
  27. 27 Nov, 2023 1 commit
  28. 10 Nov, 2023 1 commit
  29. 06 Nov, 2023 1 commit
  30. 14 Sep, 2023 1 commit
  31. 08 Sep, 2023 1 commit
  32. 17 Aug, 2023 1 commit
  33. 12 Aug, 2023 1 commit
  34. 04 Aug, 2023 1 commit
  35. 27 Jul, 2023 1 commit
  36. 26 Jul, 2023 2 commits
  37. 25 Jul, 2023 1 commit
    • [ControlNet SDXL training] fixes in the training script (#4223) · fed12376
      Sayak Paul authored
      * fix: #4206
      
      * add: sdxl controlnet training smoketest.
      
      * remove unnecessary token inits.
      
      * add: licensing to model card.
      
      * include SDXL licensing in the model card and make public visibility default
      
      * debugging
      
      * debugging
      
      * disable local file download.
      
      * fix: training test.
      
      * fix: ckpt prefix (see the sketch after this message).
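
      A sketch of the kind of "checkpoint-<step>" prefix handling the last fix touches: pick the latest checkpoint by numeric step, not by lexical order. Names are illustrative.

          names = ["checkpoint-500", "checkpoint-1000", "checkpoint-2000"]
          latest = max(names, key=lambda n: int(n.split("-")[1]))
          print(latest)  # checkpoint-2000; a lexical max would pick checkpoint-500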