1. 28 Sep, 2024 2 commits
  2. 27 Sep, 2024 1 commit
    • [examples] add train flux-controlnet scripts in example. (#9324) · 534848c3
      PromeAI authored
      
      
      * add train flux-controlnet scripts in example.
      
      * fix error
      
      * fix subfolder error
      
      * fix preprocess error
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix readme
      
      * fix note error
      
      * add a tutorial for DeepSpeed
      
      * fix some formatting errors
      
      * add dataset_path example
      
      * remove print, add guidance_scale CLI, readable apply
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * update: push_to_hub, save_weight_dtype, static method, clear_objs_and_retain_memory, report_to=wandb
      
      * add push to hub in readme
      
      * apply weighting schemes
      
      * add note
      
      * Update examples/controlnet/README_flux.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * make code style and quality
      
      * fix some unnoticed errors
      
      * make code style and quality
      
      * add controlnet example to the readme
      
      * add controlnet test
      
      * remove duplicate notes
      
      * Fix formatting errors
      
      * add new control image
      
      * add model cpu offload
      
      * update help for adafactor
      
      * make quality & style
      
      * make quality and style
      
      * rename flux_controlnet_model_name_or_path
      
      * restore src/diffusers/pipelines/flux/pipeline_flux_controlnet.py
      
      * fix dtype error by pre-calculating text embeddings
      
      * remove image saving
      
      * quality fix
      
      * fix test
      
      * fix a small flux training error
      
      * change report_to to tensorboard
      
      * fix save-name error in tests
      
      * Fix shrinking errors
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Your Name <you@example.com>
      534848c3
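      A minimal inference sketch for a controlnet trained with the script added in this commit; the base repo ID, checkpoint path, and conditioning image below are placeholders, not values taken from the commit:

        import torch
        from diffusers import FluxControlNetModel, FluxControlNetPipeline
        from diffusers.utils import load_image

        # Controlnet weights produced by the training script (placeholder path).
        controlnet = FluxControlNetModel.from_pretrained(
            "path/to/flux-controlnet", torch_dtype=torch.bfloat16
        )
        pipe = FluxControlNetPipeline.from_pretrained(
            "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
        )
        pipe.enable_model_cpu_offload()  # model cpu offload, as mentioned in the commit

        control_image = load_image("https://example.com/control.png")  # placeholder conditioning image
        image = pipe(
            "a futuristic cityscape",
            control_image=control_image,
            controlnet_conditioning_scale=0.7,
            guidance_scale=3.5,  # guidance_scale is exposed as a CLI option in the training script
            num_inference_steps=28,
        ).images[0]
        image.save("output.png")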
  3. 25 Sep, 2024 1 commit
  4. 23 Sep, 2024 1 commit
  5. 19 Sep, 2024 1 commit
    • [training] CogVideoX Lora (#9302) · 2b443a5d
      Aryan authored
      
      
      * cogvideox lora training draft
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * make fix-copies
      
      * update
      
      * update
      
      * apply suggestions from review
      
      * apply suggestions from review
      
      * fix typo
      
      * Update examples/cogvideo/train_cogvideox_lora.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * fix lora alpha
      
      * use correct lora scaling for final test pipeline
      
      * Update examples/cogvideo/train_cogvideox_lora.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * apply suggestions from review; prodigy optimizer
      
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * add tests
      
      * make style
      
      * add README
      
      * update
      
      * update
      
      * make style
      
      * fix
      
      * update
      
      * add test skeleton
      
      * revert lora utils changes
      
      * add cleaner modifications to lora testing utils
      
      * update lora tests
      
      * deepspeed stuff
      
      * add requirements.txt
      
      * deepspeed refactor
      
      * add lora stuff to img2vid pipeline to fix tests
      
      * fight tests
      
      * add co-authors
      Co-Authored-By: Fu-Yun Wang <1697256461@qq.com>
      Co-Authored-By: zR <2448370773@qq.com>
      
      * fight lora runner tests
      
      * import Dummy optim and scheduler only when required
      
      * update docs
      
      * add coauthors
      Co-Authored-By: Fu-Yun Wang <1697256461@qq.com>
      
      * remove option to train text encoder
      Co-Authored-By: bghira <bghira@users.github.com>
      
      * update tests
      
      * fight more tests
      
      * update
      
      * fix vid2vid
      
      * fix typo
      
      * remove lora tests; todo in follow-up PR
      
      * undo img2vid changes
      
      * remove text encoder related changes in lora loader mixin
      
      * Revert "remove text encoder related changes in lora loader mixin"
      
      This reverts commit f8a8444487db27859be812866db4e8cec7f25691.
      
      * update
      
      * round 1 of fighting tests
      
      * round 2 of fighting tests
      
      * fix copied from comment
      
      * fix typo in lora test
      
      * update styling
      Co-Authored-By: YiYi Xu <yixu310@gmail.com>
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: zR <2448370773@qq.com>
      Co-authored-by: Fu-Yun Wang <1697256461@qq.com>
      Co-authored-by: bghira <bghira@users.github.com>
      2b443a5d
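      A minimal sketch of loading a LoRA trained with the new CogVideoX script into the inference pipeline; the model ID, LoRA path, and prompt are placeholders:

        import torch
        from diffusers import CogVideoXPipeline
        from diffusers.utils import export_to_video

        pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
        pipe.enable_model_cpu_offload()

        # LoRA weights saved by the training script (placeholder path).
        pipe.load_lora_weights("path/to/cogvideox-lora", adapter_name="cogvideox-lora")
        pipe.set_adapters(["cogvideox-lora"], [1.0])  # adjust the scale as needed

        video = pipe(
            prompt="a panda playing a guitar in a bamboo forest",
            num_inference_steps=50,
            guidance_scale=6.0,
        ).frames[0]
        export_to_video(video, "output.mp4", fps=8)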
  6. 18 Sep, 2024 1 commit
  7. 16 Sep, 2024 2 commits
  8. 15 Sep, 2024 1 commit
  9. 14 Sep, 2024 1 commit
  10. 12 Sep, 2024 1 commit
  11. 11 Sep, 2024 2 commits
  12. 05 Sep, 2024 1 commit
  13. 03 Sep, 2024 1 commit
  14. 29 Aug, 2024 1 commit
  15. 26 Aug, 2024 1 commit
  16. 20 Aug, 2024 1 commit
  17. 19 Aug, 2024 2 commits
  18. 18 Aug, 2024 1 commit
  19. 14 Aug, 2024 3 commits
  20. 12 Aug, 2024 2 commits
    • [Flux Dreambooth LoRA] - te bug fixes & updates (#9139) · 413ca29b
      Linoy Tsaban authored
      * add requirements + fix link to bghira's guide
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * style
      
      * add tests
      
      * fix encode_prompt call
      
      * style
      
      * unpack_latents test
      
      * fix lora saving
      
      * remove default val for max_sequence_length in encode_prompt
      
      * remove default val for max_sequence_length in encode_prompt
      
      * style
      
      * testing
      
      * style
      
      * testing
      
      * testing
      
      * style
      
      * fix sizing issue
      
      * style
      
      * revert scaling
      
      * style
      
      * style
      
      * scaling test
      
      * style
      
      * scaling test
      
      * remove model pred operation left from pre-conditioning
      
      * remove model pred operation left from pre-conditioning
      
      * fix trainable params
      
      * remove te2 from casting
      
      * transformer to accelerator
      
      * remove prints
      
      * empty commit
      413ca29b
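      A sketch of the kind of encode_prompt usage these fixes touch, precomputing Flux text embeddings with an explicit max_sequence_length; the repo ID and prompt are placeholders and the call is illustrative rather than copied from the training script:

        import torch
        from diffusers import FluxPipeline

        pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
        pipe.enable_model_cpu_offload()

        # Pass max_sequence_length explicitly rather than relying on a default.
        prompt_embeds, pooled_prompt_embeds, text_ids = pipe.encode_prompt(
            prompt="a photo of sks dog",
            prompt_2=None,
            max_sequence_length=512,
        )
        image = pipe(
            prompt_embeds=prompt_embeds,
            pooled_prompt_embeds=pooled_prompt_embeds,
            guidance_scale=3.5,
            num_inference_steps=28,
        ).images[0]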
    • Errata - fix typo (#9100) · 3ece1433
      Dibbla! authored
      3ece1433
  21. 09 Aug, 2024 2 commits
    • Fix textual inversion SDXL and add support for 2nd text encoder (#9010) · c1079f08
      Daniel Socek authored
      
      
      * Fix textual inversion SDXL and add support for 2nd text encoder
      Signed-off-by: Daniel Socek <daniel.socek@intel.com>
      
      * Fix style/quality of text inv for sdxl
      Signed-off-by: Daniel Socek <daniel.socek@intel.com>
      
      ---------
      Signed-off-by: Daniel Socek <daniel.socek@intel.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      c1079f08
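      A sketch of loading embeddings produced by the fixed SDXL textual inversion example, one file per text encoder; the output paths and the placeholder token are assumptions:

        import torch
        from diffusers import StableDiffusionXLPipeline

        pipe = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")

        # One learned embedding per text encoder (placeholder paths).
        pipe.load_textual_inversion(
            "textual_inversion_sdxl/learned_embeds.safetensors",
            token="<my-concept>",
            text_encoder=pipe.text_encoder,
            tokenizer=pipe.tokenizer,
        )
        pipe.load_textual_inversion(
            "textual_inversion_sdxl/learned_embeds_2.safetensors",
            token="<my-concept>",
            text_encoder=pipe.text_encoder_2,
            tokenizer=pipe.tokenizer_2,
        )
        image = pipe("a photo of <my-concept> on a beach").images[0]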
    • [Flux] Dreambooth LoRA training scripts (#9086) · 65e30907
      Linoy Tsaban authored
      
      
      * initial commit - dreambooth for flux
      
      * update transformer to be FluxTransformer2DModel
      
      * update training loop and validation inference
      
      * fix sd3->flux docs
      
      * add guidance handling, not sure if it makes sense(?)
      
      * initial dreambooth lora commit
      
      * fix text_ids in compute_text_embeddings
      
      * fix imports of static methods
      
      * fix pipeline loading in readme, remove auto1111 docs for now
      
      * fix pipeline loading in readme, remove auto1111 docs for now, remove some irrelevant text_encoder_3 refs
      
      * Update examples/dreambooth/train_dreambooth_flux.py
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * fix te2 loading and remove te2 refs from text encoder training
      
      * fix tokenizer_2 initialization
      
      * remove text_encoder training refs from lora script (for now)
      
      * try with vae in bfloat16, fix model hook save
      
      * fix tokenization
      
      * fix static imports
      
      * fix CLIP import
      
      * remove text_encoder training refs (for now) from lora script
      
      * fix minor bug in encode_prompt, add guidance def in lora script, ...
      
      * fix unpack_latents args
      
      * fix license in readme
      
      * add "none" to weighting_scheme options for uniform sampling
      
      * style
      
      * adapt model saving - remove text encoder refs
      
      * adapt model loading - remove text encoder refs
      
      * initial commit for readme
      
      * Update examples/dreambooth/train_dreambooth_lora_flux.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_flux.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix vae casting
      
      * remove precondition_outputs
      
      * readme
      
      * readme
      
      * style
      
      * readme
      
      * readme
      
      * update weighting scheme default & docs
      
      * style
      
      * add text_encoder training to lora script, change vae_scale_factor value in both
      
      * style
      
      * text encoder training fixes
      
      * style
      
      * update readme
      
      * minor fixes
      
      * fix te params
      
      * fix te params
      
      ---------
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      65e30907
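      A minimal sketch of running inference with a LoRA produced by the new Flux Dreambooth LoRA script; the base repo ID, LoRA path, and instance prompt are placeholders:

        import torch
        from diffusers import FluxPipeline

        pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
        pipe.enable_model_cpu_offload()
        pipe.load_lora_weights("path/to/flux-dreambooth-lora")  # weights written by the training script

        image = pipe(
            "a photo of sks dog in a bucket",
            guidance_scale=3.5,
            num_inference_steps=28,
        ).images[0]
        image.save("dreambooth_lora.png")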
  22. 08 Aug, 2024 2 commits
  23. 07 Aug, 2024 1 commit
  24. 05 Aug, 2024 2 commits
  25. 04 Aug, 2024 1 commit
  26. 03 Aug, 2024 1 commit
  27. 30 Jul, 2024 1 commit
  28. 26 Jul, 2024 1 commit
    • [Chore] add `LoraLoaderMixin` to the inits (#8981) · d87fe95f
      Sayak Paul authored
      
      
      * introduce  to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      * loraloadermixin.
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      d87fe95f
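      The fuse_lora / unfuse_lora scope clarified in this PR is used roughly like this at the pipeline level; the base model ID and LoRA path are placeholders:

        import torch
        from diffusers import StableDiffusionXLPipeline

        pipe = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")
        pipe.load_lora_weights("path/to/some-lora")  # placeholder LoRA checkpoint

        # Merge the LoRA weights into the base modules so inference runs without adapter overhead.
        pipe.fuse_lora(lora_scale=0.8)
        image = pipe("an astronaut riding a horse on mars").images[0]

        # Restore the original, unfused weights.
        pipe.unfuse_lora()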
  29. 25 Jul, 2024 2 commits