  1. 25 Nov, 2024 1 commit
  2. 24 Nov, 2024 1 commit
  3. 19 Nov, 2024 1 commit
  4. 08 Nov, 2024 1 commit
  5. 06 Nov, 2024 1 commit
  6. 01 Nov, 2024 3 commits
  7. 31 Oct, 2024 1 commit
  8. 28 Oct, 2024 3 commits
  9. 25 Oct, 2024 1 commit
  10. 23 Oct, 2024 1 commit
  11. 22 Oct, 2024 1 commit
  12. 16 Oct, 2024 1 commit
  13. 15 Oct, 2024 1 commit
  14. 28 Sep, 2024 1 commit
  15. 16 Sep, 2024 1 commit
  16. 15 Sep, 2024 1 commit
  17. 14 Sep, 2024 1 commit
  18. 11 Sep, 2024 1 commit
  19. 05 Sep, 2024 1 commit
  20. 03 Sep, 2024 1 commit
  21. 26 Aug, 2024 1 commit
  22. 19 Aug, 2024 1 commit
  23. 18 Aug, 2024 1 commit
  24. 14 Aug, 2024 1 commit
  25. 12 Aug, 2024 1 commit
    • [Flux Dreambooth LoRA] - te bug fixes & updates (#9139) · 413ca29b
      Linoy Tsaban authored
      * add requirements + fix link to bghira's guide
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * style
      
      * add tests
      
      * fix encode_prompt call
      
      * style
      
      * unpack_latents test
      
      * fix lora saving
      
      * remove default val for max_sequence_length in encode_prompt
      
      * remove default val for max_sequence_length in encode_prompt
      
      * style
      
      * testing
      
      * style
      
      * testing
      
      * testing
      
      * style
      
      * fix sizing issue
      
      * style
      
      * revert scaling
      
      * style
      
      * style
      
      * scaling test
      
      * style
      
      * scaling test
      
      * remove model pred operation left from pre-conditioning
      
      * remove model pred operation left from pre-conditioning
      
      * fix trainable params
      
      * remove te2 from casting
      
      * transformer to accelerator
      
      * remove prints
      
      * empty commit
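      
      Several of these fixes center on encode_prompt, in particular removing the default value for max_sequence_length so callers must pass it explicitly. A minimal inference sketch for a LoRA produced by the fixed script, assuming the public FluxPipeline API; the model id and LoRA path are placeholders, not taken from the commits:
      
      ```python
      import torch
      from diffusers import FluxPipeline
      
      # Placeholder model id; the LoRA path is a hypothetical output directory.
      pipe = FluxPipeline.from_pretrained(
          "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
      ).to("cuda")
      pipe.load_lora_weights("path/to/trained-flux-lora")
      
      # max_sequence_length is passed explicitly, mirroring the commit that
      # removed its default value from the script's encode_prompt helper.
      image = pipe(
          "a photo of sks dog",
          max_sequence_length=512,
          guidance_scale=3.5,
      ).images[0]
      image.save("flux_lora_sample.png")
      ```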
  26. 09 Aug, 2024 1 commit
    • [Flux] Dreambooth LoRA training scripts (#9086) · 65e30907
      Linoy Tsaban authored
      * initial commit - dreambooth for flux
      
      * update transformer to be FluxTransformer2DModel
      
      * update training loop and validation inference
      
      * fix sd3->flux docs
      
      * add guidance handling, not sure if it makes sense(?)
      
      * initial dreambooth lora commit
      
      * fix text_ids in compute_text_embeddings
      
      * fix imports of static methods
      
      * fix pipeline loading in readme, remove auto1111 docs for now
      
      * fix pipeline loading in readme, remove auto1111 docs for now, remove some irrelevant text_encoder_3 refs
      
      * Update examples/dreambooth/train_dreambooth_flux.py
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * fix te2 loading and remove te2 refs from text encoder training
      
      * fix tokenizer_2 initialization
      
      * remove text_encoder training refs from lora script (for now)
      
      * try with vae in bfloat16, fix model hook save
      
      * fix tokenization
      
      * fix static imports
      
      * fix CLIP import
      
      * remove text_encoder training refs (for now) from lora script
      
      * fix minor bug in encode_prompt, add guidance def in lora script, ...
      
      * fix unpack_latents args
      
      * fix license in readme
      
      * add "none" to weighting_scheme options for uniform sampling
      
      * style
      
      * adapt model saving - remove text encoder refs
      
      * adapt model loading - remove text encoder refs
      
      * initial commit for readme
      
      * Update examples/dreambooth/train_dreambooth_lora_flux.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_flux.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix vae casting
      
      * remove precondition_outputs
      
      * readme
      
      * readme
      
      * style
      
      * readme
      
      * readme
      
      * update weighting scheme default & docs
      
      * style
      
      * add text_encoder training to lora script, change vae_scale_factor value in both
      
      * style
      
      * text encoder training fixes
      
      * style
      
      * update readme
      
      * minor fixes
      
      * fix te params
      
      * fix te params
      
      ---------
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
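      
      One commit above adds "none" to the weighting_scheme options for uniform timestep sampling. A hedged sketch of that idea in plain PyTorch, with illustrative names rather than the script's own:
      
      ```python
      import torch
      
      def sample_timesteps(batch_size: int, num_train_timesteps: int,
                           weighting_scheme: str = "none") -> torch.Tensor:
          if weighting_scheme == "logit_normal":
              # SD3-style logit-normal density: sigmoid of a standard normal draw
              u = torch.sigmoid(torch.randn(batch_size))
          else:
              # "none": u is drawn uniformly over [0, 1)
              u = torch.rand(batch_size)
          return (u * num_train_timesteps).long()
      
      timesteps = sample_timesteps(batch_size=4, num_train_timesteps=1000)
      ```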
  27. 07 Aug, 2024 1 commit
  28. 03 Aug, 2024 1 commit
  29. 26 Jul, 2024 1 commit
    • [Chore] add `LoraLoaderMixin` to the inits (#8981) · d87fe95f
      Sayak Paul authored
      * introduce LoraBaseMixin to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      * loraloadermixin.
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
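      
      The list above clarifies the scope of fuse_lora and unfuse_lora. A minimal sketch of that pipeline-level API, assuming an SDXL checkpoint; the LoRA repo id is a placeholder:
      
      ```python
      import torch
      from diffusers import DiffusionPipeline
      
      pipe = DiffusionPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
      ).to("cuda")
      pipe.load_lora_weights("some-user/some-lora")  # hypothetical LoRA repo
      
      pipe.fuse_lora()    # merge the LoRA weights into the base modules
      image = pipe("a photo of sks dog").images[0]
      pipe.unfuse_lora()  # restore the original, unfused weights
      ```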
  30. 25 Jul, 2024 2 commits
    • Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976) · 62863bb1
      YiYi Xu authored
      Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)"
      
      This reverts commit 527430d0.
    • [LoRA] introduce LoraBaseMixin to promote reusability. (#8774) · 527430d0
      Sayak Paul authored
      * introduce LoraBaseMixin to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
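      
      Among the fixes above is set_adapters() at the model level. A hedged sketch of the pipeline-level set_adapters() usage, with placeholder repo ids and adapter names:
      
      ```python
      import torch
      from diffusers import DiffusionPipeline
      
      pipe = DiffusionPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
      ).to("cuda")
      pipe.load_lora_weights("user/lora-style", adapter_name="style")      # hypothetical
      pipe.load_lora_weights("user/lora-subject", adapter_name="subject")  # hypothetical
      
      # Blend the two adapters with per-adapter scales.
      pipe.set_adapters(["style", "subject"], adapter_weights=[0.8, 0.5])
      image = pipe("a photo of sks dog, watercolor style").images[0]
      ```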
  31. 21 Jul, 2024 1 commit
  32. 05 Jul, 2024 1 commit
  33. 02 Jul, 2024 2 commits
  34. 25 Jun, 2024 1 commit