1. 21 Aug, 2024 1 commit
  2. 20 Aug, 2024 5 commits
  3. 19 Aug, 2024 9 commits
  4. 18 Aug, 2024 2 commits
  5. 17 Aug, 2024 3 commits
  6. 16 Aug, 2024 4 commits
  7. 15 Aug, 2024 1 commit
  8. 14 Aug, 2024 3 commits
  9. 13 Aug, 2024 3 commits
  10. 12 Aug, 2024 4 commits
  11. 10 Aug, 2024 1 commit
  12. 09 Aug, 2024 2 commits
    • Fix textual inversion SDXL and add support for 2nd text encoder (#9010) · c1079f08
      Daniel Socek authored
      
      
      * Fix textual inversion SDXL and add support for 2nd text encoder
      Signed-off-by: Daniel Socek <daniel.socek@intel.com>
      
      * Fix style/quality of text inv for sdxl
      Signed-off-by: Daniel Socek <daniel.socek@intel.com>
      
      ---------
      Signed-off-by: Daniel Socek <daniel.socek@intel.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
    • [Flux] Dreambooth LoRA training scripts (#9086) · 65e30907
      Linoy Tsaban authored
      
      
      * initial commit - dreambooth for flux
      
      * update transformer to be FluxTransformer2DModel
      
      * update training loop and validation inference
      
      * fix sd3->flux docs
      
      * add guidance handling, not sure if it makes sense(?)
      
      * initial dreambooth lora commit
      
      * fix text_ids in compute_text_embeddings
      
      * fix imports of static methods
      
      * fix pipeline loading in readme, remove auto1111 docs for now
      
      * fix pipeline loading in readme, remove auto1111 docs for now, remove some irrelevant text_encoder_3 refs
      
      * Update examples/dreambooth/train_dreambooth_flux.py
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * fix te2 loading and remove te2 refs from text encoder training
      
      * fix tokenizer_2 initialization
      
      * remove text_encoder training refs from lora script (for now)
      
      * try with vae in bfloat16, fix model hook save
      
      * fix tokenization
      
      * fix static imports
      
      * fix CLIP import
      
      * remove text_encoder training refs (for now) from lora script
      
      * fix minor bug in encode_prompt, add guidance def in lora script, ...
      
      * fix unpack_latents args
      
      * fix license in readme
      
      * add "none" to weighting_scheme options for uniform sampling
      
      * style
      
      * adapt model saving - remove text encoder refs
      
      * adapt model loading - remove text encoder refs
      
      * initial commit for readme
      
      * Update examples/dreambooth/train_dreambooth_lora_flux.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_flux.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix vae casting
      
      * remove precondition_outputs
      
      * readme
      
      * readme
      
      * style
      
      * readme
      
      * readme
      
      * update weighting scheme default & docs
      
      * style
      
      * add text_encoder training to lora script, change vae_scale_factor value in both
      
      * style
      
      * text encoder training fixes
      
      * style
      
      * update readme
      
      * minor fixes
      
      * fix te params
      
      * fix te params
      
      ---------
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
  13. 08 Aug, 2024 2 commits