1. 18 Oct, 2024 1 commit
  2. 17 Oct, 2024 1 commit
• [Flux] Add advanced training script + support textual inversion inference (#9434) · 9a7f8246
      Linoy Tsaban authored
      * add ostris trainer to README & add cache latents of vae
      
      * add ostris trainer to README & add cache latents of vae
      
      * style
      
      * readme
      
      * add test for latent caching
      
      * add ostris noise scheduler
      https://github.com/ostris/ai-toolkit/blob/9ee1ef2a0a2a9a02b92d114a95f21312e5906e54/toolkit/samplers/custom_flowmatch_sampler.py#L95
      
      * style
      
      * fix import
      
      * style
      
      * fix tests
      
      * style
      
      * --change upcasting of transformer?
      
      * update readme according to main
      
      * add pivotal tuning for CLIP
      
* fix imports, encode_prompt call, add TextualInversionLoaderMixin to FluxPipeline for inference
      
      * TextualInversionLoaderMixin support for FluxPipeline for inference
      
      * move changes to advanced flux script, revert canonical
      
      * add latent caching to canonical script
      
      * revert changes to canonical script to keep it separate from https://github.com/huggingface/diffusers/pull/9160
      
      * revert changes to canonical script to keep it separate from https://github.com/huggingface/diffusers/pull/9160
      
      * style
      
      * remove redundant line and change code block placement to align with logic
      
      * add initializer_token arg
      
      * add transformer frac for range support from pure textual inversion to the orig pivotal tuning
      
      * support pure textual inversion - wip
      
      * adjustments to support pure textual inversion and transformer optimization in only part of the epochs
      
      * fix logic when using initializer token
      
      * fix pure_textual_inversion_condition
      
      * fix ti/pivotal loading of last validation run
      
      * remove embeddings loading for ti in final training run (to avoid adding huggingface hub dependency)
      
      * support pivotal for t5
      
      * adapt pivotal for T5 encoder
      
      * adapt pivotal for T5 encoder and support in flux pipeline
      
* t5 pivotal support + support for pivotal for clip only or both
      
      * fix param chaining
      
      * fix param chaining
      
      * README first draft
      
      * readme
      
      * readme
      
      * readme
      
      * style
      
      * fix import
      
      * style
      
* add fix from https://github.com/huggingface/diffusers/pull/9419

      * add to readme, change function names
      
      * te lr changes
      
      * readme
      
      * change concept tokens logic
      
      * fix indices
      
      * change arg name
      
      * style
      
      * dummy test
      
      * revert dummy test
      
      * reorder pivoting
      
      * add warning in case the token abstraction is not the instance prompt
      
      * experimental - wip - specific block training
      
      * fix documentation and token abstraction processing
      
      * remove transformer block specification feature (for now)
      
      * style
      
      * fix copies
      
      * fix indexing issue when --initializer_concept has different amounts
      
      * add if TextualInversionLoaderMixin to all flux pipelines
      
      * style
      
      * fix import
      
      * fix imports
      
* address review comments - remove unnecessary prints & comments, use pin_memory=True, use free_memory utils, unify warning and prints
      
      * style
      
      * logger info fix
      
      * make lora target modules configurable and change the default
      
      * make lora target modules configurable and change the default
      
      * style
      
      * make lora target modules configurable and change the default, add notes to readme
      
      * style
      
      * add tests
      
      * style
      
      * fix repo id
      
      * add updated requirements for advanced flux
      
      * fix indices of t5 pivotal tuning embeddings
      
      * fix path in test
      
      * remove `pin_memory`
      
      * fix filename of embedding
      
      * fix filename of embedding
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      9a7f8246
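Several bullets above mention caching the VAE latents. As a minimal sketch of that idea (the names `DummyVAE` and `cache_latents` are illustrative, not the script's actual API): since the VAE is frozen during LoRA training, each image can be encoded once up front and the latents reused every epoch instead of re-running the encoder per step.

```python
import torch

class DummyVAE(torch.nn.Module):
    """Stand-in for a frozen VAE encoder: images -> latents."""
    def __init__(self, scaling_factor=0.3611):
        super().__init__()
        self.scaling_factor = scaling_factor

    @torch.no_grad()
    def encode(self, pixels):
        # Real VAEs downsample spatially; 8x average pooling mimics that here.
        return torch.nn.functional.avg_pool2d(pixels, 8)

def cache_latents(vae, dataloader):
    """Encode every batch once and keep the scaled latents on CPU."""
    cached = []
    for batch in dataloader:
        latents = vae.encode(batch) * vae.scaling_factor
        cached.append(latents.cpu())
    return cached

vae = DummyVAE()
images = [torch.randn(2, 3, 64, 64) for _ in range(3)]  # fake dataloader
latent_cache = cache_latents(vae, images)
# The train loop now reads latent_cache[i] instead of calling vae.encode again.
print(len(latent_cache), tuple(latent_cache[0].shape))
```

The trade-off is the usual one: more host memory in exchange for skipping a forward pass through the encoder on every step.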
  3. 14 Aug, 2024 1 commit
  4. 26 Jul, 2024 1 commit
• [Chore] add `LoraLoaderMixin` to the inits (#8981) · d87fe95f
      Sayak Paul authored
      
      
* introduce `LoraBaseMixin` to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      * loraloadermixin.
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      d87fe95f
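Two of the bullets above deal with clarifying the scope of `fuse_lora` and `unfuse_lora`. As a hedged sketch of the underlying math (illustrative only, not diffusers' actual implementation): fusing folds the low-rank update `W' = W + scale * (B @ A)` into the base weight so inference pays no adapter overhead, and keeping the delta around lets unfusing restore the original weight.

```python
import torch

torch.manual_seed(0)

class LoraLinear:
    """Toy LoRA wrapper: fuse/unfuse a low-rank delta into a Linear layer."""
    def __init__(self, base: torch.nn.Linear, rank=4, scale=1.0):
        self.base = base
        out_f, in_f = base.weight.shape
        self.A = torch.randn(rank, in_f) * 0.01
        self.B = torch.randn(out_f, rank) * 0.01
        self.scale = scale
        self._fused_delta = None

    def fuse(self):
        # Fold scale * (B @ A) into the base weight in place.
        self._fused_delta = self.scale * (self.B @ self.A)
        with torch.no_grad():
            self.base.weight += self._fused_delta

    def unfuse(self):
        # Subtract the stored delta to recover the original weight.
        with torch.no_grad():
            self.base.weight -= self._fused_delta
        self._fused_delta = None

layer = LoraLinear(torch.nn.Linear(8, 8))
w0 = layer.base.weight.detach().clone()
layer.fuse()
fused_changed = not torch.allclose(layer.base.weight, w0)
layer.unfuse()
restored = torch.allclose(layer.base.weight, w0)
print(fused_changed, restored)
```

Tracking `num_fused_loras` (another bullet above) then amounts to counting how many such deltas are currently folded in.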
  5. 25 Jul, 2024 2 commits
• Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976) · 62863bb1
      YiYi Xu authored
      Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)"
      
      This reverts commit 527430d0.
      62863bb1
• [LoRA] introduce LoraBaseMixin to promote reusability. (#8774) · 527430d0
      Sayak Paul authored
      
      
* introduce `LoraBaseMixin` to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      527430d0
  6. 23 Jul, 2024 2 commits
  7. 05 Jul, 2024 1 commit
  8. 03 Jul, 2024 1 commit
  9. 02 Jul, 2024 1 commit
  10. 01 Jul, 2024 1 commit
  11. 27 Jun, 2024 1 commit
• [Advanced dreambooth lora] adjustments to align with canonical script (#8406) · 35f45ecd
      Linoy Tsaban authored
      
      
      * minor changes
      
      * minor changes
      
      * minor changes
      
      * minor changes
      
      * minor changes
      
      * minor changes
      
      * minor changes
      
      * fix
      
      * fix
      
      * aligning with blora script
      
      * aligning with blora script
      
      * aligning with blora script
      
      * aligning with blora script
      
      * aligning with blora script
      
      * remove prints
      
      * style
      
      * default val
      
      * license
      
      * move save_model_card to outside push_to_hub
      
      * Update train_dreambooth_lora_sdxl_advanced.py
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      35f45ecd
  12. 24 Jun, 2024 2 commits
  13. 13 Jun, 2024 1 commit
  14. 29 May, 2024 2 commits
  15. 20 May, 2024 1 commit
  16. 30 Apr, 2024 1 commit
• Add B-Lora training option to the advanced dreambooth lora script (#7741) · 26a7851e
      Linoy Tsaban authored
      
      
      * add blora
      
      * add blora
      
      * add blora
      
      * add blora
      
      * little changes
      
      * little changes
      
      * remove redundancies
      
      * fixes
      
      * add B LoRA to readme
      
      * style
      
      * inference
      
      * defaults + path to loras+ generation
      
      * minor changes
      
      * style
      
      * minor changes
      
      * minor changes
      
      * blora arg
      
      * added --lora_unet_blocks
      
      * style
      
      * Update examples/advanced_diffusion_training/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
* add commit hash to B-LoRA repo cloning
      
      * change inference, remove cloning
      
* change inference, remove cloning
add section about configurable unet blocks
      
* change inference, remove cloning
add section about configurable unet blocks
      
      * Apply suggestions from code review
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      26a7851e
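The `--lora_unet_blocks` bullet above implies restricting LoRA to user-chosen UNet blocks, which is the core of the B-LoRA method. A minimal sketch of how such filtering could work (the helper `select_target_modules` and the toy module tree are hypothetical, not the script's actual code): walk the model's named modules and keep only the Linear layers living under the requested block prefixes.

```python
import torch

def select_target_modules(model, block_prefixes):
    """Return names of Linear layers living under any of the given blocks."""
    return [
        name
        for name, module in model.named_modules()
        if isinstance(module, torch.nn.Linear)
        and any(name.startswith(p) for p in block_prefixes)
    ]

# Toy stand-in for a UNet's module tree.
unet = torch.nn.ModuleDict({
    "down_blocks": torch.nn.ModuleList([torch.nn.Linear(4, 4)]),
    "up_blocks": torch.nn.ModuleList([torch.nn.Linear(4, 4),
                                      torch.nn.Linear(4, 4)]),
})
targets = select_target_modules(unet, ["up_blocks"])
print(targets)
```

The resulting name list is the kind of value one would hand to a LoRA config's target-modules field, so adapters are attached only where the chosen blocks live.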
  17. 02 Apr, 2024 1 commit
• 7529 do not disable autocast for cuda devices (#7530) · 8e963d1c
      Bagheera authored
      
      
      * 7529 do not disable autocast for cuda devices
      
      * Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
      
      * add autocast fix to other training examples
      
      * disable native_amp for dreambooth (sdxl)
      
      * disable native_amp for pix2pix (sdxl)
      
      * remove tests from remaining files
      
      * disable native_amp on huggingface accelerator for every training example that uses it
      
      * convert more usages of autocast to nullcontext, make style fixes
      
      * make style fixes
      
      * style.
      
      * Empty-Commit
      
      ---------
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      8e963d1c
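The pattern this PR converges on, as far as the commit messages suggest, is to stop special-casing devices by disabling autocast and instead pick the context manager per device: `torch.autocast` where it is supported, `contextlib.nullcontext()` elsewhere (e.g. MPS). A hedged sketch, with `inference_ctx` as an illustrative helper name:

```python
import contextlib
import torch

def inference_ctx(device_type: str):
    """Autocast on devices that support it, otherwise a no-op context."""
    if device_type in ("cuda", "cpu"):
        return torch.autocast(device_type)
    return contextlib.nullcontext()

# Both branches are used identically at the call site:
with inference_ctx("cpu"):
    y = torch.nn.Linear(4, 4)(torch.randn(2, 4))
print(type(inference_ctx("mps")).__name__)
```

Because both branches return a context manager, the training and validation code needs no device-specific `if` statements at the point of use.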
  18. 27 Mar, 2024 1 commit
  19. 26 Mar, 2024 1 commit
  20. 18 Mar, 2024 1 commit
  21. 14 Mar, 2024 1 commit
  22. 13 Mar, 2024 1 commit
  23. 06 Mar, 2024 1 commit
  24. 04 Mar, 2024 2 commits
  25. 15 Feb, 2024 1 commit
  26. 09 Feb, 2024 2 commits
  27. 08 Feb, 2024 1 commit
  28. 03 Feb, 2024 1 commit
• [advanced dreambooth lora sdxl script] new features + bug fixes (#6691) · 65329aed
      Linoy Tsaban authored
      
      
      * add noise_offset param
      
      * micro conditioning - wip
      
      * image processing adjusted and moved to support micro conditioning
      
      * change time ids to be computed inside train loop
      
      * change time ids to be computed inside train loop
      
      * change time ids to be computed inside train loop
      
      * time ids shape fix
      
      * move token replacement of validation prompt to the same section of instance prompt and class prompt
      
      * add offset noise to sd15 advanced script
      
      * fix token loading during validation
      
      * fix token loading during validation in sdxl script
      
      * a little clean
      
      * style
      
      * a little clean
      
      * style
      
      * sdxl script - a little clean + minor path fix
      
      sd 1.5 script - change default resolution value
      
* sd 1.5 script - minor path fix
      
      * fix missing comma in code example in model card
      
      * clean up commented lines
      
      * style
      
      * remove time ids computed outside training loop - no longer used now that we utilize micro-conditioning, as all time ids are now computed inside the training loop
      
      * style
      
      * [WIP] - added draft readme, building off of examples/dreambooth/README.md
      
      * readme
      
      * readme
      
      * readme
      
      * readme
      
      * readme
      
      * readme
      
      * readme
      
      * readme
      
      * removed --crops_coords_top_left from CLI args
      
      * style
      
      * fix missing shape bug due to missing RGB if statement
      
* add blog mention at the start of the readme as well
      
      * Update examples/advanced_diffusion_training/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * change note to render nicely as well
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      65329aed
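Several bullets above move the micro-conditioning time ids inside the train loop. As a rough sketch of SDXL-style micro-conditioning (illustrative, assuming the standard SDXL layout rather than quoting this script): each image's time-id vector concatenates its original size, the crop's top-left corner, and the target size, so it must be computed per sample from that image's own metadata.

```python
import torch

def compute_time_ids(original_size, crop_top_left, target_size):
    """Concatenate (h, w) original size, (top, left) crop, (h, w) target."""
    return torch.tensor(
        list(original_size) + list(crop_top_left) + list(target_size),
        dtype=torch.float32,
    )

# Inside the train loop: one 6-element id vector per image in the batch.
batch_time_ids = torch.stack([
    compute_time_ids((1024, 768), (0, 128), (1024, 1024)),
    compute_time_ids((512, 512), (0, 0), (1024, 1024)),
])
print(batch_time_ids.shape)
```

Computing these inside the loop is what makes the earlier outside-the-loop computation redundant, as the last cleanup bullet notes.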
  29. 24 Jan, 2024 1 commit
• SD 1.5 Support For Advanced Lora Training (train_dreambooth_lora_sdxl_advanced.py) (#6449) · 16748d1e
      Brandon Strong authored
      
      
      * sd1.5 support in separate script
      
      A quick adaptation to support people interested in using this method on 1.5 models.
      
      * sd15 prompt text encoding and unet conversions
      
as per @linoytsaban's recommendations. Testing would be appreciated.
      
      * Readability and quality improvements
      
      Removed some mentions of SDXL, and some arguments that don't apply to sd 1.5, and cleaned up some comments.
      
      * make style/quality commands
      
      * tracker rename and run-it doc
      
      * Update examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py
      
      * Update examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py
      
      ---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
      16748d1e
  30. 17 Jan, 2024 2 commits
  31. 16 Jan, 2024 1 commit
  32. 05 Jan, 2024 2 commits