1. 25 Oct, 2024 1 commit
  2. 21 Oct, 2024 1 commit
  3. 17 Oct, 2024 1 commit
    • [Flux] Add advanced training script + support textual inversion inference (#9434) · 9a7f8246
      Linoy Tsaban authored
      * add ostris trainer to README & add cache latents of vae
      
      * add ostris trainer to README & add cache latents of vae
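
      A minimal sketch of the latent-caching idea, assuming illustrative names
      (vae, train_dataloader) rather than the script's exact variables:

          import torch

          # Pre-encode all training images once so the VAE can be moved off
          # the GPU for the rest of training.
          latents_cache = []
          with torch.no_grad():
              for batch in train_dataloader:
                  pixel_values = batch["pixel_values"].to(vae.device, dtype=vae.dtype)
                  latents = vae.encode(pixel_values).latent_dist.sample()
                  latents_cache.append(latents.cpu())

          # The VAE is no longer needed in the training loop.
          vae.to("cpu")
          torch.cuda.empty_cache()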
      
      * style
      
      * readme
      
      * add test for latent caching
      
      * add ostris noise scheduler
      https://github.com/ostris/ai-toolkit/blob/9ee1ef2a0a2a9a02b92d114a95f21312e5906e54/toolkit/samplers/custom_flowmatch_sampler.py#L95
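
      The linked sampler draws training timesteps non-uniformly; a rough,
      illustrative sketch of that general idea (the exact weighting in
      ai-toolkit may differ):

          import torch

          def sample_timesteps(num_train_timesteps: int, batch_size: int) -> torch.Tensor:
              # Bell-shaped density that favors the middle of the schedule;
              # illustrative only, not ostris's exact formula.
              t = torch.arange(num_train_timesteps, dtype=torch.float32)
              weights = 1.0 - (2.0 * t / (num_train_timesteps - 1) - 1.0) ** 2
              probs = weights / weights.sum()
              return torch.multinomial(probs, batch_size, replacement=True)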
      
      * style
      
      * fix import
      
      * style
      
      * fix tests
      
      * style
      
      * change upcasting of transformer?
      
      * update readme according to main
      
      * add pivotal tuning for CLIP
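
      A hypothetical sketch of the pivotal-tuning setup on the CLIP encoder:
      register new placeholder tokens, grow the embedding matrix, and update
      only the new rows (the token strings are placeholders):

          import torch
          from transformers import CLIPTextModel, CLIPTokenizer

          tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
          text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

          placeholder_tokens = ["<s0>", "<s1>"]
          tokenizer.add_tokens(placeholder_tokens)
          text_encoder.resize_token_embeddings(len(tokenizer))
          token_ids = tokenizer.convert_tokens_to_ids(placeholder_tokens)

          # In the training loop, after backward(), the gradient rows of every
          # other token would be zeroed so only the new embeddings move:
          #   grad = text_encoder.get_input_embeddings().weight.grad
          #   mask = torch.ones(grad.shape[0], dtype=torch.bool)
          #   mask[token_ids] = False
          #   grad[mask] = 0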
      
      * fix imports, encode_prompt call, add TextualInversionLoaderMixin to FluxPipeline for inference
      
      * TextualInversionLoaderMixin support for FluxPipeline for inference
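
      A hedged usage sketch of what the mixin enables at inference time; the
      embedding file and token below are placeholders:

          import torch
          from diffusers import FluxPipeline

          pipe = FluxPipeline.from_pretrained(
              "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
          ).to("cuda")
          # Load learned embeddings into the CLIP tokenizer/text encoder.
          pipe.load_textual_inversion("trained_embeddings.safetensors", token="<s0>")
          image = pipe("a photo of <s0> on a beach", num_inference_steps=28).images[0]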
      
      * move changes to advanced flux script, revert canonical
      
      * add latent caching to canonical script
      
      * revert changes to canonical script to keep it separate from https://github.com/huggingface/diffusers/pull/9160
      
      * revert changes to canonical script to keep it separate from https://github.com/huggingface/diffusers/pull/9160
      
      * style
      
      * remove redundant line and change code block placement to align with logic
      
      * add initializer_token arg
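
      Continuing the hypothetical CLIP sketch above, an initializer token
      typically seeds the new embedding rows from an existing token so
      training starts from a meaningful point:

          init_id = tokenizer.convert_tokens_to_ids("dog")  # example initializer
          with torch.no_grad():
              embeds = text_encoder.get_input_embeddings().weight
              for tid in token_ids:
                  embeds[tid] = embeds[init_id].clone()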
      
      * add transformer frac arg to support the range from pure textual inversion to the original pivotal tuning
      
      * support pure textual inversion - wip
      
      * adjustments to support pure textual inversion and transformer optimization in only part of the epochs
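
      A runnable toy sketch of that range, with a stand-in module and a
      hypothetical fraction argument (0 gives pure textual inversion, 1 the
      original pivotal tuning):

          import torch.nn as nn

          transformer = nn.Linear(8, 8)  # stand-in for the Flux transformer
          num_train_epochs, train_transformer_frac = 10, 0.5  # hypothetical values

          for epoch in range(num_train_epochs):
              # Transformer params learn only during the first fraction of
              # epochs; afterwards only the new token embeddings keep training.
              transformer.requires_grad_(epoch < int(train_transformer_frac * num_train_epochs))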
      
      * fix logic when using initializer token
      
      * fix pure_textual_inversion_condition
      
      * fix ti/pivotal loading of last validation run
      
      * remove embeddings loading for ti in final training run (to avoid adding a huggingface_hub dependency)
      
      * support pivotal for t5
      
      * adapt pivotal for T5 encoder
      
      * adapt pivotal for T5 encoder and support in flux pipeline
      
      * t5 pivotal support + support for pivotal for clip only or both
      
      * fix param chaining
      
      * fix param chaining
      
      * README first draft
      
      * readme
      
      * readme
      
      * readme
      
      * style
      
      * fix import
      
      * style
      
      * add fix from https://github.com/huggingface/diffusers/pull/9419
      
      * add to readme, change function names
      
      * text encoder lr changes
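
      Separate text-encoder learning rates are usually done with optimizer
      parameter groups; a sketch reusing the stand-ins from the earlier
      blocks, with illustrative values:

          import torch

          optimizer = torch.optim.AdamW(
              [
                  {"params": transformer.parameters(), "lr": 1e-4},
                  {"params": text_encoder.parameters(), "lr": 5e-5},  # e.g. --text_encoder_lr
              ]
          )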
      
      * readme
      
      * change concept tokens logic
      
      * fix indices
      
      * change arg name
      
      * style
      
      * dummy test
      
      * revert dummy test
      
      * reorder pivoting
      
      * add warning in case the token abstraction is not the instance prompt
      
      * experimental - wip - specific block training
      
      * fix documentation and token abstraction processing
      
      * remove transformer block specification feature (for now)
      
      * style
      
      * fix copies
      
      * fix indexing issue when --initializer_concept has a different number of entries
      
      * add TextualInversionLoaderMixin check to all flux pipelines
      
      * style
      
      * fix import
      
      * fix imports
      
      * address review comments - remove unnecessary prints & comments, use pin_memory=True, use free_memory utils, unify warnings and prints
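
      free_memory comes from diffusers.training_utils; roughly, it runs
      garbage collection and empties the accelerator cache. A sketch,
      assuming the cached-latents setup above:

          from diffusers.training_utils import free_memory

          # Once latents are cached, drop the VAE and reclaim its memory.
          del vae
          free_memory()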
      
      * style
      
      * logger info fix
      
      * make lora target modules configurable and change the default
      
      * make lora target modules configurable and change the default
      
      * style
      
      * make lora target modules configurable and change the default, add notes to readme
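
      A sketch of configurable target modules with peft; the module list
      shown is illustrative, standing in for a hypothetical --lora_layers
      argument:

          from peft import LoraConfig

          lora_config = LoraConfig(
              r=16,
              lora_alpha=16,
              init_lora_weights="gaussian",
              target_modules=["to_k", "to_q", "to_v", "to_out.0"],
          )
          # On a diffusers model this would attach the adapter, e.g.:
          #   transformer.add_adapter(lora_config)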
      
      * style
      
      * add tests
      
      * style
      
      * fix repo id
      
      * add updated requirements for advanced flux
      
      * fix indices of t5 pivotal tuning embeddings
      
      * fix path in test
      
      * remove `pin_memory`
      
      * fix filename of embedding
      
      * fix filename of embedding
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
  4. 28 Aug, 2024 1 commit
  5. 27 Aug, 2024 1 commit
  6. 23 Aug, 2024 1 commit
  7. 21 Aug, 2024 1 commit
  8. 19 Aug, 2024 1 commit
  9. 16 Aug, 2024 1 commit
  10. 05 Aug, 2024 1 commit
    • [FLUX] support LoRA (#9057) · fc6a91e3
      Sayak Paul authored
      * feat: lora support for Flux.
      
      add tests
      
      fix imports
      
      major fixes.
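
      A hedged usage sketch of what this support enables; the LoRA repo id
      below is a placeholder:

          import torch
          from diffusers import FluxPipeline

          pipe = FluxPipeline.from_pretrained(
              "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
          ).to("cuda")
          pipe.load_lora_weights("your-username/your-flux-lora")  # placeholder id
          image = pipe("a photo in your trained style", num_inference_steps=28).images[0]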
      
      * fix
      
      fixes
      
      final fixes?
      
      * fix
      
      * remove is_peft_available.
  11. 04 Aug, 2024 1 commit
  12. 02 Aug, 2024 1 commit
    • [Flux] allow tests to run (#9050) · 0e460675
      Sayak Paul authored
      * fix tests
      
      * fix
      
      * float64 skip
      
      * remove sample_size.
      
      * remove
      
      * remove more
      
      * default_sample_size.
      
      * credit Black Forest Labs for the Flux model.
      
      * skip
      
      * fix: tests
      
      * remove OriginalModelMixin
      
      * add transformer model test
      
      * add: transformer model tests
  13. 01 Aug, 2024 1 commit
  14. 26 Jul, 2024 1 commit
    • [Chore] add `LoraLoaderMixin` to the inits (#8981) · d87fe95f
      Sayak Paul authored
      * introduce `LoraBaseMixin` to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
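
      Usage sketch of that scope, assuming a pipeline with a LoRA already
      loaded (placeholder id): fusing merges the adapter into the base
      weights for overhead-free inference, unfusing restores them:

          pipe.load_lora_weights("your-username/your-lora")  # placeholder id
          pipe.fuse_lora(lora_scale=1.0)
          # ... run inference without adapter overhead ...
          pipe.unfuse_lora()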
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
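
      set_adapters() selects and weights loaded adapters; a sketch with
      placeholder repo ids and adapter names:

          pipe.load_lora_weights("user/lora-a", adapter_name="a")  # placeholder ids
          pipe.load_lora_weights("user/lora-b", adapter_name="b")
          pipe.set_adapters(["a", "b"], adapter_weights=[0.8, 0.5])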
      
      * fix?
      
      * fix
      
      * loraloadermixin.
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
  15. 25 Jul, 2024 3 commits
  16. 03 Jul, 2024 2 commits
  17. 21 Jun, 2024 1 commit
  18. 18 Jun, 2024 2 commits
  19. 12 Jun, 2024 1 commit