1. 08 May, 2025 1 commit
  2. 05 May, 2025 1 commit
  3. 01 May, 2025 1 commit
  4. 28 Apr, 2025 1 commit
  5. 24 Apr, 2025 1 commit
    • [HiDream LoRA] optimizations + small updates (#11381) · edd78804
      Linoy Tsaban authored
      
      
      * 1. add pre-computation of prompt embeddings when custom prompts are used as well (see the sketch after this list)
      2. save the model card even if the model is not pushed to the Hub
      3. remove scheduler initialization from the code example - not necessary anymore (it's now in the base model's config)
      4. add skip_final_inference - allows running validation but skipping the final loading of the pipeline with the LoRA weights, to reduce memory requirements
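      A minimal sketch of the pre-computation idea from item 1 (helper and names are illustrative, not the script's actual API): encode every unique prompt once up front so the text encoders can be dropped before the training loop.

      ```python
      import torch

      def precompute_prompt_embeddings(prompts, encode_fn):
          """Encode each unique prompt once; encode_fn stands in for the
          script's real encode_prompt logic (hypothetical signature)."""
          cache = {}
          with torch.no_grad():
              for prompt in set(prompts):
                  cache[prompt] = encode_fn(prompt)
          return cache

      # Toy usage with a stand-in encoder; duplicates are encoded only once.
      embeds = precompute_prompt_embeddings(
          ["a photo of sks dog", "a photo of sks dog"],
          encode_fn=lambda p: torch.randn(1, 77, 768),
      )
      print(len(embeds))  # -> 1
      ```

      Once the cache is built, the text encoders no longer need to sit in memory alongside the transformer; skip_final_inference applies the same idea to the final LoRA-loaded pipeline.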
      
      * pre-encode validation prompt as well
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * pre-encode validation prompt as well
      
      * Apply style fixes
      
      * empty commit
      
      * change default trained modules
      
      * empty commit
      
      * address comments + change encoding of validation prompt (before, it was only pre-encoded if custom prompts were provided, but it should be pre-encoded either way)
      
      * Apply style fixes
      
      * empty commit
      
      * fix validation_embeddings definition
      
      * fix final inference condition
      
      * fix pipeline deletion in last inference
      
      * Apply style fixes
      
      * empty commit
      
      * layers
      
      * remove README remarks about pre-computing only when an instance prompt is provided, and change the example to 3D icons
      
      * smol fix
      
      * empty commit
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
  6. 22 Apr, 2025 1 commit
    • [LoRA] add LoRA support to HiDream and fine-tuning script (#11281) · e30d3bf5
      Linoy Tsaban authored
      
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * move prompt embeds, pooled embeds outside
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * fix import
      
      * fix import and loading of tokenizer 4 / text encoder 4
      
      * te
      
      * prompt embeds
      
      * fix naming
      
      * shapes
      
      * initial commit to add HiDreamImageLoraLoaderMixin
      
      * fix init
      
      * add tests
      
      * loader
      
      * fix model input
      
      * add code example to readme
      
      * fix default max length of text encoders
      
      * prints
      
      * nullify the training condition in unpatchify as a temporary fix for the incompatible shaping of the transformer output during training
      
      * smol fix
      
      * unpatchify
      
      * unpatchify
      
      * fix validation
      
      * flip pred and loss
      
      * fix shift!!!
      
      * revert unpatchify changes (for now)
      
      * smol fix
      
      * Apply style fixes
      
      * workaround moe training
      
      * workaround moe training
      
      * remove prints
      
      * to reduce some memory, keep the VAE in `weight_dtype`, same as we do for Flux (it's the same VAE):
      https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
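      A hedged sketch of that dtype choice (the checkpoint path and image size are assumptions; HiDream shares Flux's VAE, and the FLUX.1-dev repo is gated):

      ```python
      import torch
      from diffusers import AutoencoderKL

      weight_dtype = torch.bfloat16  # assumption: bf16 mixed-precision training

      # Keep the frozen VAE in weight_dtype instead of fp32 to save memory.
      vae = AutoencoderKL.from_pretrained(
          "black-forest-labs/FLUX.1-dev", subfolder="vae", torch_dtype=weight_dtype
      )
      vae.requires_grad_(False)

      pixels = torch.randn(1, 3, 512, 512, dtype=weight_dtype)
      with torch.no_grad():
          latents = vae.encode(pixels).latent_dist.sample()
      ```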
      
      
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * add support for cpu offloading of text encoders
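      Roughly what the text-encoder CPU offloading amounts to (illustrative sketch, not the script's exact code): each encoder visits the GPU only for its encoding pass.

      ```python
      import gc
      import torch

      def encode_with_offload(text_encoder, encode_fn, device="cuda"):
          """Move the encoder onto the accelerator just for the forward pass,
          then park it back on the CPU and release cached GPU memory.
          encode_fn is a stand-in for the real encoding call."""
          text_encoder.to(device)
          with torch.no_grad():
              embeds = encode_fn(text_encoder)
          text_encoder.to("cpu")
          gc.collect()
          if torch.cuda.is_available():
              torch.cuda.empty_cache()
          return embeds
      ```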
      
      * Apply style fixes
      
      * adjust lr and rank for train example
      
      * fix copies
      
      * Apply style fixes
      
      * update README
      
      * update README
      
      * update README
      
      * fix license
      
      * keep prompt_2, prompt_3, prompt_4 as None in validation
      
      * remove reverse ode comment
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * vae offload change
      
      * fix text encoder offloading
      
      * Apply style fixes
      
      * cleaner to_kwargs
      
      * fix module name in copied from
      
      * add requirements
      
      * fix offloading
      
      * fix offloading
      
      * fix offloading
      
      * update transformers version in requirements
      
      * try AutoTokenizer
      
      * try AutoTokenizer
      
      * Apply style fixes
      
      * empty commit
      
      * Delete tests/lora/test_lora_layers_hidream.py
      
      * change tokenizer_4 to load with AutoTokenizer as well
      
      * make text_encoder_four and tokenizer_four configurable
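      A sketch of the configurable fourth tokenizer/text encoder (the default checkpoint id below is an assumption): since HiDream's fourth text encoder is a Llama-style causal LM, AutoTokenizer and AutoModelForCausalLM can resolve whatever checkpoint the user passes in.

      ```python
      from transformers import AutoModelForCausalLM, AutoTokenizer

      # Illustrative: a user-overridable checkpoint instead of a hard-coded class.
      text_encoder_4_path = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed default

      tokenizer_4 = AutoTokenizer.from_pretrained(text_encoder_4_path)
      text_encoder_4 = AutoModelForCausalLM.from_pretrained(
          text_encoder_4_path, output_hidden_states=True
      )
      ```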
      
      * save model card
      
      * save model card
      
      * revert T5
      
      * fix test
      
      * remove non-diffusers Lumina2 conversion
      
      ---------
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
  7. 15 Apr, 2025 1 commit
  8. 09 Apr, 2025 1 commit
  9. 04 Mar, 2025 1 commit
  10. 20 Feb, 2025 1 commit
  11. 06 Feb, 2025 1 commit
    • [bugfix] NPU Adaption for Sana (#10724) · cd0a4a82
      Leo Jiang authored
      
      
      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * NPU Adaption for Sana

      * [bugfix] NPU Adaption for Sana
      
      ---------
      Co-authored-by: J石页 <jiangshuo9@h-partners.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
  12. 24 Jan, 2025 1 commit
  13. 21 Jan, 2025 1 commit
  14. 15 Jan, 2025 1 commit
  15. 23 Dec, 2024 2 commits
  16. 18 Dec, 2024 1 commit
    • [LoRA] feat: lora support for SANA. (#10234) · 9408aa2d
      Sayak Paul authored
      
      
      * feat: lora support for SANA.
      
      * make fix-copies
      
      * rename test class.
      
      * attention_kwargs -> cross_attention_kwargs.
      
      * Revert "attention_kwargs -> cross_attention_kwargs."
      
      This reverts commit 23433bf9bccc12e0f2f55df26bae58a894e8b43b.
      
      * use the full 119-character max line length
      
      * sana lora fine-tuning script.
      
      * readme
      
      * add a note about the supported models.
      
      * Apply suggestions from code review
      Co-authored-by: Aryan <aryan@huggingface.co>
      
      * style
      
      * docs for attention_kwargs.
      
      * remove lora_scale from pag pipeline.
      
      * copy fix
      
      ---------
      Co-authored-by: Aryan <aryan@huggingface.co>
  17. 19 Nov, 2024 1 commit
  18. 01 Nov, 2024 2 commits
  19. 31 Oct, 2024 1 commit
  20. 28 Oct, 2024 2 commits
  21. 25 Oct, 2024 1 commit
  22. 22 Oct, 2024 1 commit
  23. 15 Oct, 2024 1 commit
  24. 28 Sep, 2024 1 commit
  25. 15 Sep, 2024 1 commit
  26. 14 Sep, 2024 1 commit
  27. 11 Sep, 2024 1 commit
  28. 14 Aug, 2024 1 commit
  29. 12 Aug, 2024 1 commit
    • [Flux Dreambooth LoRA] - te bug fixes & updates (#9139) · 413ca29b
      Linoy Tsaban authored
      * add requirements + fix link to bghira's guide
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * style
      
      * add tests
      
      * fix encode_prompt call
      
      * style
      
      * unpack_latents test
      
      * fix lora saving
      
      * remove default val for max_sequence_length in encode_prompt

      * remove default val for max_sequence_length in encode_prompt
      
      * style
      
      * testing
      
      * style
      
      * testing
      
      * testing
      
      * style
      
      * fix sizing issue
      
      * style
      
      * revert scaling
      
      * style
      
      * style
      
      * scaling test
      
      * style
      
      * scaling test
      
      * remove model pred operation left from pre-conditioning
      
      * remove model pred operation left from pre-conditioning
      
      * fix trainable params
      
      * remove te2 from casting
      
      * transformer to accelerator
      
      * remove prints
      
      * empty commit
  30. 09 Aug, 2024 1 commit
    • [Flux] Dreambooth LoRA training scripts (#9086) · 65e30907
      Linoy Tsaban authored
      
      
      * initial commit - dreambooth for flux
      
      * update transformer to be FluxTransformer2DModel
      
      * update training loop and validation inference
      
      * fix sd3->flux docs
      
      * add guidance handling, not sure if it makes sense(?)
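      What that guidance handling looks like, roughly (hedged sketch): Flux "dev" is guidance-distilled and embeds a guidance scale, so training fills a per-sample guidance tensor whenever the transformer config asks for it.

      ```python
      import torch

      def make_guidance(transformer_config, guidance_scale, batch_size, device="cpu"):
          # Flux "dev" embeds the guidance scale ("schnell" does not); sketch only,
          # mirroring the shape of the logic rather than the exact script code.
          if getattr(transformer_config, "guidance_embeds", False):
              return torch.full((batch_size,), float(guidance_scale), device=device)
          return None
      ```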
      
      * initial dreambooth lora commit
      
      * fix text_ids in compute_text_embeddings
      
      * fix imports of static methods
      
      * fix pipeline loading in readme, remove auto1111 docs for now
      
      * fix pipeline loading in readme, remove auto1111 docs for now, remove some irrelevant text_encoder_3 refs
      
      * Update examples/dreambooth/train_dreambooth_flux.py
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * fix te2 loading and remove te2 refs from text encoder training
      
      * fix tokenizer_2 initialization
      
      * remove text_encoder training refs from lora script (for now)
      
      * try with vae in bfloat16, fix model hook save
      
      * fix tokenization
      
      * fix static imports
      
      * fix CLIP import
      
      * remove text_encoder training refs (for now) from lora script
      
      * fix minor bug in encode_prompt, add guidance def in lora script, ...
      
      * fix unpack_latents args
      
      * fix license in readme
      
      * add "none" to weighting_scheme options for uniform sampling
      
      * style
      
      * adapt model saving - remove text encoder refs
      
      * adapt model loading - remove text encoder refs
      
      * initial commit for readme
      
      * Update examples/dreambooth/train_dreambooth_lora_flux.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_flux.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix vae casting
      
      * remove precondition_outputs
      
      * readme
      
      * readme
      
      * style
      
      * readme
      
      * readme
      
      * update weighting scheme default & docs
      
      * style
      
      * add text_encoder training to lora script, change vae_scale_factor value in both
      
      * style
      
      * text encoder training fixes
      
      * style
      
      * update readme
      
      * minor fixes
      
      * fix te params
      
      * fix te params
      
      ---------
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
  31. 21 Jul, 2024 1 commit
  32. 05 Jul, 2024 1 commit
  33. 24 Jun, 2024 1 commit
  34. 20 Jun, 2024 1 commit
  35. 19 Jun, 2024 1 commit
  36. 18 Jun, 2024 2 commits