1. 02 Jun, 2025 1 commit
    • [docs] Model cards (#11112) · c9347206
      Steven Liu authored
      * initial
      
      * update
      
      * hunyuanvideo
      
      * ltx
      
      * fix
      
      * wan
      
      * gen guide
      
      * feedback
      
      * feedback
      
      * pipeline-level quant config
      
      * feedback
      
      * ltx
  2. 22 Apr, 2025 1 commit
    • [LoRA] add LoRA support to HiDream and fine-tuning script (#11281) · e30d3bf5
      Linoy Tsaban authored
      
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * move prompt embeds, pooled embeds outside
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * fix import
      
      * fix import and tokenizer 4, text encoder 4 loading
      
      * te
      
      * prompt embeds
      
      * fix naming
      
      * shapes
      
      * initial commit to add HiDreamImageLoraLoaderMixin
      
      * fix init
      
      * add tests
      
      * loader
      
      * fix model input
      
      * add code example to readme
      
      * fix default max length of text encoders
      
      * prints
      
      * nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
      
      * smol fix
      
      * unpatchify
      
      * unpatchify
      
      * fix validation
      
      * flip pred and loss
      
      * fix shift!!!
      
      * revert unpatchify changes (for now)
      
      * smol fix
      
      * Apply style fixes
      
      * workaround moe training
      
      * workaround moe training
      
      * remove prints
      
      * to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae)
      https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * add support for cpu offloading of text encoders
      
      * Apply style fixes
      
      * adjust lr and rank for train example
      
      * fix copies
      
      * Apply style fixes
      
      * update README
      
      * update README
      
      * update README
      
      * fix license
      
      * keep prompt2,3,4 as None in validation
      
      * remove reverse ode comment
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * vae offload change
      
      * fix text encoder offloading
      
      * Apply style fixes
      
      * cleaner to_kwargs
      
      * fix module name in copied from
      
      * add requirements
      
      * fix offloading
      
      * fix offloading
      
      * fix offloading
      
      * update transformers version in reqs
      
      * try AutoTokenizer
      
      * try AutoTokenizer
      
      * Apply style fixes
      
      * empty commit
      
      * Delete tests/lora/test_lora_layers_hidream.py
      
      * change tokenizer_4 to load with AutoTokenizer as well
      
      * make text_encoder_four and tokenizer_four configurable
      
      * save model card
      
      * save model card
      
      * revert T5
      
      * fix test
      
      * remove non diffusers lumina2 conversion
      
      ---------
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
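The fine-tuning script in this commit trains low-rank adapters for HiDream. As a rough, self-contained illustration of the underlying LoRA idea (not the diffusers implementation; all names here are hypothetical), an adapter replaces a frozen weight W with W + (alpha / rank) * (B @ A):

```python
# Minimal sketch of the LoRA update applied by adapters like the ones
# trained by train_dreambooth_lora_hidream.py. Pure-Python illustration
# of the technique, not the diffusers/PEFT implementation.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_merge(w, lora_a, lora_b, alpha, rank):
    """Return the effective weight W + (alpha / rank) * (B @ A)."""
    scale = alpha / rank
    delta = matmul(lora_b, lora_a)  # (out, r) @ (r, in) -> (out, in)
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

# A 2x2 frozen weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]          # lora_A: (r=1, in=2)
B = [[1.0], [0.0]]        # lora_B: (out=2, r=1)
W_eff = lora_merge(W, A, B, alpha=1.0, rank=1)
print(W_eff)  # [[2.0, 2.0], [0.0, 1.0]]
```

Because only A and B (rank * (in + out) values) are trained while W stays frozen, the adapter checkpoint stays small, which is what makes the DreamBooth LoRA scripts practical on limited memory.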
  3. 17 Apr, 2025 1 commit
  4. 15 Apr, 2025 1 commit
    • [LoRA] Add LoRA support to AuraFlow (#10216) · 9352a5ca
      Hameer Abbasi authored
      
      
      * Add AuraFlowLoraLoaderMixin
      
      * Add comments, remove qkv fusion
      
      * Add Tests
      
      * Add AuraFlowLoraLoaderMixin to documentation
      
      * Add Suggested changes
      
      * Change attention_kwargs->joint_attention_kwargs
      
      * Rebasing derp.
      
      * fix
      
      * fix
      
      * Quality fixes.
      
      * make style
      
      * `make fix-copies`
      
      * `ruff check --fix`
      
      * Attempt 1 to fix tests.
      
      * Attempt 2 to fix tests.
      
      * Attempt 3 to fix tests.
      
      * Address review comments.
      
      * Rebasing derp.
      
      * Get more tests passing by copying from Flux. Address review comments.
      
      * `joint_attention_kwargs`->`attention_kwargs`
      
      * Add `lora_scale` property for text encoder LoRAs.
      
      * Make test better.
      
      * Remove useless property.
      
      * Skip TE-only tests for AuraFlow.
      
      * Support LoRA for non-CLIP TEs.
      
      * Restore LoRA tests.
      
      * Undo adding LoRA support for non-CLIP TEs.
      
      * Undo support for TE in AuraFlow LoRA.
      
      * `make fix-copies`
      
      * Sync with upstream changes.
      
      * Remove unneeded stuff.
      
      * Mirror `Lumina2`.
      
      * Skip for MPS.
      
      * Address review comments.
      
      * Remove duplicated code.
      
      * Remove unnecessary code.
      
      * Remove repeated docs.
      
      * Propagate attention.
      
      * Fix TE target modules.
      
      * MPS fix for LoRA tests.
      
      * Unrelated TE LoRA tests fix.
      
      * Fix AuraFlow LoRA tests by applying to the right denoiser layers.
      Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com>
      
      * Apply style fixes
      
      * empty commit
      
      * Fix the repo consistency issues.
      
      * Remove unrelated changes.
      
      * Style.
      
      * Fix `test_lora_fuse_nan`.
      
      * fix quality issues.
      
      * `pytest.xfail` -> `ValueError`.
      
      * Add back `skip_mps`.
      
      * Apply style fixes
      
      * `make fix-copies`
      
      ---------
      Co-authored-by: Warlord-K <warlordk28@gmail.com>
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
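Commits like this one add a per-pipeline loader mixin (here `AuraFlowLoraLoaderMixin`) so that LoRA loading and adapter-scaling logic is written once and inherited by each pipeline. A simplified, hypothetical sketch of that mixin pattern (stand-in names, not the diffusers API):

```python
# Hypothetical sketch of the loader-mixin pattern that classes like
# AuraFlowLoraLoaderMixin follow: adapter bookkeeping lives in a mixin,
# and each pipeline class merely inherits it. Simplified stand-ins only.

class SimpleLoraLoaderMixin:
    def load_lora_weights(self, name, state_dict, scale=1.0):
        # Register an adapter's weights under a name with a default scale.
        self._adapters = getattr(self, "_adapters", {})
        self._adapters[name] = {"state_dict": state_dict, "scale": scale}

    def set_adapters(self, names, scales):
        # Re-weight already-loaded adapters, e.g. to blend two styles.
        for name, scale in zip(names, scales):
            self._adapters[name]["scale"] = scale

    def active_adapters(self):
        return sorted(self._adapters)

class ToyPipeline(SimpleLoraLoaderMixin):
    """A pipeline gains LoRA support just by inheriting the mixin."""

pipe = ToyPipeline()
pipe.load_lora_weights("style", {"layer.lora_A": [0.1]})
pipe.load_lora_weights("subject", {"layer.lora_A": [0.2]})
pipe.set_adapters(["style", "subject"], [0.5, 1.0])
print(pipe.active_adapters())  # ['style', 'subject']
```

The design choice the commit history reflects is composition over duplication: model-family quirks (e.g. AuraFlow's `attention_kwargs` naming, or skipping text-encoder LoRAs) stay in the family's mixin, while shared behavior is copied or inherited from a common base.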
  5. 20 Feb, 2025 1 commit
  6. 18 Feb, 2025 1 commit
  7. 16 Dec, 2024 1 commit
  8. 26 Jul, 2024 1 commit
    • [Chore] add `LoraLoaderMixin` to the inits (#8981) · d87fe95f
      Sayak Paul authored
      
      
      * introduce `LoraBaseMixin` to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      * loraloadermixin.
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
  9. 25 Jul, 2024 2 commits
    • Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976) · 62863bb1
      YiYi Xu authored
      Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)"
      
      This reverts commit 527430d0.
    • [LoRA] introduce LoraBaseMixin to promote reusability. (#8774) · 527430d0
      Sayak Paul authored
      
      
      * introduce `LoraBaseMixin` to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
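Several of the squashed messages above ("clarify the scope of fuse_lora and unfuse_lora", "num_fused_loras", "fix fuse_nan test") concern the fuse/unfuse contract: fusing bakes the scaled low-rank delta into the base weight for faster inference, unfusing subtracts it back out, and a safety check rejects adapters that would introduce NaNs. A simplified, hypothetical sketch of that contract (not the actual `LoraBaseMixin` code):

```python
import math

# Rough sketch of the fuse_lora/unfuse_lora contract referenced in the
# commits above. Fusing merges the scaled LoRA delta into the base weight;
# unfusing is its exact inverse; a NaN guard mirrors the idea behind the
# "fuse_nan" test. Simplified, hypothetical code, not diffusers' API.

class FusableLora:
    def __init__(self, weight, delta, scale=1.0):
        self.weight = list(weight)   # flattened base weights
        self.delta = list(delta)     # flattened LoRA delta (B @ A)
        self.scale = scale
        self.num_fused_loras = 0     # bookkeeping, as in the commit messages

    def fuse_lora(self, safe_fusing=False):
        merged = [w + self.scale * d for w, d in zip(self.weight, self.delta)]
        if safe_fusing and any(math.isnan(v) for v in merged):
            raise ValueError("Fusing produced NaN values; adapter rejected.")
        self.weight = merged
        self.num_fused_loras += 1

    def unfuse_lora(self):
        if self.num_fused_loras == 0:
            return
        self.weight = [w - self.scale * d for w, d in zip(self.weight, self.delta)]
        self.num_fused_loras -= 1

layer = FusableLora([1.0, 2.0], [0.5, -0.5], scale=2.0)
layer.fuse_lora()
print(layer.weight)   # [2.0, 1.0]
layer.unfuse_lora()
print(layer.weight)   # [1.0, 2.0] -- back to the original base weights
```

Keeping this logic in one base mixin is exactly the reusability the commit title names: every pipeline inherits the same fuse/unfuse semantics instead of reimplementing them.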
  10. 03 Jul, 2024 2 commits
  11. 08 Feb, 2024 1 commit
  12. 20 Nov, 2023 1 commit
    • [docs] Loader APIs (#5813) · 7457aa67
      Steven Liu authored
      * first draft
      
      * remove old loader doc
      
      * start adding lora code examples
      
      * finish
      
      * add link to loralinearlayer
      
      * feedback
      
      * fix