14 Jun, 2025 (1 commit)
• Chroma Pipeline (#11698) · 8adc6003
      Edna authored
      
      
      * working state from hameerabbasi and iddl
      
* working state from hameerabbasi and iddl (transformer)
      
      * working state (normalization)
      
      * working state (embeddings)
      
      * add chroma loader
      
      * add chroma to mappings
      
      * add chroma to transformer init
      
      * take out variant stuff
      
      * get decently far in changing variant stuff
      
      * add chroma init
      
      * make chroma output class
      
      * add chroma transformer to dummy tp
      
      * add chroma to init
      
      * add chroma to init
      
      * fix single file
      
      * update
      
      * update
      
      * add chroma to auto pipeline
      
      * add chroma to pipeline init
      
      * change to chroma transformer
      
      * take out variant from blocks
      
      * swap embedder location
      
      * remove prompt_2
      
      * work on swapping text encoders
      
      * remove mask function
      
* don't modify mask (for now)
      
      * wrap attn mask
      
      * no attn mask (can't get it to work)
      
      * remove pooled prompt embeds
      
* change to my own unpooled embedder
      
      * fix load
      
      * take pooled projections out of transformer
      
      * ensure correct dtype for chroma embeddings
      
      * update
      
      * use dn6 attn mask + fix true_cfg_scale
      
      * use chroma pipeline output
      
      * use DN6 embeddings
      
      * remove guidance
      
      * remove guidance embed (pipeline)
      
      * remove guidance from embeddings
      
      * don't return length
      
* don't change dtype
      
      * remove unused stuff, fix up docs
      
      * add chroma autodoc
      
      * add .md (oops)
      
      * initial chroma docs
      
      * undo don't change dtype
      
      * undo arxiv change
      
      unsure why that happened
      
      * fix hf papers regression in more places
      
      * Update docs/source/en/api/pipelines/chroma.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * do_cfg -> self.do_classifier_free_guidance
      
      * Update docs/source/en/api/models/chroma_transformer.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update chroma.md
      
      * Move chroma layers into transformer
      
      * Remove pruned AdaLayerNorms
      
      * Add chroma fast tests
      
      * (untested) batch cond and uncond
      
      * Add # Copied from for shift
      
      * Update # Copied from statements
      
      * update norm imports
      
      * Revert cond + uncond batching
      
      * Add transformer tests
      
      * move chroma test (oops)
      
      * chroma init
      
      * fix chroma pipeline fast tests
      
      * Update src/diffusers/models/transformers/transformer_chroma.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Move Approximator and Embeddings
      
      * Fix auto pipeline + make style, quality
      
      * make style
      
      * Apply style fixes
      
      * switch to new input ids
      
      * fix # Copied from error
      
      * remove # Copied from on protected members
      
      * try to fix import
      
      * fix import
      
* make fix-copies
      
      * revert style fix
      
      * update chroma transformer params
      
      * update chroma transformer approximator init params
      
      * update to pad tokens
      
      * fix batch inference
      
      * Make more pipeline tests work
      
      * Make most transformer tests work
      
      * fix docs
      
      * make style, make quality
      
      * skip batch tests
      
      * fix test skipping
      
      * fix test skipping again
      
      * fix for tests
      
* Fix all pipeline tests
      
      * update
      
      * push local changes, fix docs
      
      * add encoder test, remove pooled dim
      
      * default proj dim
      
      * fix tests
      
      * fix equal size list input
      
      * update
      
      * push local changes, fix docs
      
      * add encoder test, remove pooled dim
      
      * default proj dim
      
      * fix tests
      
      * fix equal size list input
      
      * Revert "fix equal size list input"
      
      This reverts commit 3fe4ad67d58d83715bc238f8654f5e90bfc5653c.
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
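The entry above adds the Chroma text-to-image pipeline to diffusers. A minimal usage sketch based on the commit messages (the guidance embed was removed and the pipeline uses true classifier-free guidance, hence the negative prompt); the checkpoint id and generation parameters here are assumptions, not taken from the PR:

```python
import torch
from diffusers import ChromaPipeline  # registered in the pipeline init by this PR

# The checkpoint id below is an assumption; substitute your Chroma checkpoint.
pipe = ChromaPipeline.from_pretrained("lodestones/Chroma", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # optional: lowers VRAM use at some speed cost

image = pipe(
    prompt="a red panda curled up on a mossy branch",
    negative_prompt="blurry, low quality",  # true CFG, so the negative prompt is used
    num_inference_steps=30,
    guidance_scale=4.0,
).images[0]
image.save("chroma.png")
```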
06 Jun, 2025 (1 commit)
• Wan VACE (#11582) · 73a9d585
      Aryan authored
      * initial support
      
      * make fix-copies
      
      * fix no split modules
      
      * add conversion script
      
      * refactor
      
      * add pipeline test
      
      * refactor
      
      * fix bug with mask
      
      * fix for reference images
      
      * remove print
      
      * update docs
      
      * update slices
      
      * update
      
      * update
      
      * update example
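A rough text-to-video sketch for the Wan VACE pipeline added above; the Hub repo id and call parameters are placeholders inferred from how other Wan pipelines in diffusers are used, not confirmed by the PR:

```python
import torch
from diffusers import WanVACEPipeline
from diffusers.utils import export_to_video

# Placeholder repo id; point this at a converted Wan VACE checkpoint
# (the PR above includes the conversion script).
pipe = WanVACEPipeline.from_pretrained(
    "Wan-AI/Wan2.1-VACE-1.3B-diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Plain text-to-video; the pipeline also accepts reference images and masks
# (see "fix for reference images" / "fix bug with mask" in the commits above).
output = pipe(
    prompt="a paper boat drifting down a rain-soaked street, cinematic",
    num_frames=81,
    num_inference_steps=30,
)
export_to_video(output.frames[0], "wan_vace.mp4", fps=16)
```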
22 Apr, 2025 (1 commit)
• [LoRA] add LoRA support to HiDream and fine-tuning script (#11281) · e30d3bf5
      Linoy Tsaban authored
      
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * move prompt embeds, pooled embeds outside
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: hlky <hlky@hlky.ac>
      
      * fix import
      
      * fix import and tokenizer 4, text encoder 4 loading
      
      * te
      
      * prompt embeds
      
      * fix naming
      
      * shapes
      
      * initial commit to add HiDreamImageLoraLoaderMixin
      
      * fix init
      
      * add tests
      
      * loader
      
      * fix model input
      
      * add code example to readme
      
      * fix default max length of text encoders
      
      * prints
      
* nullify training cond in unpatchify as a temporary fix for incompatible shaping of the transformer output during training
      
      * smol fix
      
      * unpatchify
      
      * unpatchify
      
      * fix validation
      
      * flip pred and loss
      
      * fix shift!!!
      
      * revert unpatchify changes (for now)
      
      * smol fix
      
      * Apply style fixes
      
      * workaround moe training
      
      * workaround moe training
      
      * remove prints
      
* to reduce memory, keep the VAE in `weight_dtype`, same as we do for Flux (it's the same VAE):
https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207

      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * add support for cpu offloading of text encoders
      
      * Apply style fixes
      
      * adjust lr and rank for train example
      
      * fix copies
      
      * Apply style fixes
      
      * update README
      
      * update README
      
      * update README
      
      * fix license
      
      * keep prompt2,3,4 as None in validation
      
      * remove reverse ode comment
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * vae offload change
      
      * fix text encoder offloading
      
      * Apply style fixes
      
      * cleaner to_kwargs
      
      * fix module name in copied from
      
      * add requirements
      
      * fix offloading
      
      * fix offloading
      
      * fix offloading
      
      * update transformers version in reqs
      
      * try AutoTokenizer
      
      * try AutoTokenizer
      
      * Apply style fixes
      
      * empty commit
      
      * Delete tests/lora/test_lora_layers_hidream.py
      
      * change tokenizer_4 to load with AutoTokenizer as well
      
      * make text_encoder_four and tokenizer_four configurable
      
      * save model card
      
      * save model card
      
      * revert T5
      
      * fix test
      
      * remove non diffusers lumina2 conversion
      
      ---------
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
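The PR above ships both `HiDreamImageLoraLoaderMixin` and the `train_dreambooth_lora_hidream.py` example. A sketch of loading a trained LoRA for inference, assuming the standard diffusers LoRA API; the LoRA repo id is hypothetical, and some setups pass the Llama text encoder (`text_encoder_4`) to the pipeline explicitly:

```python
import torch
from diffusers import HiDreamImagePipeline

# Base checkpoint; some setups load tokenizer_4/text_encoder_4 (Llama) separately
# and pass them to from_pretrained.
pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Full", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# load_lora_weights comes from the HiDreamImageLoraLoaderMixin added in this PR.
# "your-user/hidream-dreambooth-lora" is a hypothetical output of
# train_dreambooth_lora_hidream.py, not a real repository.
pipe.load_lora_weights("your-user/hidream-dreambooth-lora")

image = pipe(prompt="a photo of sks dog in a bucket").images[0]
image.save("hidream_lora.png")
```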
15 Apr, 2025 (1 commit)
• [LoRA] Add LoRA support to AuraFlow (#10216) · 9352a5ca
      Hameer Abbasi authored
      
      
      * Add AuraFlowLoraLoaderMixin
      
      * Add comments, remove qkv fusion
      
      * Add Tests
      
      * Add AuraFlowLoraLoaderMixin to documentation
      
      * Add Suggested changes
      
      * Change attention_kwargs->joint_attention_kwargs
      
      * Rebasing derp.
      
      * fix
      
      * fix
      
      * Quality fixes.
      
      * make style
      
      * `make fix-copies`
      
      * `ruff check --fix`
      
* Attempt 1 to fix tests.

* Attempt 2 to fix tests.

* Attempt 3 to fix tests.
      
      * Address review comments.
      
      * Rebasing derp.
      
      * Get more tests passing by copying from Flux. Address review comments.
      
      * `joint_attention_kwargs`->`attention_kwargs`
      
* Add `lora_scale` property for TE LoRAs.
      
      * Make test better.
      
      * Remove useless property.
      
      * Skip TE-only tests for AuraFlow.
      
      * Support LoRA for non-CLIP TEs.
      
      * Restore LoRA tests.
      
      * Undo adding LoRA support for non-CLIP TEs.
      
      * Undo support for TE in AuraFlow LoRA.
      
      * `make fix-copies`
      
      * Sync with upstream changes.
      
      * Remove unneeded stuff.
      
      * Mirror `Lumina2`.
      
      * Skip for MPS.
      
      * Address review comments.
      
      * Remove duplicated code.
      
      * Remove unnecessary code.
      
      * Remove repeated docs.
      
      * Propagate attention.
      
      * Fix TE target modules.
      
      * MPS fix for LoRA tests.
      
      * Unrelated TE LoRA tests fix.
      
      * Fix AuraFlow LoRA tests by applying to the right denoiser layers.
Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com>
      
      * Apply style fixes
      
      * empty commit
      
      * Fix the repo consistency issues.
      
      * Remove unrelated changes.
      
      * Style.
      
      * Fix `test_lora_fuse_nan`.
      
      * fix quality issues.
      
      * `pytest.xfail` -> `ValueError`.
      
      * Add back `skip_mps`.
      
      * Apply style fixes
      
      * `make fix-copies`
      
      ---------
Co-authored-by: Warlord-K <warlordk28@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
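With `AuraFlowLoraLoaderMixin` in place, AuraFlow LoRAs load like any other diffusers LoRA. A minimal sketch; `fal/AuraFlow` is the public base checkpoint, the LoRA id is hypothetical, and routing the LoRA scale through `attention_kwargs` follows the `joint_attention_kwargs` -> `attention_kwargs` rename in the commits above:

```python
import torch
from diffusers import AuraFlowPipeline

pipe = AuraFlowPipeline.from_pretrained("fal/AuraFlow", torch_dtype=torch.float16)
pipe.to("cuda")

# load_lora_weights is provided by the AuraFlowLoraLoaderMixin from this PR;
# the repo id below is a hypothetical example, not a real repository.
pipe.load_lora_weights("your-user/auraflow-style-lora")

image = pipe(
    prompt="an origami crane on a wooden desk, soft morning light",
    num_inference_steps=28,
    attention_kwargs={"scale": 0.8},  # scale the LoRA at call time (assumption)
).images[0]
image.save("auraflow_lora.png")
```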