1. 08 May, 2025 3 commits
  2. 07 May, 2025 2 commits
    • clean up the __Init__ for stable_diffusion (#11500) · 53bd367b
      YiYi Xu authored
      up
    • Cosmos (#10660) · 7b904941
      Aryan authored
      * begin transformer conversion
      
      * refactor
      
      * refactor
      
      * refactor
      
      * refactor
      
      * refactor
      
      * refactor
      
      * update
      
      * add conversion script
      
      * add pipeline
      
      * make fix-copies
      
      * remove einops
      
      * update docs
      
      * gradient checkpointing
      
      * add transformer test
      
      * update
      
      * debug
      
      * remove prints
      
      * match sigmas
      
      * add vae pt. 1
      
      * finish CV* vae
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * make fix-copies
      
      * update
      
      * make fix-copies
      
      * fix
      
      * update
      
      * update
      
      * make fix-copies
      
      * update
      
      * update tests
      
      * handle device and dtype for safety checker; required in latest diffusers
      
      * remove enable_gqa and use repeat_interleave instead
      
      * enforce safety checker; use dummy checker in fast tests
      
      * add review suggestion for ONNX export
      Co-Authored-By: Asfiya Baig <asfiyab@nvidia.com>
      
      * fix safety_checker issues when not passed explicitly
      
      We could either do what's done in this commit, or update the Cosmos examples to explicitly pass the safety checker
      
      * use cosmos guardrail package
      
      * auto format docs
      
      * update conversion script to support 14B models
      
      * update name CosmosPipeline -> CosmosTextToWorldPipeline
      
      * update docs
      
      * fix docs
      
      * fix group offload test failing for vae
      
      ---------
      Co-authored-by: Asfiya Baig <asfiyab@nvidia.com>
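One bullet in the Cosmos commit above replaces SDPA's `enable_gqa` flag with an explicit `repeat_interleave`. A minimal sketch of that substitution, with illustrative shapes and names (not the pipeline's actual attention code): grouped-query attention repeats each key/value head until the head count matches the queries.

```python
import torch
import torch.nn.functional as F

# Grouped-query attention without relying on SDPA's enable_gqa flag:
# repeat each key/value head so the head count matches the queries.
batch, q_heads, kv_heads, seq, head_dim = 2, 8, 2, 16, 64
q = torch.randn(batch, q_heads, seq, head_dim)
k = torch.randn(batch, kv_heads, seq, head_dim)
v = torch.randn(batch, kv_heads, seq, head_dim)

k = k.repeat_interleave(q_heads // kv_heads, dim=1)
v = v.repeat_interleave(q_heads // kv_heads, dim=1)
out = F.scaled_dot_product_attention(q, k, v)  # (2, 8, 16, 64)
```

The result is numerically equivalent to GQA, at the cost of materializing the repeated key/value tensors.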
  3. 06 May, 2025 1 commit
  4. 01 May, 2025 1 commit
  5. 30 Apr, 2025 1 commit
  6. 28 Apr, 2025 1 commit
  7. 24 Apr, 2025 2 commits
  8. 23 Apr, 2025 2 commits
  9. 22 Apr, 2025 2 commits
    • [HiDream] move deprecation to 0.35.0 (#11384) · 448c72a2
      YiYi Xu authored
      up
    • [LoRA] add LoRA support to HiDream and fine-tuning script (#11281) · e30d3bf5
      Linoy Tsaban authored
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * move prompt embeds, pooled embeds outside
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * fix import
      
      * fix import and tokenizer 4, text encoder 4 loading
      
      * te
      
      * prompt embeds
      
      * fix naming
      
      * shapes
      
      * initial commit to add HiDreamImageLoraLoaderMixin
      
      * fix init
      
      * add tests
      
      * loader
      
      * fix model input
      
      * add code example to readme
      
      * fix default max length of text encoders
      
      * prints
      
      * nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
      
      * smol fix
      
      * unpatchify
      
      * unpatchify
      
      * fix validation
      
      * flip pred and loss
      
      * fix shift!!!
      
      * revert unpatchify changes (for now)
      
      * smol fix
      
      * Apply style fixes
      
      * workaround moe training
      
      * workaround moe training
      
      * remove prints
      
      * to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae)
      https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * add support for cpu offloading of text encoders
      
      * Apply style fixes
      
      * adjust lr and rank for train example
      
      * fix copies
      
      * Apply style fixes
      
      * update README
      
      * update README
      
      * update README
      
      * fix license
      
      * keep prompt2,3,4 as None in validation
      
      * remove reverse ode comment
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * vae offload change
      
      * fix text encoder offloading
      
      * Apply style fixes
      
      * cleaner to_kwargs
      
      * fix module name in copied from
      
      * add requirements
      
      * fix offloading
      
      * fix offloading
      
      * fix offloading
      
      * update transformers version in reqs
      
      * try AutoTokenizer
      
      * try AutoTokenizer
      
      * Apply style fixes
      
      * empty commit
      
      * Delete tests/lora/test_lora_layers_hidream.py
      
      * change tokenizer_4 to load with AutoTokenizer as well
      
      * make text_encoder_four and tokenizer_four configurable
      
      * save model card
      
      * save model card
      
      * revert T5
      
      * fix test
      
      * remove non diffusers lumina2 conversion
      
      ---------
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
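The memory note in the commit above (keeping the VAE in `weight_dtype`, as the linked Flux LoRA script does) boils down to casting the frozen VAE to the mixed-precision dtype while leaving the LoRA-adapted model's trainable parameters alone. A hedged sketch with stand-in modules; the real script loads the actual VAE and transformer from the checkpoint:

```python
import torch
import torch.nn as nn

# Stand-in modules for illustration; the training script would load the
# real VAE and transformer from the pretrained checkpoint instead.
vae = nn.Conv2d(3, 4, kernel_size=1)
transformer = nn.Linear(4, 4)

weight_dtype = torch.bfloat16  # mixed-precision dtype used during training

# The VAE is frozen and only used for encoding inputs, so it can live in
# the low-precision dtype to save memory; trainable params stay as-is.
vae.requires_grad_(False)
vae.to(dtype=weight_dtype)
```

Since gradients never flow through the VAE weights, the lower precision only affects the encoded latents, not optimization.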
  10. 21 Apr, 2025 3 commits
  11. 18 Apr, 2025 2 commits
  12. 17 Apr, 2025 1 commit
  13. 16 Apr, 2025 2 commits
  14. 15 Apr, 2025 3 commits
  15. 14 Apr, 2025 1 commit
  16. 13 Apr, 2025 1 commit
    • [ControlNet] Adds controlnet for SanaTransformer (#11040) · f1f38ffb
      Ishan Modi authored
      * added controlnet for sana transformer
      
      * improve code quality
      
      * addressed PR comments
      
      * bug fixes
      
      * added test cases
      
      * update
      
      * added dummy objects
      
      * addressed PR comments
      
      * update
      
      * Forcing update
      
      * add to docs
      
      * code quality
      
      * addressed PR comments
      
      * addressed PR comments
      
      * update
      
      * addressed PR comments
      
      * added proper styling
      
      * update
      
      * Revert "added proper styling"
      
      This reverts commit 344ee8a7014ada095b295034ef84341f03b0e359.
      
      * manually ordered
      
      * Apply suggestions from code review
      
      ---------
      Co-authored-by: Aryan <contact.aryanvs@gmail.com>
  17. 11 Apr, 2025 1 commit
  18. 10 Apr, 2025 1 commit
  19. 09 Apr, 2025 4 commits
  20. 08 Apr, 2025 1 commit
  21. 07 Apr, 2025 1 commit
  22. 04 Apr, 2025 3 commits
    • [LTX0.9.5] Refactor `LTXConditionPipeline` for text-only conditioning (#11174) · 13e48492
      Tolga Cangöz authored
      * Refactor `LTXConditionPipeline` to add text-only conditioning
      
      * style
      
      * up
      
      * Refactor `LTXConditionPipeline` to streamline condition handling and improve clarity
      
      * Improve condition checks
      
      * Simplify latents handling based on conditioning type
      
      * Refactor rope_interpolation_scale preparation for clarity and efficiency
      
      * Update LTXConditionPipeline docstring to clarify supported input types
      
      * Add LTX Video 0.9.5 model to documentation
      
      * Clarify documentation to indicate support for text-only conditioning without passing `conditions`
      
      * refactor: comment out unused parameters in LTXConditionPipeline
      
      * fix: restore previously commented parameters in LTXConditionPipeline
      
      * fix: remove unused parameters from LTXConditionPipeline
      
      * refactor: remove unnecessary lines in LTXConditionPipeline
    • [feat]Add strength in flux_fill pipeline (denoising strength for fluxfill) (#10603) · 94f2c48d
      Suprhimp authored
      * [feat]add strength in flux_fill pipeline
      
      * Update src/diffusers/pipelines/flux/pipeline_flux_fill.py
      
      * Update src/diffusers/pipelines/flux/pipeline_flux_fill.py
      
      * Update src/diffusers/pipelines/flux/pipeline_flux_fill.py
      
      * [refactor] refactor after review
      
      * [fix] change comment
      
      * Apply style fixes
      
      * empty
      
      * fix
      
      * update prepare_latents from flux.img2img pipeline
      
      * style
      
      * Update src/diffusers/pipelines/flux/pipeline_flux_fill.py
      
      ---------
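The `strength` parameter added in the commit above follows the usual img2img convention: it determines how much of the noise schedule is actually run, so lower values preserve more of the input image. A hypothetical sketch of that timestep truncation (names are illustrative, not the pipeline's exact helper):

```python
def get_timesteps(num_inference_steps: int, strength: float):
    # strength=1.0 runs the full schedule; strength=0.3 keeps only the
    # last 30% of the denoising steps, preserving more of the input.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    timesteps = list(range(num_inference_steps))[t_start:]
    return timesteps, num_inference_steps - t_start
```

For example, `get_timesteps(10, 0.5)` keeps the final 5 of 10 steps.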
    • Fixed requests.get function call by adding timeout parameter. (#11156) · f10775b1
      Kenneth Gerald Hamilton authored
      * Fixed requests.get function call by adding timeout parameter.
      
      * declare DIFFUSERS_REQUEST_TIMEOUT in constants and import when needed
      
      * remove unneeded os import
      
      * Apply style fixes
      
      ---------
      Co-authored-by: Sai-Suraj-27 <sai.suraj.27.729@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
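The fix above is the standard guard against hung HTTP requests: without a `timeout=` argument, `requests.get` can block indefinitely on a stalled server. A minimal sketch of the described pattern; the timeout value here is an assumption for illustration, the real `DIFFUSERS_REQUEST_TIMEOUT` lives in the library's constants module:

```python
import requests

# Assumed value for illustration; the actual DIFFUSERS_REQUEST_TIMEOUT is
# declared in diffusers' constants module, as the commit describes.
DIFFUSERS_REQUEST_TIMEOUT = 60

def fetch(url: str) -> bytes:
    # timeout= bounds both connect and read time; a TimeoutError-style
    # requests.exceptions.Timeout is raised instead of hanging forever.
    response = requests.get(url, timeout=DIFFUSERS_REQUEST_TIMEOUT)
    response.raise_for_status()
    return response.content
```

Centralizing the value in a constant keeps every call site consistent and makes the timeout easy to tune in one place.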
  23. 03 Apr, 2025 1 commit