1. 07 May, 2025 1 commit
    • Cosmos (#10660) · 7b904941
      Aryan authored
      
      
      * begin transformer conversion
      
      * refactor
      
      * refactor
      
      * refactor
      
      * refactor
      
      * refactor
      
      * refactor
      
      * update
      
      * add conversion script
      
      * add pipeline
      
      * make fix-copies
      
      * remove einops
      
      * update docs
      
      * gradient checkpointing
      
      * add transformer test
      
      * update
      
      * debug
      
      * remove prints
      
      * match sigmas
      
      * add vae pt. 1
      
      * finish CV* vae
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * make fix-copies
      
      * update
      
      * make fix-copies
      
      * fix
      
      * update
      
      * update
      
      * make fix-copies
      
      * update
      
      * update tests
      
      * handle device and dtype for safety checker; required in latest diffusers
      
      * remove enable_gqa and use repeat_interleave instead (see the sketch after this commit entry)
      
      * enforce safety checker; use dummy checker in fast tests
      
      * add review suggestion for ONNX export
      Co-Authored-By: Asfiya Baig <asfiyab@nvidia.com>
      
      * fix safety_checker issues when not passed explicitly
      
      We could either do what's done in this commit, or update the Cosmos examples to explicitly pass the safety checker
      
      * use cosmos guardrail package
      
      * auto format docs
      
      * update conversion script to support 14B models
      
      * update name CosmosPipeline -> CosmosTextToWorldPipeline
      
      * update docs
      
      * fix docs
      
      * fix group offload test failing for vae
      
      ---------
      Co-authored-by: Asfiya Baig <asfiyab@nvidia.com>
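
      The "remove enable_gqa and use repeat_interleave instead" step replaces PyTorch's enable_gqa flag of scaled_dot_product_attention (available in PyTorch 2.5+) with an explicit expansion of the grouped key/value heads, which keeps the attention call compatible with older PyTorch releases. Below is a minimal sketch of that pattern, not the actual Cosmos attention code in diffusers; the tensor shapes are made up for illustration.

      import torch
      import torch.nn.functional as F

      # Illustrative GQA shapes: 8 query heads share 2 key/value heads.
      batch, q_heads, kv_heads, seq_len, head_dim = 2, 8, 2, 16, 64
      query = torch.randn(batch, q_heads, seq_len, head_dim)
      key = torch.randn(batch, kv_heads, seq_len, head_dim)
      value = torch.randn(batch, kv_heads, seq_len, head_dim)

      # Instead of F.scaled_dot_product_attention(..., enable_gqa=True),
      # repeat each key/value head so it matches the number of query heads.
      key = key.repeat_interleave(q_heads // kv_heads, dim=1)
      value = value.repeat_interleave(q_heads // kv_heads, dim=1)

      out = F.scaled_dot_product_attention(query, key, value)
      print(out.shape)  # torch.Size([2, 8, 16, 64])

      The result matches what enable_gqa=True would produce, at the cost of materializing the repeated key/value tensors.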
  2. 06 May, 2025 1 commit
  3. 05 May, 2025 1 commit
  4. 01 May, 2025 2 commits
  5. 30 Apr, 2025 1 commit
  6. 24 Apr, 2025 1 commit
  7. 22 Apr, 2025 3 commits
    • [HiDream] move deprecation to 0.35.0 (#11384) · 448c72a2
      YiYi Xu authored
      up
    • Update modeling imports (#11129) · f108ad88
      Aryan authored
      update
    • [LoRA] add LoRA support to HiDream and fine-tuning script (#11281) · e30d3bf5
      Linoy Tsaban authored
      
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * initial commit
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * move prompt embeds, pooled embeds outside
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * fix import
      
      * fix import and tokenizer 4, text encoder 4 loading
      
      * te
      
      * prompt embeds
      
      * fix naming
      
      * shapes
      
      * initial commit to add HiDreamImageLoraLoaderMixin (a usage sketch appears after this commit entry)
      
      * fix init
      
      * add tests
      
      * loader
      
      * fix model input
      
      * add code example to readme
      
      * fix default max length of text encoders
      
      * prints
      
      * nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
      
      * smol fix
      
      * unpatchify
      
      * unpatchify
      
      * fix validation
      
      * flip pred and loss
      
      * fix shift!!!
      
      * revert unpatchify changes (for now)
      
      * smol fix
      
      * Apply style fixes
      
      * workaround moe training
      
      * workaround moe training
      
      * remove prints
      
      * to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae)
      https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
      
      
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * refactor to align with HiDream refactor
      
      * add support for cpu offloading of text encoders
      
      * Apply style fixes
      
      * adjust lr and rank for train example
      
      * fix copies
      
      * Apply style fixes
      
      * update README
      
      * update README
      
      * update README
      
      * fix license
      
      * keep prompt2,3,4 as None in validation
      
      * remove reverse ode comment
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_hidream.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * vae offload change
      
      * fix text encoder offloading
      
      * Apply style fixes
      
      * cleaner to_kwargs
      
      * fix module name in copied from
      
      * add requirements
      
      * fix offloading
      
      * fix offloading
      
      * fix offloading
      
      * update transformers version in reqs
      
      * try AutoTokenizer
      
      * try AutoTokenizer
      
      * Apply style fixes
      
      * empty commit
      
      * Delete tests/lora/test_lora_layers_hidream.py
      
      * change tokenizer_4 to load with AutoTokenizer as well
      
      * make text_encoder_four and tokenizer_four configurable
      
      * save model card
      
      * save model card
      
      * revert T5
      
      * fix test
      
      * remove non diffusers lumina2 conversion
      
      ---------
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
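
      Once examples/dreambooth/train_dreambooth_lora_hidream.py has produced a LoRA, the new HiDreamImageLoraLoaderMixin lets it be loaded through the standard load_lora_weights API. The following is a hedged usage sketch, not the README's exact code example: the base checkpoint ID "HiDream-ai/HiDream-I1-Full", the local LoRA path, and the prompt are illustrative assumptions, and the fourth (Llama) tokenizer/text encoder may need to be passed explicitly depending on the checkpoint layout, which this PR makes configurable.

      import torch
      from diffusers import HiDreamImagePipeline

      # Assumed base checkpoint; components not bundled with it would have to be
      # supplied explicitly via from_pretrained keyword arguments.
      pipe = HiDreamImagePipeline.from_pretrained(
          "HiDream-ai/HiDream-I1-Full", torch_dtype=torch.bfloat16
      ).to("cuda")

      # Load the LoRA produced by the DreamBooth fine-tuning script (illustrative path).
      pipe.load_lora_weights("path/to/trained-hidream-lora")

      image = pipe("a photo of sks dog in a bucket", num_inference_steps=28).images[0]
      image.save("hidream_lora.png")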
  8. 21 Apr, 2025 1 commit
  9. 19 Apr, 2025 1 commit
  10. 18 Apr, 2025 1 commit
  11. 17 Apr, 2025 2 commits
  12. 15 Apr, 2025 3 commits
  13. 14 Apr, 2025 1 commit
  14. 13 Apr, 2025 3 commits
  15. 11 Apr, 2025 2 commits
  16. 09 Apr, 2025 3 commits
  17. 08 Apr, 2025 1 commit
  18. 05 Apr, 2025 1 commit
  19. 02 Apr, 2025 4 commits
  20. 29 Mar, 2025 1 commit
  21. 25 Mar, 2025 1 commit
  22. 24 Mar, 2025 1 commit
    • New HunyuanVideo-I2V (#11066) · 8907a70a
      Aryan authored
      * update
      
      * update
      
      * update
      
      * add tests
      
      * update docs
      
      * raise value error
      
      * warning for true cfg and guidance scale (a hypothetical sketch of such a check follows this commit entry)
      
      * fix test
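
      The "warning for true cfg and guidance scale" step presumably flags runs where classical classifier-free guidance (true_cfg_scale > 1) is enabled on top of the model's embedded distilled guidance (guidance_scale). The sketch below is a hypothetical illustration of that kind of check, not the pipeline's actual code; the threshold and message wording are assumptions.

      import logging

      logger = logging.getLogger(__name__)

      def warn_on_conflicting_guidance(true_cfg_scale: float, guidance_scale: float) -> None:
          # Hypothetical check: HunyuanVideo applies embedded (distilled) guidance through
          # `guidance_scale`; turning on classical CFG as well is usually unintended.
          if true_cfg_scale > 1.0 and guidance_scale > 1.0:
              logger.warning(
                  "Both `true_cfg_scale` (%s) and `guidance_scale` (%s) are greater than 1; "
                  "embedded guidance and classifier-free guidance will both be applied.",
                  true_cfg_scale,
                  guidance_scale,
              )

      warn_on_conflicting_guidance(true_cfg_scale=4.0, guidance_scale=6.0)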
  23. 21 Mar, 2025 3 commits
  24. 20 Mar, 2025 1 commit