- 11 May, 2025 1 commit
Sayak Paul authored
* start. * add tests for framepack transformer model. * merge conflicts. * make to square. * fixes
- 09 May, 2025 1 commit
Aryan authored
update
- 08 May, 2025 1 commit
Aryan authored
fix
- 07 May, 2025 1 commit
Aryan authored
* begin transformer conversion * refactor * refactor * refactor * refactor * refactor * refactor * update * add conversion script * add pipeline * make fix-copies * remove einops * update docs * gradient checkpointing * add transformer test * update * debug * remove prints * match sigmas * add vae pt. 1 * finish CV* vae * update * update * update * update * update * update * make fix-copies * update * make fix-copies * fix * update * update * make fix-copies * update * update tests * handle device and dtype for safety checker; required in latest diffusers * remove enable_gqa and use repeat_interleave instead * enforce safety checker; use dummy checker in fast tests * add review suggestion for ONNX export Co-Authored-By:
Asfiya Baig <asfiyab@nvidia.com> * fix safety_checker issues when not passed explicitly We could either do what's done in this commit, or update the Cosmos examples to explicitly pass the safety checker * use cosmos guardrail package * auto format docs * update conversion script to support 14B models * update name CosmosPipeline -> CosmosTextToWorldPipeline * update docs * fix docs * fix group offload test failing for vae --------- Co-authored-by:
Asfiya Baig <asfiyab@nvidia.com>
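A minimal usage sketch of the renamed CosmosTextToWorldPipeline described above; the checkpoint id and generation arguments are assumptions for illustration, not taken from this log.

```python
import torch
from diffusers import CosmosTextToWorldPipeline
from diffusers.utils import export_to_video

# Assumed checkpoint id; substitute the converted Cosmos weights you actually use.
pipe = CosmosTextToWorldPipeline.from_pretrained(
    "nvidia/Cosmos-1.0-Diffusion-7B-Text2World", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# The safety checker (cosmos guardrail) is enforced by the pipeline per the commit above.
prompt = "A robot arm carefully assembles a wooden toy on a cluttered workbench."
video = pipe(prompt=prompt).frames[0]
export_to_video(video, "cosmos_output.mp4", fps=30)
```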
- 06 May, 2025 1 commit
Aryan authored
* add transformer * add pipeline * fixes * make fix-copies * update * add flux mu shift * update example snippet * debug * cleanup * batch_size=1 optimization * add pipeline test * fix for model cpu offloading' * add last_image support; credits: https://github.com/lllyasviel/FramePack/pull/167 * update example with flf2v * update penguin url * fix test * address review comment: https://github.com/huggingface/diffusers/pull/11428#discussion_r2071032371 * address review comment: https://github.com/huggingface/diffusers/pull/11428#discussion_r2071087689 * Update src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video_framepack.py --------- Co-authored-by:
Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
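A hedged sketch of the first/last-frame-to-video (FLF2V) path enabled by the new `last_image` argument; the class name is inferred from the pipeline file path above, and the checkpoint id and URLs are placeholders.

```python
import torch
from diffusers import HunyuanVideoFramepackPipeline
from diffusers.utils import export_to_video, load_image

# Assumed checkpoint id for the converted FramePack weights.
pipe = HunyuanVideoFramepackPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-Framepack", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

first_frame = load_image("https://example.com/penguin_first.png")  # placeholder URLs
last_frame = load_image("https://example.com/penguin_last.png")

# Passing `last_image` switches the pipeline into the FLF2V mode added here.
video = pipe(
    image=first_frame,
    last_image=last_frame,
    prompt="A penguin waddling across an ice sheet",
    num_frames=91,
).frames[0]
export_to_video(video, "framepack_flf2v.mp4", fps=30)
```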
- 05 May, 2025 1 commit
Connector Switch authored
* implement tiled encode/decode * address review comments
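The log does not name the autoencoder this lands in, so the sketch below only illustrates the general tiled encode/decode API diffusers exposes on its VAEs, using AutoencoderKL as a stand-in.

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda")
vae.enable_tiling()  # encode/decode in overlapping tiles to bound peak memory on large inputs

image = torch.randn(1, 3, 1536, 1536, device="cuda", dtype=torch.float16)
latents = vae.encode(image).latent_dist.sample()  # tiled encode
recon = vae.decode(latents).sample                # tiled decode
```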
- 01 May, 2025 2 commits
co63oc authored
* Fix typos in docs and comments * Apply style fixes --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Sayak Paul authored
* [tests] Add torch.compile() test for WanTransformer3DModel * fix wan recompilation issues. * style --------- Co-authored-by: tongyu0924 <winnie920924@gmail.com>
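A sketch of what the new compile test exercises: compiling the Wan denoiser once and reusing it without recompilation. The checkpoint id is an assumption.

```python
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16  # assumed checkpoint
).to("cuda")

# With the recompilation fix above, a fullgraph compile should be reused across calls
# as long as input shapes stay the same.
pipe.transformer = torch.compile(pipe.transformer, fullgraph=True)

video = pipe(prompt="A cat surfing a small wave at sunset", num_frames=33).frames[0]
```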
- 30 Apr, 2025 1 commit
Aryan authored
update
- 24 Apr, 2025 1 commit
co63oc authored
- 22 Apr, 2025 3 commits
YiYi Xu authored
up
Aryan authored
update
Linoy Tsaban authored
* initial commit * initial commit * initial commit * initial commit * initial commit * initial commit * Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by:
Bagheera <59658056+bghira@users.noreply.github.com> * move prompt embeds, pooled embeds outside * Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by:
hlky <hlky@hlky.ac> * Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by:
hlky <hlky@hlky.ac> * fix import * fix import and tokenizer 4, text encoder 4 loading * te * prompt embeds * fix naming * shapes * initial commit to add HiDreamImageLoraLoaderMixin * fix init * add tests * loader * fix model input * add code example to readme * fix default max length of text encoders * prints * nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training * smol fix * unpatchify * unpatchify * fix validation * flip pred and loss * fix shift!!! * revert unpatchify changes (for now) * smol fix * Apply style fixes * workaround moe training * workaround moe training * remove prints * to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae) https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207 * refactor to align with HiDream refactor * refactor to align with HiDream refactor * refactor to align with HiDream refactor * add support for cpu offloading of text encoders * Apply style fixes * adjust lr and rank for train example * fix copies * Apply style fixes * update README * update README * update README * fix license * keep prompt2,3,4 as None in validation * remove reverse ode comment * Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * vae offload change * fix text encoder offloading * Apply style fixes * cleaner to_kwargs * fix module name in copied from * add requirements * fix offloading * fix offloading * fix offloading * update transformers version in reqs * try AutoTokenizer * try AutoTokenizer * Apply style fixes * empty commit * Delete tests/lora/test_lora_layers_hidream.py * change tokenizer_4 to load with AutoTokenizer as well * make text_encoder_four and tokenizer_four configurable * save model card * save model card * revert T5 * fix test * remove non diffusers lumina2 conversion --------- Co-authored-by:
Bagheera <59658056+bghira@users.noreply.github.com> Co-authored-by:
hlky <hlky@hlky.ac> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
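A hedged sketch of consuming a LoRA trained with `train_dreambooth_lora_hidream.py` via the new `HiDreamImageLoraLoaderMixin`; the base checkpoint and LoRA path are placeholders.

```python
import torch
from diffusers import HiDreamImagePipeline

pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Full", torch_dtype=torch.bfloat16  # assumed base checkpoint
).to("cuda")

# Load the adapter produced by the training script; path is a placeholder.
pipe.load_lora_weights("path/to/trained-hidream-lora")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=28).images[0]
image.save("hidream_dreambooth_lora.png")
```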
- 21 Apr, 2025 1 commit
OleehyO authored
[cogview4][feat] Support attention mechanism with variable-length support and batch packing (#11349) * [cogview4] Enhance attention mechanism with variable-length support and batch packing --------- Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
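Not the actual CogView4 implementation, just an illustration of the batch-packing idea the commit describes: variable-length sequences are concatenated into one row and a block-diagonal mask keeps attention within each sequence.

```python
import torch

def packed_attention_mask(seq_lens):
    """Block-diagonal boolean mask for sequences packed along one dimension."""
    total = sum(seq_lens)
    mask = torch.zeros(total, total, dtype=torch.bool)
    start = 0
    for n in seq_lens:
        mask[start:start + n, start:start + n] = True  # tokens attend only within their own sequence
        start += n
    return mask

mask = packed_attention_mask([7, 13, 5])            # three variable-length prompts packed together
scores = torch.randn(25, 25)
scores = scores.masked_fill(~mask, float("-inf"))   # forbid cross-sequence attention
probs = scores.softmax(dim=-1)
```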
- 19 Apr, 2025 1 commit
YiYi Xu authored
up
- 18 Apr, 2025 1 commit
YiYi Xu authored
* update transformer --------- Co-authored-by: Aryan <aryan@huggingface.co>
- 17 Apr, 2025 2 commits
Frank (Haofan) Wang authored
YiYi Xu authored
* add
- 15 Apr, 2025 3 commits
AstraliteHeart authored
* Update pe_selection_index_based_on_dim * Make pe_selection_index_based_on_dim work with torch.compile * Fix AuraFlowTransformer2DModel's docstring default values --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
hlky authored
Hameer Abbasi authored
* Add AuraFlowLoraLoaderMixin * Add comments, remove qkv fusion * Add Tests * Add AuraFlowLoraLoaderMixin to documentation * Add Suggested changes * Change attention_kwargs->joint_attention_kwargs * Rebasing derp. * fix * fix * Quality fixes. * make style * `make fix-copies` * `ruff check --fix` * Attept 1 to fix tests. * Attept 2 to fix tests. * Attept 3 to fix tests. * Address review comments. * Rebasing derp. * Get more tests passing by copying from Flux. Address review comments. * `joint_attention_kwargs`->`attention_kwargs` * Add `lora_scale` property for te LoRAs. * Make test better. * Remove useless property. * Skip TE-only tests for AuraFlow. * Support LoRA for non-CLIP TEs. * Restore LoRA tests. * Undo adding LoRA support for non-CLIP TEs. * Undo support for TE in AuraFlow LoRA. * `make fix-copies` * Sync with upstream changes. * Remove unneeded stuff. * Mirror `Lumina2`. * Skip for MPS. * Address review comments. * Remove duplicated code. * Remove unnecessary code. * Remove repeated docs. * Propagate attention. * Fix TE target modules. * MPS fix for LoRA tests. * Unrelated TE LoRA tests fix. * Fix AuraFlow LoRA tests by applying to the right denoiser layers. Co-authored-by:
AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com> * Apply style fixes * empty commit * Fix the repo consistency issues. * Remove unrelated changes. * Style. * Fix `test_lora_fuse_nan`. * fix quality issues. * `pytest.xfail` -> `ValueError`. * Add back `skip_mps`. * Apply style fixes * `make fix-copies` --------- Co-authored-by:
Warlord-K <warlordk28@gmail.com> Co-authored-by:
hlky <hlky@hlky.ac> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
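A sketch of loading a LoRA through the new `AuraFlowLoraLoaderMixin`; per the history above, only the transformer (denoiser) is targeted, not the text encoder. The LoRA path is a placeholder.

```python
import torch
from diffusers import AuraFlowPipeline

pipe = AuraFlowPipeline.from_pretrained("fal/AuraFlow", torch_dtype=torch.float16).to("cuda")

# LoRA layers are applied to the AuraFlow transformer only; text-encoder LoRA support
# was intentionally dropped in this PR.
pipe.load_lora_weights("path/to/auraflow-lora.safetensors")

image = pipe("an owl astronaut floating above a nebula").images[0]
```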
- 14 Apr, 2025 1 commit
hlky authored
- 13 Apr, 2025 3 commits
Ishan Modi authored
* added controlnet for sana transformer * improve code quality * addressed PR comments * bug fixes * added test cases * update * added dummy objects * addressed PR comments * update * Forcing update * add to docs * code quality * addressed PR comments * addressed PR comments * update * addressed PR comments * added proper styling * update * Revert "added proper styling" This reverts commit 344ee8a7014ada095b295034ef84341f03b0e359. * manually ordered * Apply suggestions from code review --------- Co-authored-by:Aryan <contact.aryanvs@gmail.com>
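A heavily hedged sketch of how the new Sana ControlNet might be wired up; the class names (`SanaControlNetModel`, `SanaControlNetPipeline`) are assumptions inferred from the commit description, and every checkpoint and URL is a placeholder.

```python
import torch
from diffusers import SanaControlNetModel, SanaControlNetPipeline  # assumed class names
from diffusers.utils import load_image

controlnet = SanaControlNetModel.from_pretrained("path/to/sana-controlnet", torch_dtype=torch.bfloat16)
pipe = SanaControlNetPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",  # assumed base checkpoint
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
).to("cuda")

control_image = load_image("https://example.com/edge_map.png")  # placeholder control input
image = pipe("a futuristic city at dusk", control_image=control_image).images[0]
```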
Tuna Tuncer authored
Aryan authored
* HiDream Image * update * -einops * py3.8 * fix -einops * mixins, offload_seq, option_components * docs * Apply style fixes * trigger tests * Apply suggestions from code review Co-authored-by:
Aryan <contact.aryanvs@gmail.com> * joint_attention_kwargs -> attention_kwargs, fixes * fast tests * -_init_weights * style tests * move reshape logic * update slice
😴 * supports_dduf * 🤷🏻‍♂️ * Update src/diffusers/models/transformers/transformer_hidream_image.py Co-authored-by: Aryan <contact.aryanvs@gmail.com> * address review comments * update tests * doc updates * update * Update src/diffusers/models/transformers/transformer_hidream_image.py * Apply style fixes --------- Co-authored-by:
hlky <hlky@hlky.ac> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 11 Apr, 2025 2 commits
hlky authored
* HiDream Image --------- Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by:
Aryan <contact.aryanvs@gmail.com> Co-authored-by:
Aryan <aryan@huggingface.co>
Tuna Tuncer authored
- 09 Apr, 2025 3 commits
Ilya Drobyshevskiy authored
Before this fix, if txt_ids was a 3D tensor, the line using txt_ids[:1] concatenated txt_ids along the batch dim. Now we first check that txt_ids is a 2D tensor (or take the first batch element) and then concatenate along the token dim.
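An illustrative reduction of the fix described above (not the exact diffusers code): batched 3D `txt_ids` are collapsed to 2D before concatenating with `img_ids`, so the concat happens along the token dimension rather than the batch dimension.

```python
import torch

txt_ids = torch.zeros(2, 77, 3)   # 3D (batched) text rope ids
img_ids = torch.zeros(4096, 3)    # 2D image rope ids

if txt_ids.ndim == 3:
    txt_ids = txt_ids[0]          # take the first batch element -> (77, 3)

ids = torch.cat((txt_ids, img_ids), dim=0)  # token-dim concat -> (4173, 3)
```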
Dhruv Nair authored
* update * update * update * update
hlky authored
* AutoModel * ... * lol * ... * add test * update * make fix-copies --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
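A sketch of the new `AutoModel` entry point: the concrete model class is resolved from the checkpoint config instead of being named explicitly. The checkpoint id is illustrative.

```python
import torch
from diffusers import AutoModel

transformer = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)
print(type(transformer).__name__)  # resolves to the concrete class, e.g. FluxTransformer2DModel
```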
- 08 Apr, 2025 1 commit
Sayak Paul authored
* implement record_stream for better performance. * fix * style. * merge #11097 * Update src/diffusers/hooks/group_offloading.py Co-authored-by:
Aryan <aryan@huggingface.co> * fixes * docstring. * remaining todos in low_cpu_mem_usage * tests * updates to docs. --------- Co-authored-by:
Aryan <aryan@huggingface.co>
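A hedged sketch of stream-based group offloading with the `record_stream` option added here; parameter names follow the group_offloading hook referenced above but should be treated as assumptions.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)

pipe.transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
    use_stream=True,
    record_stream=True,      # lets the allocator reclaim offloaded buffers sooner for better throughput
    low_cpu_mem_usage=True,
)

image = pipe("a knitted astronaut bear", num_inference_steps=28).images[0]
```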
- 05 Apr, 2025 1 commit
Mikko Tukiainen authored
* Add missing 'gradient_checkpointing = False' attr * Add (limited) tests for Mochi autoencoder * Apply style fixes * pass 'conv_cache' as arg instead of kwarg --------- Co-authored-by:github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 02 Apr, 2025 4 commits
Dhruv Nair authored
* update * update * update
hlky authored
hlky authored
* allow models to run with a user-provided dtype map instead of a single dtype * make style * Add warning, change `_` to `default` * make style * add test * handle shared tensors * remove warning --------- Co-authored-by:Sayak Paul <spsayakpaul@gmail.com>
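A sketch of the per-component dtype map: `torch_dtype` can now be a dict keyed by component name with a `default` fallback (the key renamed from `_` in this PR). The checkpoint id is illustrative.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype={"transformer": torch.bfloat16, "default": torch.float16},
)
print(pipe.transformer.dtype, pipe.text_encoder.dtype)  # bf16 denoiser, fp16 everything else
```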
Bruno Magalhaes authored
* rewrite memory count without implicitly using dimensions by @ic-synth * replace F.pad by built-in padding in Conv3D * in-place sums to reduce memory allocations * fixed trailing whitespace * file reformatted * in-place sums * simpler in-place expressions * removed in-place sum, may affect backward propagation logic * removed in-place sum, may affect backward propagation logic * removed in-place sum, may affect backward propagation logic * reverted change
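An isolated illustration of one optimization in the list above: letting `Conv3d` pad internally instead of materializing a padded copy with `F.pad` first. The layer shapes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 8, 9, 32, 32)  # (batch, channels, frames, height, width)

conv_nopad = nn.Conv3d(8, 8, kernel_size=3, padding=0)
conv_pad = nn.Conv3d(8, 8, kernel_size=3, padding=1)
conv_pad.load_state_dict(conv_nopad.state_dict())  # same weights, different padding setting

y_before = conv_nopad(F.pad(x, (1, 1, 1, 1, 1, 1)))  # allocates a padded intermediate tensor
y_after = conv_pad(x)                                 # built-in zero padding, no extra copy

assert torch.allclose(y_before, y_after, atol=1e-6)
```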
- 29 Mar, 2025 1 commit
hlky authored
- 25 Mar, 2025 1 commit
Junsong Chen authored
- 24 Mar, 2025 1 commit
Aryan authored
* update * update * update * add tests * update docs * raise value error * warning for true cfg and guidance scale * fix test
- 21 Mar, 2025 1 commit
hlky authored
* Don't use `torch_dtype` when `quantization_config` is set * up * djkajka * Apply suggestions from code review --------- Co-authored-by:Sayak Paul <spsayakpaul@gmail.com>
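A sketch of the behavior this fixes: when `quantization_config` is set, `torch_dtype` is not passed alongside it; the quantization backend's compute dtype is used instead. The checkpoint id is illustrative.

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,  # note: no torch_dtype here
)
```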