"docs/vscode:/vscode.git/clone" did not exist on "404e2776f095b1a1295b4637e6dcdef6ca62d0f6"
- 06 Jun, 2025 (1 commit)
Sayak Paul authored
* add a test for group offloading + compilation.
* tests
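A minimal sketch of what a combined group-offloading + compilation check can look like. The toy UNet2DModel config and the exact enable_group_offload() arguments are assumptions for illustration, not the repo's actual test:

```python
import torch
from diffusers import UNet2DModel

# Toy config so the sketch is cheap to run; not the configuration used in the repo's tests.
model = UNet2DModel(
    sample_size=32,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "UpBlock2D"),
    norm_num_groups=8,
)
model.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=1,
)
compiled = torch.compile(model)

sample = torch.randn(1, 3, 32, 32, device="cuda")
# error_on_recompile turns any recompilation on the second call into a hard failure.
with torch._dynamo.config.patch(error_on_recompile=True):
    for _ in range(2):
        _ = compiled(sample, timestep=1).sample
```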
- 02 Jun, 2025 (1 commit)
Sayak Paul authored
chore: rename lora model-level tests.
- 26 May, 2025 (2 commits)
Sayak Paul authored
* remove compile cuda docker.
* replace compile cuda docker path.
* better manage compilation cache.
* propagate similar to the pipeline tests.
* remove unneeded compile test.
* small.
* don't check for deleted files.
-
Ishan Modi authored
* update
* update
* update
* update
* addressed PR comments
* update
* addressed PR comments
* added tests
* addressed PR comments
* updates
* update
* addressed PR comments
* update
* fix style

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 15 May, 2025 (1 commit)
Sayak Paul authored
* add tests for combining layerwise upcasting and group offloading.
* feedback
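For context, combining the two memory features looks roughly like this. The checkpoint id is illustrative, and the argument names should be verified against your diffusers version:

```python
import torch
from diffusers import CogVideoXTransformer3DModel

model = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
)
# Layerwise upcasting: store weights in fp8, upcast to bf16 just-in-time per layer.
model.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)
# Group offloading: keep offloaded groups on CPU, onload them around each forward.
model.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
)
```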
- 14 May, 2025 (1 commit)
Seokhyeon Jeong authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 13 May, 2025 (1 commit)
Sayak Paul authored
* add tests for hidream transformer model.
* fix
* Update tests/models/transformers/test_models_transformer_hidream.py (Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>)

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 12 May, 2025 (1 commit)
Evan Han authored
* Update test_models_transformer_ltx.py
* Update test_models_transformer_ltx.py
* Update test_models_transformer_ltx.py
* Update test_models_transformer_ltx.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 11 May, 2025 (1 commit)
Sayak Paul authored
* start.
* add tests for framepack transformer model.
* merge conflicts.
* make to square.
* fixes
- 09 May, 2025 (1 commit)
Sayak Paul authored
* refactor hotswap tester.
* fix seeds.
* add to nightly ci.
* move comment.
* move to nightly
- 07 May, 2025 (1 commit)
Aryan authored
* begin transformer conversion
* refactor
* refactor
* refactor
* refactor
* refactor
* refactor
* update
* add conversion script
* add pipeline
* make fix-copies
* remove einops
* update docs
* gradient checkpointing
* add transformer test
* update
* debug
* remove prints
* match sigmas
* add vae pt. 1
* finish CV* vae
* update
* update
* update
* update
* update
* update
* make fix-copies
* update
* make fix-copies
* fix
* update
* update
* make fix-copies
* update
* update tests
* handle device and dtype for safety checker; required in latest diffusers
* remove enable_gqa and use repeat_interleave instead
* enforce safety checker; use dummy checker in fast tests
* add review suggestion for ONNX export (Co-Authored-By: Asfiya Baig <asfiyab@nvidia.com>)
* fix safety_checker issues when not passed explicitly. We could either do what's done in this commit, or update the Cosmos examples to explicitly pass the safety checker
* use cosmos guardrail package
* auto format docs
* update conversion script to support 14B models
* update name CosmosPipeline -> CosmosTextToWorldPipeline
* update docs
* fix docs
* fix group offload test failing for vae

Co-authored-by: Asfiya Baig <asfiyab@nvidia.com>
- 05 May, 2025 (1 commit)
Connector Switch authored
* implement tiled encode/decode
* address review comments
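Tiled encode/decode splits a large input into overlapping tiles so the autoencoder never materializes full-resolution activations. A rough usage sketch; the autoencoder class and checkpoint here are stand-ins, not necessarily the model this commit touched:

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
).to("cuda")
vae.enable_tiling()  # encode/decode now run tile-by-tile with blended seams

with torch.no_grad():
    image = torch.randn(1, 3, 2048, 2048, device="cuda", dtype=torch.float16)
    latents = vae.encode(image).latent_dist.sample()  # tiled encode
    recon = vae.decode(latents).sample                # tiled decode
```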
- 01 May, 2025 (1 commit)
Sayak Paul authored
* [tests] Add torch.compile() test for WanTransformer3DModel
* fix wan recompilation issues.
* style

Co-authored-by: tongyu0924 <winnie920924@gmail.com>
- 30 Apr, 2025 (2 commits)
Yao Matrix authored
* make autoencoders, controlnet_flux and wan_transformer3d_single_file pass on XPU
* Apply style fixes

Signed-off-by: Yao Matrix <matrix.yao@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
-
tongyu authored
* Update test_models_transformer_hunyuan_video.py
* update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 28 Apr, 2025 (3 commits)
Sayak Paul authored
fix import.
-
Yao Matrix authored
* enable test_layerwise_casting_memory cases on XPU
* fix style

Signed-off-by: Yao Matrix <matrix.yao@intel.com>
-
Sayak Paul authored
[tests] add tests to check for graph breaks, recompilation, cuda syncs in pipelines during torch.compile() (#11085)
* test for better torch.compile stuff.
* fixes
* recompilation and graph break.
* clear compilation cache.
* change to modeling level test.
* allow running compilation tests during nightlies.
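The checks such a test makes can be sketched with stock torch._dynamo tooling; this illustrates the idea, not the helper that landed in the repo:

```python
import torch

def assert_compile_friendly(model, *inputs):
    # 1) No graph breaks: torch._dynamo.explain reports where Dynamo bails out.
    explanation = torch._dynamo.explain(model)(*inputs)
    assert explanation.graph_break_count == 0, explanation.break_reasons

    # 2) No recompilation: a second same-shape call must reuse the first graph.
    compiled = torch.compile(model, fullgraph=True)
    with torch._dynamo.config.patch(error_on_recompile=True):
        compiled(*inputs)
        compiled(*inputs)
```

Unwanted CUDA synchronizations, the third thing the commit title mentions, can be surfaced separately with torch.cuda.set_sync_debug_mode("error"), which raises on any synchronizing call.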
- 09 Apr, 2025 (2 commits)
Dhruv Nair authored
* update
* update
* update
* update
-
hlky authored
* AutoModel
* ...
* lol
* ...
* add test
* update
* make fix-copies

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
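The AutoModel entry point added here resolves the concrete class from the checkpoint's config, so callers don't need to know whether a subfolder holds a UNet, a DiT, or a VAE. A quick sketch with an illustrative checkpoint id:

```python
import torch
from diffusers import AutoModel

unet = AutoModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    subfolder="unet",
    torch_dtype=torch.float16,
)
print(type(unet).__name__)  # resolved to UNet2DConditionModel from the config
```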
- 08 Apr, 2025 (2 commits)
Sayak Paul authored
* implement record_stream for better performance.
* fix
* style.
* merge #11097
* Update src/diffusers/hooks/group_offloading.py (Co-authored-by: Aryan <aryan@huggingface.co>)
* fixes
* docstring.
* remaining todos in low_cpu_mem_usage
* tests
* updates to docs.

Co-authored-by: Aryan <aryan@huggingface.co>
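In rough terms, the commit extends stream-based group offloading so tensors used on the transfer stream are registered with the caching allocator (torch.Tensor.record_stream) instead of forcing a synchronization. The argument names below follow the commit message but should be treated as assumptions:

```python
import torch
from diffusers import UNet2DConditionModel

model = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    subfolder="unet",
    torch_dtype=torch.float16,
)
model.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
    use_stream=True,      # prefetch the next group on a side CUDA stream
    record_stream=True,   # let the allocator track cross-stream tensor use
)
```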
-
Benjamin Bossan authored
* [WIP][LoRA] Implement hot-swapping of LoRA

This PR adds the possibility to hot-swap LoRA adapters. It is WIP.

Description: As of now, users can already load multiple LoRA adapters. They can offload existing adapters or they can unload them (i.e. delete them). However, they cannot "hotswap" adapters yet, i.e. substitute the weights of one LoRA adapter with the weights of another without the need to create a separate LoRA adapter. Generally, hot-swapping may not appear super useful, but when the model is compiled, it is necessary to prevent recompilation. See #9279 for more context.

Caveats: To hot-swap one LoRA adapter for another, the two adapters should target exactly the same layers and their "hyper-parameters" should be identical. For instance, the LoRA alpha has to be the same: given that we keep the alpha from the first adapter, the LoRA scaling would otherwise be incorrect for the second adapter. Theoretically, we could override the scaling dict with the alpha values derived from the second adapter's config, but changing the dict will trigger a guard for recompilation, defeating the main purpose of the feature. I also found that compilation flags can have an impact on whether this works or not, e.g. when passing "reduce-overhead" there will be errors of the type:

> input name: arg861_1. data pointer changed from 139647332027392 to 139647331054592

I don't know enough about compilation to determine whether this is problematic or not.

Current state: This is obviously WIP right now, to collect feedback and discuss which direction to take this. If this PR turns out to be useful, the hot-swapping functions will be added to PEFT itself and can be imported here (or there is a separate copy in diffusers to avoid the need for a minimum PEFT version to use this feature). Moreover, more tests need to be added to better cover this feature, although we don't necessarily need tests for the hot-swapping functionality itself, since those tests will be added to PEFT. Furthermore, as of now, this is only implemented for the unet; other pipeline components have yet to implement this feature. Finally, it should be properly documented. I would like to collect feedback on the current state of the PR before putting more time into finalizing it.

* Reviewer feedback
* Reviewer feedback, adjust test
* Fix, doc
* Make fix
* Fix for possible g++ error
* Add test for recompilation w/o hotswapping
* Make hotswap work. Requires https://github.com/huggingface/peft/pull/2366. More changes to make hotswapping work; together with the mentioned PEFT PR, the tests pass for me locally. List of changes:
  - docstring for hotswap
  - remove code copied from PEFT, import from PEFT now
  - adjustments to PeftAdapterMixin.load_lora_adapter (unfortunately, some state dict renaming was necessary, LMK if there is a better solution)
  - adjustments to UNet2DConditionLoadersMixin._process_lora: LMK if this is even necessary or not, I'm unsure what the overall relationship is between this and PeftAdapterMixin.load_lora_adapter
  - also in UNet2DConditionLoadersMixin._process_lora, I saw that there is no LoRA unloading when loading the adapter fails, so I added it there (in line with what happens in PeftAdapterMixin.load_lora_adapter)
  - rewritten tests to avoid shelling out, make the test more precise by making sure that the outputs align, parametrize it
  - also checked the pipeline code mentioned in this comment: https://github.com/huggingface/diffusers/pull/9453#issuecomment-2418508871; when running this inside the torch._dynamo.config.patch(error_on_recompile=True) context, there is no error, so I think hotswapping is now working with pipelines.
* Address reviewer feedback: revert deprecated method; fix PEFT doc link to main; don't use private function; clarify magic numbers; add pipeline test. Moreover: extend docstrings; extend existing test for outputs != 0; extend existing test for wrong adapter name
* Change order of test decorators: parameterized.expand seems to ignore skip decorators if added in last place (i.e. innermost decorator).
* Split model and pipeline tests; also increase test coverage by targeting conv2d layers (support of which was added recently on the PEFT PR)
* Reviewer feedback: move decorators to the test classes instead of having them on each test method
* Apply suggestions from code review (Co-authored-by: hlky <hlky@hlky.ac>)
* Reviewer feedback: version check, TODO comment
* Add enable_lora_hotswap method
* Reviewer feedback: check _lora_loadable_modules
* Revert changes in unet.py
* Add possibility to ignore enabled at wrong time
* Fix docstrings
* Log possible PEFT error, test
* Raise helpful error if hotswap not supported, i.e. for the text encoder
* Formatting
* More linter
* More ruff
* Doc-builder complaint
* Update docstring: mention no text encoder support yet; make it clear that LoRA is meant; mention that the same adapter name should be passed
* Fix error in docstring
* Update more methods (SDXL, SD3, Flux) with hotswap argument; no changes were made to load_lora_into_transformer
* Add hotswap argument to load_lora_into_transformer for SD3 and Flux; use shorter docstring for brevity
* Extend docstrings
* Add version guards to tests
* Formatting
* Fix LoRA loading call to add prefix=None. See: https://github.com/huggingface/diffusers/pull/10187#issuecomment-2717571064
* Run make fix-copies
* Add hot swap documentation to the docs
* Apply suggestions from code review (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
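Putting the pieces of this PR together, hot-swapping under torch.compile looks roughly like the following. The LoRA repo ids are placeholders, and the exact signatures should be checked against the hotswap docs for your version:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# If the adapters differ in rank, pad LoRA buffers up-front so compile guards hold.
pipe.enable_lora_hotswap(target_rank=64)

pipe.load_lora_weights("user/lora-A", adapter_name="default")  # placeholder repo id
pipe.unet = torch.compile(pipe.unet)
_ = pipe("a prompt", num_inference_steps=2)

# Same adapter name + hotswap=True: weights are swapped in place, no recompilation.
pipe.load_lora_weights("user/lora-B", adapter_name="default", hotswap=True)
_ = pipe("a prompt", num_inference_steps=2)
```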
- 05 Apr, 2025 (1 commit)
Mikko Tukiainen authored
* Add missing 'gradient_checkpointing = False' attr
* Add (limited) tests for Mochi autoencoder
* Apply style fixes
* pass 'conv_cache' as arg instead of kwarg

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 24 Mar, 2025 (1 commit)
Aryan authored
* update
* update
* update
* add tests
* update docs
* raise value error
* warning for true cfg and guidance scale
* fix test
- 20 Mar, 2025 (1 commit)
Fanli Lin authored
* enable bnb on xpu
* add 2 more cases
* add missing change
* add missing change
* add one more
* enable cuda only tests on xpu
* enable big gpu cases
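As a point of reference, a bitsandbytes-quantized load in diffusers looks like the following; the point of the commit is that such cases now also run on XPU rather than being hard-gated to CUDA. The model id is illustrative:

```python
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel

quant_config = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
)
model = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```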
- 07 Mar, 2025 (1 commit)
Aryan authored
* update
* update
* update
* add tests
* update
* add model tests
* update docs
* update
* update example
* fix defaults
* update
- 04 Mar, 2025 (1 commit)
Fanli Lin authored
* initial commit
* fix empty cache
* fix one more
* fix style
* update device functions
* update
* update
* Update src/diffusers/utils/testing_utils.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/utils/testing_utils.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/utils/testing_utils.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update tests/pipelines/controlnet/test_controlnet.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/utils/testing_utils.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/utils/testing_utils.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update tests/pipelines/controlnet/test_controlnet.py (Co-authored-by: hlky <hlky@hlky.ac>)
* with gc.collect
* update
* make style
* check_torch_dependencies
* add mps empty cache
* add changes
* bug fix
* enable on xpu
* update more cases
* revert
* revert back
* Update test_stable_diffusion_xl.py
* Update tests/pipelines/stable_diffusion/test_stable_diffusion.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update tests/pipelines/stable_diffusion/test_stable_diffusion.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Apply suggestions from code review (Co-authored-by: hlky <hlky@hlky.ac>)
* add test marker

Co-authored-by: hlky <hlky@hlky.ac>
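The thrust of the change is replacing hard-coded torch.cuda calls in the test suite with backend-dispatching helpers. A sketch of the pattern; the helper names follow diffusers.utils.testing_utils at the time and should be verified against your version:

```python
import gc

from diffusers.utils.testing_utils import backend_empty_cache, torch_device

def flush():
    # torch_device is "cuda", "xpu", or "mps" depending on the machine;
    # backend_empty_cache dispatches to the matching empty_cache implementation.
    gc.collect()
    backend_empty_cache(torch_device)
```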
- 03 Mar, 2025 (1 commit)
Bubbliiiing authored
* Update EasyAnimate V5.1
* Add docs && add tests && fix comment problems in transformer3d and vae
* delete comments and remove useless import
* delete process
* Update EXAMPLE_DOC_STRING
* rename transformer file
* make fix-copies
* make style
* refactor pt. 1
* update toctree.yml
* add model tests
* Update layer_norm for norm_added_q and norm_added_k in Attention
* Fix processor problem
* refactor vae
* Fix problem in comments
* refactor tiling; remove einops dependency
* fix docs path
* make fix-copies
* Update src/diffusers/pipelines/easyanimate/pipeline_easyanimate_control.py
* update _toctree.yml
* fix test
* update
* update
* update
* make fix-copies
* fix tests

Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 02 Mar, 2025 (1 commit)
YiYi Xu authored
* Add wanx pipeline, model and example
* wanx_merged_v1
* change WanX into Wan
* fix i2v fp32 oom error (Link: https://code.alibaba-inc.com/open_wanx2/diffusers/codereview/20607813)
* support t2v load fp32 ckpt
* add example
* final merge v1
* Update autoencoder_kl_wan.py
* up
* update middle, test up_block
* up up
* one less nn.sequential
* up more
* up
* more
* [refactor] [wip] Wan transformer/pipeline (#10926)
* update
* update
* refactor rope
* refactor pipeline
* make fix-copies
* add transformer test
* update
* update
* make style
* update tests
* tests
* conversion script
* conversion script
* update
* docs
* remove unused code
* fix _toctree.yml
* update dtype
* fix test
* fix tests: scale
* up
* more
* Apply suggestions from code review
* Apply suggestions from code review
* style
* Update scripts/convert_wan_to_diffusers.py
* update docs
* fix

Co-authored-by: Yitong Huang <huangyitong.hyt@alibaba-inc.com>
Co-authored-by: 亚森 <wangjiayu.wjy@alibaba-inc.com>
Co-authored-by: Aryan <aryan@huggingface.co>
- 27 Feb, 2025 (1 commit)
Dhruv Nair authored
* update
* update
* update
* update
* update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 21 Feb, 2025 (1 commit)
Aryan authored
* update
* make fix-copies
* update
* tests
* update
* update
* add co-author (Co-Authored-By: Langdx <82783347+Langdx@users.noreply.github.com>)
* add co-author (Co-Authored-By: howe <howezhang2018@gmail.com>)
* update

Co-authored-by: Langdx <82783347+Langdx@users.noreply.github.com>
Co-authored-by: howe <howezhang2018@gmail.com>
- 19 Feb, 2025 (1 commit)
Marc Sun authored
* first draft model loading refactor
* revert name change
* fix bnb
* revert name
* fix dduf
* fix hunyuan
* style
* Update src/diffusers/models/model_loading_utils.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* suggestions from reviews
* Update src/diffusers/models/modeling_utils.py (Co-authored-by: YiYi Xu <yixu310@gmail.com>)
* remove safetensors check
* fix default value
* more fix from suggestions
* revert logic for single file
* style
* typing + fix couple of issues
* improve speed
* Update src/diffusers/models/modeling_utils.py (Co-authored-by: Aryan <aryan@huggingface.co>)
* fp8 dtype
* add tests
* rename resolved_archive_file to resolved_model_file
* format
* map_location default cpu
* add utility function
* switch to smaller model + test inference
* Apply suggestions from code review (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* rm comment
* add log
* Apply suggestions from code review (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* add decorator
* cosine sim instead
* fix use_keep_in_fp32_modules
* comm

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
- 15 Feb, 2025 (2 commits)
Yuxuan Zhang authored
* init
* encode with glm
* draft schedule
* feat(scheduler): Add CogView scheduler implementation
* feat(embeddings): add CogView 2D rotary positional embedding
* 1
* Update pipeline_cogview4.py
* fix the timestep init and sigma
* update latent
* draft patch (not working)
* fix
* [WIP][cogview4]: implement initial CogView4 pipeline. Implement the basic CogView4 pipeline structure with the following changes: add CogView4 pipeline implementation; implement DDIM scheduler for CogView4; add CogView3Plus transformer architecture; update embedding models. Current limitations: CFG implementation uses padding for sequence length alignment; need to verify transformer inference alignment with Megatron. TODO: consider separate forward passes for condition/uncondition instead of the padding approach.
* [WIP][cogview4][refactor]: Split condition/uncondition forward pass in CogView4 pipeline, to match the original implementation. The noise prediction is now done separately for each case before combining them for guidance. However, the results still need improvement; this is a work in progress as the generated images are not yet matching expected quality.
* use with -2 hidden state
* remove text_projector
* 1
* [WIP] Add tensor-reload to align input from transformer block
* [WIP] for older glm
* use with cogview4 transformers forward twice of u and uc
* Update convert_cogview4_to_diffusers.py
* remove this
* use main example
* change back
* reset
* setback
* back
* back 4
* Fix qkv conversion logic for CogView4 to Diffusers format
* back5
* revert to sat to cogview4 version
* update a new convert from megatron
* [WIP][cogview4]: implement CogView4 attention processor. Add CogView4AttnProcessor class for implementing scaled dot-product attention with rotary embeddings for the CogVideoX model. This processor concatenates encoder and hidden states, applies QKV projections and RoPE, but does not include spatial normalization. TODO: fix incorrect QKV projection weights; resolve ~25% error in RoPE implementation compared to Megatron.
* [cogview4] implement CogView4 transformer block, following the Megatron architecture: add multi-modulate and multi-gate mechanisms for adaptive layer normalization; implement dual-stream attention with encoder-decoder structure; add feed-forward network with GELU activation; support rotary position embeddings for image tokens. The implementation follows the original CogView4 architecture while adapting it to work within the diffusers framework.
* with new attn
* [bugfix] fix dimension mismatch in CogView4 attention
* [cogview4][WIP]: update final normalization in CogView4 transformer. Refactored the final normalization layer to use separate layernorm and AdaLN operations instead of a combined AdaLayerNormContinuous. This matches the original implementation but needs validation against the reference implementation.
* 1
* put back
* Update transformer_cogview4.py
* change time_shift
* Update pipeline_cogview4.py
* change timesteps
* fix
* change text_encoder_id
* [cogview4][rope] align RoPE implementation with Megatron: implement an apply_rope method in the attention processor to match Megatron's implementation; update position embeddings to ensure compatibility with Megatron-style rotary embeddings; ensure consistent rotary position encoding across attention layers. This improves compatibility with Megatron-based models and better aligns with the original implementation's positional encoding approach.
* [cogview4][bugfix] apply silu activation to time embeddings in CogView4, before splitting into conditional and unconditional parts in CogView4Transformer2DModel. This matches the original implementation and helps ensure correct time conditioning behavior.
* [cogview4][chore] clean up pipeline code: remove commented-out code and debug statements, remove the unused retrieve_timesteps function, and clean up formatting and documentation without changing functionality.
* [cogview4][scheduler] Implement CogView4 scheduler and pipeline
* now it works
* add timestep
* batch
* change convert script
* refactor pt. 1; make style
* refactor pt. 2
* refactor pt. 3
* add tests
* make fix-copies
* update toctree.yml
* use flow match scheduler instead of custom
* remove scheduling_cogview.py
* add tiktoken to test dependencies
* Update src/diffusers/models/embeddings.py (Co-authored-by: YiYi Xu <yixu310@gmail.com>)
* apply suggestions from review
* use diffusers apply_rotary_emb
* update flow match scheduler to accept timesteps
* fix comment
* apply review suggestions
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py (Co-authored-by: YiYi Xu <yixu310@gmail.com>)

Co-authored-by: 三洋三洋 <1258009915@qq.com>
Co-authored-by: OleehyO <leehy0357@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
YiYi Xu authored
* up
- 14 Feb, 2025 (1 commit)
Aryan authored
* update
* fix
* non_blocking; handle parameters and buffers
* update
* Group offloading with cuda stream prefetching (#10516)
* cuda stream prefetch
* remove breakpoints
* update
* copy model hook implementation from pab
* update; ~very workaround-based implementation but it seems to work as expected; needs cleanup and rewrite
* more workarounds to make it actually work
* cleanup
* rewrite
* update
* make sure to sync the current stream before overwriting with pinned params; not doing so will lead to erroneous computations on the GPU and cause bad results
* better check
* update
* remove hook implementation to not deal with merge conflict
* re-add hook changes
* why use more memory when less memory do trick
* why still use slightly more memory when less memory do trick
* optimise
* add model tests
* add pipeline tests
* update docs
* add layernorm and groupnorm
* address review comments
* improve tests; add docs
* improve docs
* Apply suggestions from code review (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* apply suggestions from code review
* update tests
* apply suggestions from review
* enable_group_offloading -> enable_group_offload for naming consistency
* raise errors if multiple offloading strategies used; add relevant tests
* handle .to() when group offload applied
* refactor some repeated code
* remove unintentional change from merge conflict
* handle .cuda()

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
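The feature's entry point after the rename is enable_group_offload on any diffusers model. A usage sketch; the checkpoint and group size are illustrative:

```python
import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
)
pipe.transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",   # or "leaf_level" for finer granularity
    num_blocks_per_group=2,
    use_stream=True,              # overlap H2D copies with compute (the #10516 part)
)
# Components that are not group-offloaded still need to be placed manually.
pipe.vae.to("cuda")
pipe.text_encoder.to("cuda")

video = pipe("a prompt", num_inference_steps=2).frames[0]
```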
- 11 Feb, 2025 (4 commits)
Le Zhuo authored
* Add support for lumina2

Co-authored-by: csuhan <hanjiaming@whu.edu.cn>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: hlky <hlky@hlky.ac>
-
Shitao Xiao authored
* OmniGen model.py
* update OmniGenTransformerModel
* omnigen pipeline
* omnigen pipeline
* update omnigen_pipeline
* test case for omnigen
* update omnigenpipeline
* update docs
* update docs
* offload_transformer
* enable_transformer_block_cpu_offload
* update docs
* reformat
* reformat
* reformat
* update docs
* update docs
* make style
* make style
* Update docs/source/en/api/models/omnigen_transformer.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* update docs
* revert changes to examples/
* update OmniGen2DModel
* make style
* update test cases
* Update docs/source/en/api/pipelines/omnigen.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* update docs
* typo
* Update src/diffusers/models/embeddings.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/models/attention.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/models/transformers/transformer_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/models/transformers/transformer_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/models/transformers/transformer_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update tests/pipelines/omnigen/test_pipeline_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update tests/pipelines/omnigen/test_pipeline_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <hlky@hlky.ac>)
* consistent attention processor
* update
* update
* check_inputs
* make style
* update testpipeline
* update testpipeline

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Aryan <aryan@huggingface.co>
-
Mathias Parger authored
* speedup causal mask generation
* fixing hunyuan attn mask test case
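An illustration of the kind of speedup meant here (not the repo's actual code): vectorizing mask construction instead of filling it element-by-element in Python.

```python
import torch

def causal_mask_slow(n: int) -> torch.Tensor:
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(n):           # quadratic Python-level loop
        for j in range(i + 1):
            mask[i, j] = True
    return mask

def causal_mask_fast(n: int) -> torch.Tensor:
    return torch.ones(n, n, dtype=torch.bool).tril()  # single vectorized op

assert torch.equal(causal_mask_slow(128), causal_mask_fast(128))
```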
-
Sayak Paul authored
* add a test to check if we can train with layerwise casting.
* updates
* updates
* style
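A sketch of the kind of check such a test performs, with a made-up tiny config: enable layerwise casting, run one optimization step, and confirm gradients flow despite the fp8 storage dtype (requires a PyTorch build with float8 dtypes):

```python
import torch
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=16,
    block_out_channels=(16, 32),
    down_block_types=("DownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "UpBlock2D"),
    norm_num_groups=8,
)
model.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.float32
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

sample = torch.randn(1, 3, 16, 16)
loss = model(sample, timestep=0).sample.float().pow(2).mean()
loss.backward()
assert any(p.grad is not None for p in model.parameters())
optimizer.step()
```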
- 28 Jan, 2025 (1 commit)
Sayak Paul authored
* conditionally check if compute capability is met.
* log info.
* fix condition.
* updates
* updates
* updates
* updates
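The gating pattern itself is a one-liner on top of torch.cuda.get_device_capability(); for example, fp8- or bf16-dependent tests only make sense on compute capability >= (8, 0):

```python
import torch

def meets_compute_capability(major: int, minor: int = 0) -> bool:
    if not torch.cuda.is_available():
        return False
    return torch.cuda.get_device_capability() >= (major, minor)

if not meets_compute_capability(8):
    # In the test suite this would be a pytest skip rather than a print.
    print("skipping: GPU compute capability below 8.0")
```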