- 03 Dec, 2024 1 commit
-
-
Emmanuel Benazera authored
* fix: missing AutoencoderKL lora adapter * fix --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 02 Dec, 2024 2 commits
-
-
Pedro Cuenca authored
* Workaround for upscale with large output tensors. Fixes #10040. * Fix scale when output_size is given * Style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
YiYi Xu authored
* add
-
- 29 Nov, 2024 1 commit
-
-
Sayak Paul authored
compute fourier features in FP32.
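For context, the pattern here is to run the periodic projections in float32 even when the surrounding model runs in fp16/bf16, then cast back. A minimal sketch of that pattern (not the actual diffusers implementation; the function name and feature layout are illustrative only):

```python
import torch

def fourier_features(x: torch.Tensor, num_freqs: int = 6) -> torch.Tensor:
    # Illustrative only: compute sin/cos features in float32 so that
    # half-precision inputs do not lose accuracy in the periodic terms,
    # then cast the result back to the caller's dtype.
    orig_dtype = x.dtype
    x = x.float()
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device, dtype=torch.float32)
    angles = x.unsqueeze(-1) * freqs                       # (..., num_freqs)
    feats = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return feats.to(orig_dtype)
```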
-
- 27 Nov, 2024 1 commit
-
-
YiYi Xu authored
* add model/pipeline Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 23 Nov, 2024 1 commit
-
-
Aryan authored
* update --------- Co-authored-by: yiyixuxu <yixu310@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 20 Nov, 2024 3 commits
-
-
YiYi Xu authored
* fix
-
linjiapro authored
* improve control net index --------- Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Sayak Paul authored
* feat: add lora support to Mochi-1.
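With LoRA support in place, a fine-tuned adapter can be loaded into the pipeline in the usual way. A minimal sketch, assuming a locally saved Mochi LoRA checkpoint (the path and adapter name are placeholders):

```python
import torch
from diffusers import MochiPipeline

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16)
# Placeholder path: any Mochi LoRA produced by a compatible training setup.
pipe.load_lora_weights("path/to/mochi-lora", adapter_name="custom-style")
pipe.enable_model_cpu_offload()
```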
-
- 19 Nov, 2024 1 commit
-
-
Bagheera authored
* add skip_layers argument to SD3 transformer model class * add unit test for skip_layers in stable diffusion 3 * sd3: pipeline should support skip layer guidance * up --------- Co-authored-by: bghira <bghira@users.github.com> Co-authored-by: yiyixuxu <yixu310@gmail.com>
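Skip-layer guidance runs an extra denoising pass with some transformer blocks skipped and blends it into the guidance signal. A hedged usage sketch: the pipeline argument name (`skip_guidance_layers`), the layer indices, and the model id are assumptions based on this commit message, not verified API.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    "a red panda astronaut, studio lighting",
    num_inference_steps=28,
    guidance_scale=5.0,
    skip_guidance_layers=[7, 8, 9],  # assumed argument name; blocks skipped in the extra guidance pass
).images[0]
image.save("slg.png")
```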
-
- 18 Nov, 2024 2 commits
-
-
Yuxuan.Zhang authored
* CogVideoX1_1PatchEmbed test * 1360 * 768 * refactor * make style * update docs * add modeling tests for cogvideox 1.5 * update * make fix-copies * add ofs embed(for convert) * add ofs embed(for convert) * more resolution for cogvideox1.5-5b-i2v * use even number of latent frames only * update pipeline implementations * make style * set patch_size_t as None by default * #skip frames 0 * refactor * make style * update docs * fix ofs_embed * update docs * invert_scale_latents * update * fix * Update docs/source/en/api/pipelines/cogvideox.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/en/api/pipelines/cogvideox.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/en/api/pipelines/cogvideox.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/en/api/pipelines/cogvideox.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/transformers/cogvideox_transformer_3d.py * update conversion script * remove copied from * fix test * Update docs/source/en/api/pipelines/cogvideox.md * Update docs/source/en/api/pipelines/cogvideox.md * Update docs/source/en/api/pipelines/cogvideox.md * Update docs/source/en/api/pipelines/cogvideox.md --------- Co-authored-by:
Aryan <aryan@huggingface.co> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com>
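For reference, image-to-video generation with the 1.5 checkpoints follows the existing CogVideoX API. A minimal sketch, where the Hub model id, frame count, and input image URL are assumptions for illustration:

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V",  # assumed Hub id for the 1.5 I2V checkpoint
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = load_image("https://example.com/first_frame.png")  # placeholder input frame
frames = pipe(
    image=image,
    prompt="a sailboat drifting across a calm mountain lake",
    num_frames=81,  # the 1.5 checkpoints expect an even number of latent frames
).frames[0]
export_to_video(frames, "cogvideox15.mp4", fps=16)
```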
-
ちくわぶ authored
Add all AttnProcessor classes to the `AttentionProcessor` type
-
- 09 Nov, 2024 1 commit
-
-
Eliseu Silva authored
* Feature: IP-Adapter xFormers attention processor. Fixes loading of an incorrect attention processor when xFormers attention is enabled after setting the IP-Adapter scale. Issues: #8863 #8872
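A usage sketch of the scenario this fixes: configuring the IP-Adapter first and enabling xFormers afterwards should now keep the correct processors (model and adapter ids are the commonly used ones, assumed here):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)
# Previously this call could swap in the wrong attention processors when done
# after IP-Adapter setup; the order should no longer matter.
pipe.enable_xformers_memory_efficient_attention()
```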
-
- 08 Nov, 2024 1 commit
-
-
Michael Tkachuk authored
* refactored
-
- 07 Nov, 2024 1 commit
-
-
Sayak Paul authored
* move vae flax module. * controlnet module. * prepare for PR. * revert a commit * gracefully deprecate controlnet deps. * fix * fix doc path * fix-copies * fix path * style * style * conflicts * fix * fix-copies * sparsectrl. * updates * fix * updates * updates * updates * fix --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 05 Nov, 2024 1 commit
-
-
Aryan authored
* update * udpate * update transformer * make style * fix * add conversion script * update * fix * update * fix * update * fixes * make style * update * update * update * init * update * update * add * up * up * up * update * mochi transformer * remove original implementation * make style * update inits * update conversion script * docs * Update src/diffusers/pipelines/mochi/pipeline_mochi.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * Update src/diffusers/pipelines/mochi/pipeline_mochi.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * fix docs * pipeline fixes * make style * invert sigmas in scheduler; fix pipeline * fix pipeline num_frames * flip proj and gate in swiglu * make style * fix * make style * fix tests * latent mean and std fix * update * cherry-pick 1069d210e1b9e84a366cdc7a13965626ea258178 * remove additional sigma already handled by flow match scheduler * fix * remove hardcoded value * replace conv1x1 with linear * Update src/diffusers/pipelines/mochi/pipeline_mochi.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * framewise decoding and conv_cache * make style * Apply suggestions from code review * mochi vae encoder changes * rebase correctly * Update scripts/convert_mochi_to_diffusers.py * fix tests * fixes * make style * update * make style * update * add framewise and tiled encoding * make style * make original vae implementation behaviour the default; note: framewise encoding does not work * remove framewise encoding implementation due to presence of attn layers * fight test 1 * fight test 2 --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by:
yiyixuxu <yixu310@gmail.com>
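End-to-end usage of the new pipeline looks roughly like the sketch below; tiled VAE decoding keeps memory bounded for longer clips. The prompt and settings are illustrative.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()  # decode the latent video in tiles to limit peak memory

frames = pipe("a close-up of a chameleon slowly changing color").frames[0]
export_to_video(frames, "mochi.mp4", fps=30)
```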
-
- 01 Nov, 2024 1 commit
-
-
Leo Jiang authored
* NPU implementation for FLUX --------- Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com>
-
- 30 Oct, 2024 1 commit
-
-
Aryan authored
fix
-
- 29 Oct, 2024 1 commit
-
-
Aryan authored
* update * refactor transformer part 1 * refactor part 2 * refactor part 3 * make style * refactor part 4; modeling tests * make style * refactor part 5 * refactor part 6 * gradient checkpointing * pipeline tests (broken atm) * update * add coauthor Co-Authored-By:
Huan Yang <hyang@fastmail.com> * refactor part 7 * add docs * make style * add coauthor Co-Authored-By:
YiYi Xu <yixu310@gmail.com> * make fix-copies * undo unrelated change * revert changes to embeddings, normalization, transformer * refactor part 8 * make style * refactor part 9 * make style * fix * apply suggestions from review * Apply suggestions from code review Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * update example * remove attention mask for self-attention * update * copied from * update * update --------- Co-authored-by:
Huan Yang <hyang@fastmail.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
- 22 Oct, 2024 1 commit
-
-
Sayak Paul authored
* bnb follow-ups. * add a warning when dtypes mismatch. * fix-copies * clear cache. * check_if_quantized_param * add a check on shape. * updates * docs * improve readability. * resources. * fix
-
- 21 Oct, 2024 2 commits
-
-
YiYi Xu authored
* update some docs and tests! --------- Co-authored-by: Aryan <contact.aryanvs@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Aryan <aryan@huggingface.co> Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
-
Sayak Paul authored
* quantization config. * fix-copies * fix * modules_to_not_convert * add bitsandbytes utilities. * make progress. * fixes * quality * up * up rotary embedding refactor 2: update comments, fix dtype for use_real=False (#9312) fix notes and dtype up up * minor * up * up * fix * provide credits where due. * make configurations work. * fixes * fix * update_missing_keys * fix * fix * make it work. * fix * provide credits to transformers. * empty commit * handle to() better. * tests * change to bnb from bitsandbytes * fix tests fix slow quality tests SD3 remark fix complete int4 tests add a readme to the test files. add model cpu offload tests warning test * better safeguard. * change merging status * courtesy to transformers. * move upper. * better * make the unused kwargs warning friendlier. * harmonize changes with https://github.com/huggingface/transformers/pull/33122 * style * trainin tests * feedback part i. * Add Flux inpainting and Flux Img2Img (#9135) --------- Co-authored-by:
yiyixuxu <yixu310@gmail.com> Update `UNet2DConditionModel`'s error messages (#9230) * refactor [CI] Update Single file Nightly Tests (#9357) * update * update feedback. improve README for flux dreambooth lora (#9290) * improve readme * improve readme * improve readme * improve readme fix one uncaught deprecation warning for accessing vae_latent_channels in VaeImagePreprocessor (#9372) deprecation warning vae_latent_channels add mixed int8 tests and more tests to nf4. [core] Freenoise memory improvements (#9262) * update * implement prompt interpolation * make style * resnet memory optimizations * more memory optimizations; todo: refactor * update * update animatediff controlnet with latest changes * refactor chunked inference changes * remove print statements * update * chunk -> split * remove changes from incorrect conflict resolution * remove changes from incorrect conflict resolution * add explanation of SplitInferenceModule * update docs * Revert "update docs" This reverts commit c55a50a271b2cefa8fe340a4f2a3ab9b9d374ec0. * update docstring for freenoise split inference * apply suggestions from review * add tests * apply suggestions from review quantization docs. docs. * Revert "Add Flux inpainting and Flux Img2Img (#9135)" This reverts commit 5799954dd4b3d753c7c1b8d722941350fe4f62ca. * tests * don * Apply suggestions from code review Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * contribution guide. * changes * empty * fix tests * harmonize with https://github.com/huggingface/transformers/pull/33546 . * numpy_cosine_distance * config_dict modification. * remove if config comment. * note for load_state_dict changes. * float8 check. * quantizer. * raise an error for non-True low_cpu_mem_usage values when using quant. * low_cpu_mem_usage shenanigans when using fp32 modules. * don't re-assign _pre_quantization_type. * make comments clear. * remove comments. * handle mixed types better when moving to cpu. * add tests to check if we're throwing warning rightly. * better check. * fix 8bit test_quality. * handle dtype more robustly. * better message when keep_in_fp32_modules. * handle dtype casting. * fix dtype checks in pipeline. * fix warning message. * Update src/diffusers/models/modeling_utils.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * mitigate the confusing cpu warning --------- Co-authored-by:
Vishnu V Jaddipal <95531133+Gothos@users.noreply.github.com> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com>
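The new quantization config mirrors the transformers-style bitsandbytes API. A sketch of 4-bit NF4 loading under those assumptions (the checkpoint id is illustrative; requires the bitsandbytes package):

```python
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",  # illustrative checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```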
-
- 16 Oct, 2024 1 commit
-
-
Aryan authored
* update * apply suggestions from review --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 15 Oct, 2024 3 commits
-
-
YiYi Xu authored
* Add support for XLabs ControlNets --------- Co-authored-by: Anzhella Pankratova <son0shad@gmail.com>
-
Ahnjj_DEV authored
* Fix some documentation in ./src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py * Update src/diffusers/models/adapter.py Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/adapter.py Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/adapter.py Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/adapter.py Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/adapter.py Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/adapter.py Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/adapter.py Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * run make style * make style & fix * make style : 0.1.5 version ruff * revert changes to examples --------- Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by:
Aryan <aryan@huggingface.co>
-
wony617 authored
* [docs] refactoring docstrings in `models/embeddings_flax.py` * Update src/diffusers/models/embeddings_flax.py * make style --------- Co-authored-by: Aryan <aryan@huggingface.co>
-
- 14 Oct, 2024 1 commit
-
-
Yuxuan.Zhang authored
* merge 9588 * max_shard_size="5GB" for colab running * conversion script updates; modeling test; refactor transformer * make fix-copies * Update convert_cogview3_to_diffusers.py * initial pipeline draft * make style * fight bugs
🐛 🪳 * add example * add tests; refactor * make style * make fix-copies * add co-author YiYi Xu <yixu310@gmail.com> * remove files * add docs * add co-author Co-Authored-By:YiYi Xu <yixu310@gmail.com> * fight docs * address reviews * make style * make model work * remove qkv fusion * remove qkv fusion tets * address review comments * fix make fix-copies error * remove None and TODO * for FP16(draft) * make style * remove dynamic cfg * remove pooled_projection_dim as a parameter * fix tests --------- Co-authored-by:
Aryan <aryan@huggingface.co> Co-authored-by:
YiYi Xu <yixu310@gmail.com>
-
- 08 Oct, 2024 1 commit
-
-
sanaka authored
Fix a bug where `joint_attention_kwargs` was not passed to the FLUX transformer's attention processors (#9517) * Update transformer_flux.py
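With the fix, extra kwargs passed at call time reach the transformer's attention processors, for example to scale LoRA layers or feed a custom processor. A hedged sketch (the `scale` key is the common use case; the model id is assumed):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

image = pipe(
    "a tiny robot watering a bonsai tree",
    joint_attention_kwargs={"scale": 0.8},  # now forwarded down to the attention processors
).images[0]
```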
-
- 02 Oct, 2024 2 commits
-
-
Xiangchendong authored
Co-authored-by: Aryan <aryan@huggingface.co>
-
Darren Hsu authored
* Support bfloat16 for Upsample2D * Add test and use is_torch_version * Resolve comments and add decorator * Simplify require_torch_version_greater_equal decorator * Run make style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
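The workaround follows a common pattern: on torch builds without a bfloat16 kernel for nearest-neighbour interpolation, upcast to float32 around `F.interpolate` and cast back. A standalone sketch of that pattern (not the exact diffusers code):

```python
import torch
import torch.nn.functional as F

def upsample_nearest(x: torch.Tensor, scale: float = 2.0) -> torch.Tensor:
    # Illustrative workaround: some torch versions cannot run nearest-neighbour
    # interpolation directly on bfloat16 tensors, so round-trip through float32.
    if x.dtype == torch.bfloat16:
        return F.interpolate(x.float(), scale_factor=scale, mode="nearest").to(torch.bfloat16)
    return F.interpolate(x, scale_factor=scale, mode="nearest")
```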
-
- 28 Sep, 2024 2 commits
-
-
Aryan authored
* remove conv cache from the layer and pass as arg instead * make style * yiyi's cleaner implementation Co-Authored-By: YiYi Xu <yixu310@gmail.com> * sayak's compiled implementation Co-Authored-By: Sayak Paul <spsayakpaul@gmail.com> --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Sayak Paul authored
* fix variant-identification. * fix variant * fix sharded variant checkpoint loading. * Apply suggestions from code review * fixes. * more fixes. * remove print. * fixes * fixes * comments * fixes * apply suggestions. * hub_utils.py * fix test * updates * fixes * fixes * Apply suggestions from code review Co-authored-by: YiYi Xu <yixu310@gmail.com> * updates. * remove patch file. --------- Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 26 Sep, 2024 2 commits
-
-
Aryan authored
* bugfix: precedence of operations should be slicing -> tiling * fix typo * fix another typo * deprecate current implementation of tiled_encode and use new impl * Update src/diffusers/models/autoencoders/autoencoder_kl.py Co-authored-by: YiYi Xu <yixu310@gmail.com> * Update src/diffusers/models/autoencoders/autoencoder_kl.py --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
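For context, both options can be enabled together on the VAE; the fix makes slicing (per-sample) apply before tiling (per-region) when they are combined. A small usage sketch with an illustrative checkpoint:

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="vae", torch_dtype=torch.float16
).to("cuda")
vae.enable_slicing()  # encode/decode one batch element at a time
vae.enable_tiling()   # additionally tile very large images to bound memory

images = torch.randn(4, 3, 1024, 1024, dtype=torch.float16, device="cuda")
latents = vae.encode(images).latent_dist.sample()
```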
-
YiYi Xu authored
* flux controlnet mode to take into account batch size * incorporate yiyixuxu's suggestions (cleaner logic) as well as clean up control mode handling for multi case * fix * fix use_guidance when controlnet is a multi and does not have config --------- Co-authored-by: Christopher Beckham <christopher.j.beckham@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
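The fix matters when a single `control_mode` is used with a batch larger than one, since the mode tensor now gets expanded to the batch size. A hedged sketch; the union ControlNet checkpoint id, the conditioning image URL, and the mapping of mode `0` to canny are assumptions:

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Union", torch_dtype=torch.bfloat16  # assumed union checkpoint
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

control_image = load_image("https://example.com/canny_edges.png")  # placeholder conditioning image
images = pipe(
    prompt="a neon-lit alley at night",
    control_image=control_image,
    control_mode=0,            # assumed: canny mode for the union ControlNet
    num_images_per_prompt=2,   # batch > 1 is the case this commit fixes
).images
```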
-
- 25 Sep, 2024 1 commit
-
-
YiYi Xu authored
* up * Update src/diffusers/models/modeling_utils.py Co-authored-by: Aryan <aryan@huggingface.co> --------- Co-authored-by: Aryan <aryan@huggingface.co>
-
- 23 Sep, 2024 1 commit
-
-
pibbo88 authored
Fix a bug in SD3 ControlNet training when gradient_checkpointing is enabled. Refer to issue #9496.
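Gradient checkpointing on the ControlNet is enabled the usual way during training; a minimal sketch, with an illustrative checkpoint id:

```python
from diffusers import SD3ControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")  # illustrative checkpoint
controlnet.enable_gradient_checkpointing()  # trade compute for memory; this fix makes it usable in SD3 training
controlnet.train()
```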
-
- 19 Sep, 2024 1 commit
-
-
Aryan authored
* cogvideox lora training draft * update * update * update * update * update * make fix-copies * update * update * apply suggestions from review * apply suggestions from reveiw * fix typo * Update examples/cogvideo/train_cogvideox_lora.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * fix lora alpha * use correct lora scaling for final test pipeline * Update examples/cogvideo/train_cogvideox_lora.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * apply suggestions from review; prodigy optimizer YiYi Xu <yixu310@gmail.com> * add tests * make style * add README * update * update * make style * fix * update * add test skeleton * revert lora utils changes * add cleaner modifications to lora testing utils * update lora tests * deepspeed stuff * add requirements.txt * deepspeed refactor * add lora stuff to img2vid pipeline to fix tests * fight tests * add co-authors Co-Authored-By:
Fu-Yun Wang <1697256461@qq.com> Co-Authored-By:
zR <2448370773@qq.com> * fight lora runner tests * import Dummy optim and scheduler only wheh required * update docs * add coauthors Co-Authored-By:
Fu-Yun Wang <1697256461@qq.com> * remove option to train text encoder Co-Authored-By:
bghira <bghira@users.github.com> * update tests * fight more tests * update * fix vid2vid * fix typo * remove lora tests; todo in follow-up PR * undo img2vid changes * remove text encoder related changes in lora loader mixin * Revert "remove text encoder related changes in lora loader mixin" This reverts commit f8a8444487db27859be812866db4e8cec7f25691. * update * round 1 of fighting tests * round 2 of fighting tests * fix copied from comment * fix typo in lora test * update styling Co-Authored-By:
YiYi Xu <yixu310@gmail.com> --------- Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
zR <2448370773@qq.com> Co-authored-by:
Fu-Yun Wang <1697256461@qq.com> Co-authored-by:
bghira <bghira@users.github.com>
-
- 16 Sep, 2024 1 commit
-
-
Yuxuan.Zhang authored
* draft Init * draft * vae encode image * make style * image latents preparation * remove image encoder from conversion script * fix minor bugs * make pipeline work * make style * remove debug prints * fix imports * update example * make fix-copies * add fast tests * fix import * update vae * update docs * update image link * apply suggestions from review * apply suggestions from review * add slow test * make use of learned positional embeddings * apply suggestions from review * doc change * Update convert_cogvideox_to_diffusers.py * make style * final changes * make style * fix tests --------- Co-authored-by:Aryan <aryan@huggingface.co>
-
- 11 Sep, 2024 2 commits
-
-
asfiyab-nvidia authored
Remove Squeeze op Signed-off-by: Asfiya Baig <asfiyab@nvidia.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Sayak Paul authored
fix some fast gpu tests.
-