- 03 Aug, 2024 1 commit
Frank (Haofan) Wang authored
- 02 Aug, 2024 2 commits
Sayak Paul authored
* fix tests * fix * float64 skip * remove sample_size. * remove * remove more * default_sample_size. * credit black forest for flux model. * skip * fix: tests * remove OriginalModelMixin * add transformer model test * add: transformer model tests
Sayak Paul authored
* feat: add pixart sigma pag. * inits. * fixes * fix * remove print. * copy paste methods to the pixart pag mixin * fix-copies * add documentation. * add tests. * remove correction file. * remove pag_applied_layers * empty
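For context, a minimal usage sketch of the PixArt Sigma PAG variant this commit adds. This is a hedged sketch: the checkpoint id and the exact PAG argument names (`enable_pag`, `pag_scale`) are assumptions based on the general PAG integration, not taken from this log.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Assumed checkpoint; the PAG pipeline variant is requested via enable_pag.
pipe = AutoPipelineForText2Image.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    torch_dtype=torch.float16,
    enable_pag=True,
).to("cuda")

image = pipe(
    "an astronaut riding a horse, highly detailed",
    guidance_scale=4.5,
    pag_scale=3.0,  # strength of perturbed-attention guidance (assumed parameter name)
).images[0]
image.save("pixart_sigma_pag.png")
```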
- 01 Aug, 2024 1 commit
Sayak Paul authored
add flux!
Signed-off-by: Adrien <adrien@huggingface.co>
Co-authored-by: Adrien <adrien.69740@gmail.com>
Co-authored-by: Anatoly Belikov <abelikov@singularitynet.io>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
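For context, a minimal sketch of running the newly added Flux pipeline. The checkpoint id and call arguments are assumptions based on the public Flux releases, not on this commit itself.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",  # assumed checkpoint
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helps fit the large transformer on a single GPU

image = pipe(
    "a tiny astronaut hatching from an egg on the moon",
    guidance_scale=0.0,        # the schnell variant is distilled and ignores CFG
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
image.save("flux_schnell.png")
```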
- 30 Jul, 2024 2 commits
Yoach Lacombe authored
* WIP modeling code and pipeline * add custom attention processor + custom activation + add to init * correct ProjectionModel forward * add stable audio to __init__ * add autoencoder and update pipeline and modeling code * add half Rope * add partial rotary v2 * add temporary modifications to scheduler * add EDM DPM Solver * remove TODOs * clean GLU * remove attn.group_norm to attn processor * revert back src/diffusers/schedulers/scheduling_dpmsolver_multistep.py * refactor GLU -> SwiGLU * remove redundant args * add channel multiples in autoencoder docstrings * changes in docstrings and copyright headers * clean pipeline * further cleaning * remove peft and lora and fromoriginalmodel * Delete src/diffusers/pipelines/stable_audio/diffusers.code-workspace * make style * dummy models * fix copied from * add fast oobleck tests * add brownian tree * oobleck autoencoder slow tests * remove TODO * fast stable audio pipeline tests * add slow tests * make style * add first version of docs * wrap is_torchsde_available to the scheduler * fix slow test * test with input waveform * add input waveform * remove some todos * create stableaudio gaussian projection + make style * add pipeline to toctree * fix copied from * make quality * refactor timestep_features->time_proj * refactor joint_attention_kwargs->cross_attention_kwargs * remove forward_chunk * move StableAudioDitModel to transformers folder * correct convert + remove partial rotary embed * apply suggestions from yiyixuxu -> removing attn.kv_heads * remove temb * remove cross_attention_kwargs * further removal of cross_attention_kwargs * remove text encoder autocast to fp16 * continue removing autocast * make style * refactor how text and audio are embedded * add paper * update example code * make style * unify projection model forward + fix device placement * make style * remove fuse qkv * apply suggestions from review * Update src/diffusers/pipelines/stable_audio/pipeline_stable_audio.py Co-authored-by: YiYi Xu <yixu310@gmail.com> * make style * smaller models in fast tests * pass sequential offloading fast tests * add docs for vae and autoencoder * make style and update example * remove useless import * add cosine scheduler * dummy classes * cosine scheduler docs * better description of scheduler
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
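For context, a minimal sketch of the Stable Audio pipeline introduced here. The model id, argument names, and output handling are assumptions based on the public Stable Audio Open release.

```python
import torch
import soundfile as sf
from diffusers import StableAudioPipeline

pipe = StableAudioPipeline.from_pretrained(
    "stabilityai/stable-audio-open-1.0",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

audio = pipe(
    "a hammer hitting a wooden surface",
    negative_prompt="low quality",
    num_inference_steps=200,
    audio_end_in_s=10.0,  # length of the generated clip in seconds (assumed parameter name)
).audios[0]

# Write the waveform to disk at the autoencoder's sampling rate.
sf.write("hammer.wav", audio.T.float().cpu().numpy(), pipe.vae.sampling_rate)
```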
Sayak Paul authored
* fix: animate diff lora stuff. * fix scaling function for UNetMotionModel * empty
- 26 Jul, 2024 5 commits
Álvaro Somoza authored
* initial draft * apply suggestions * fix failing test * added ipa to img2img * add docs * apply suggestions
Aryan authored
Aryan authored
* initial sparse control model draft * remove unnecessary implementation * copy animatediff pipeline * remove deprecated callbacks * update * update pipeline implementation progress * make style * make fix-copies * update progress * add partially working pipeline * remove debug prints * add model docs * dummy objects * improve motion lora conversion script * fix bugs * update docstrings * remove unnecessary model params; docs * address review comment * add copied from to zero_module * copy animatediff test * add fast tests * update docs * update * update pipeline docs * fix expected slice values * fix license * remove get_down_block usage * remove temporal_double_self_attention from get_down_block * update * update docs with org and documentation images * make from_unet work in sparsecontrolnetmodel * add latest freeinit test from #8969 * make fix-copies * LoraLoaderMixin -> StableDiffusionLoraLoaderMixin
Sayak Paul authored
* introduce to promote reusability. * up * add more tests * up * remove comments. * fix fuse_nan test * clarify the scope of fuse_lora and unfuse_lora * remove space * rewrite fuse_lora a bit. * feedback * copy over load_lora_into_text_encoder. * address dhruv's feedback. * fix-copies * fix issubclass. * num_fused_loras * fix * fix * remove mapping * up * fix * style * fix-copies * change to SD3TransformerLoRALoadersMixin * Apply suggestions from code review Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * up * handle wuerstchen * up * move lora to lora_pipeline.py * up * fix-copies * fix documentation. * comment set_adapters(). * fix-copies * fix set_adapters() at the model level. * fix? * fix * loraloadermixin.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
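For context, a minimal sketch of the user-facing LoRA helpers that this `LoraBaseMixin` refactor consolidates (`load_lora_weights`, `fuse_lora`, `unfuse_lora`, `set_adapters`). The base checkpoint and the LoRA repo id are placeholders, not taken from this log.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load a LoRA and give it an adapter name so it can be toggled later.
pipe.load_lora_weights("some-user/some-sdxl-lora", adapter_name="style")  # placeholder repo

# Fuse the LoRA weights into the base model for faster inference, then undo it.
pipe.fuse_lora(lora_scale=0.8)
image = pipe("a watercolor fox").images[0]
pipe.unfuse_lora()

# Adapters can also be (de)activated and re-weighted without fusing.
pipe.set_adapters(["style"], adapter_weights=[0.6])
```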
Sayak Paul authored
remove all is from auraflow.
- 25 Jul, 2024 3 commits
Sayak Paul authored
* introduce to promote reusability. * up * add more tests * up * remove comments. * fix fuse_nan test * clarify the scope of fuse_lora and unfuse_lora * remove space * rewrite fuse_lora a bit. * feedback * copy over load_lora_into_text_encoder. * address dhruv's feedback. * fix-copies * fix issubclass. * num_fused_loras * fix * fix * remove mapping * up * fix * style * fix-copies * change to SD3TransformerLoRALoadersMixin * Apply suggestions from code review Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * up * handle wuerstchen * up * move lora to lora_pipeline.py * up * fix-copies * fix documentation. * comment set_adapters(). * fix-copies * fix set_adapters() at the model level. * fix? * fix
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Aryan authored
* speed up animatediff tests * fix pia test_ip_adapter_single * fix tests/pipelines/pia/test_pia.py::PIAPipelineFastTests::test_dict_tuple_outputs_equivalent * update * fix ip adapter tests * skip test_from_pipe_consistent_config tests * fix prompt_embeds test * update test_from_pipe_consistent_config tests * fix expected_slice values * remove temporal_norm_num_groups from UpBlockMotion
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 24 Jul, 2024 2 commits
Sayak Paul authored
* remove residual i. * rename to aura_flow in pipeline test
Sayak Paul authored
* start debugging the problem, * start * fix * fix * fix imports. * handle hunyuan * remove residuals. * add a check for making sure there's appropriate procs. * add more rigor to the tests. * fix test * remove redundant check * fix-copies * move check_qkv_fusion_matches_attn_procs_length and check_qkv_fusion_processors_exist.
- 23 Jul, 2024 1 commit
Vishnu V Jaddipal authored
* Add attentionless VAE support * make style and quality, fix-copies
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 20 Jul, 2024 2 commits
王奇勋 authored
* 2d rotary pos emb dim * make style
---------
Co-authored-by: haofanwang <haofanwang.ai@gmail.com>
shinetzh authored
* fix loop bug in SlicedAttnProcessor
---------
Co-authored-by: neoshang <neoshang@tencent.com>
- 18 Jul, 2024 2 commits
Sayak Paul authored
* remove resume_download * fix: _fetch_index_file call. * remove resume_download from docs.
Sayak Paul authored
add disable forward chunking to SD3 transformer.
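For context, a hedged sketch of toggling chunked feed-forward on the SD3 transformer, which this commit makes reversible. It assumes the enable/disable pair mirrors the forward-chunking helpers on other diffusers models; the checkpoint id and chunking dimension are assumptions.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Run the feed-forward layers chunk-by-chunk to reduce peak memory.
pipe.transformer.enable_forward_chunking(chunk_size=1, dim=0)  # assumed default dim
low_mem_image = pipe("a photo of a corgi", num_inference_steps=28).images[0]

# The helper added here restores the default, unchunked feed-forward path.
pipe.transformer.disable_forward_chunking()
```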
- 12 Jul, 2024 2 commits
Sayak Paul authored
* add pipeline documentation. * add api spec for pipeline * model documentation * model spec
Dhruv Nair authored
* update * update * update * update
- 11 Jul, 2024 4 commits
Dhruv Nair authored
update
Sayak Paul authored
* add lavender flow transformer
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Xin Ma authored
* add Latte to diffusers * remove print * remove print * remove print * remove unused code * remove layer_norm_latte and add a flag * remove layer_norm_latte and add a flag * update latte_pipeline * update latte_pipeline * remove unused squeeze * add norm_hidden_states.ndim == 2: # for Latte * fixed test latte pipeline bugs * fixed test latte pipeline bugs * delete sh * add doc for latte * add licensing * Move Transformer3DModelOutput to modeling_outputs * give a default value to sample_size * remove the einops dependency * change norm2 for latte * modify pipeline of latte * update test for Latte * modify some code for latte * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * modify for Latte pipeline * video_length -> num_frames; update prepare_latents copied from * make fix-copies * make style * typo: videe -> video * update * modify for Latte pipeline * modify latte pipeline * modify latte pipeline * modify latte pipeline * modify latte pipeline * modify for Latte pipeline * Delete .vscode directory * make style * make fix-copies * add latte transformer 3d to docs _toctree.yml * update example * reduce frames for test * fixed bug in _text_preprocessing * set num frame to 1 for testing * remove unused print * add text = self._clean_caption(text) again
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
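For context, a minimal sketch of the new Latte text-to-video pipeline. The checkpoint id and the output handling are assumptions based on the public Latte release.

```python
import torch
from diffusers import LattePipeline
from diffusers.utils import export_to_gif

pipe = LattePipeline.from_pretrained(
    "maxin-cn/Latte-1",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Frames for the first (and only) prompt in the batch.
frames = pipe(
    "a dog wearing sunglasses on a beach",
    num_inference_steps=50,
).frames[0]

export_to_gif(frames, "latte_dog.gif")
```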
Alan Du authored
* Reformat docstring for `get_timestep_embedding`
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
- 08 Jul, 2024 2 commits
Tolga Cangöz authored
* Remove unused line
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
PommesPeter authored
---------
Co-authored-by: zhuole1025 <zhuole1025@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
- 06 Jul, 2024 1 commit
YiYi Xu authored
* fix load sharded checkpoints from subfolder * style * os.path.join * add a small test
---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
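For context, a hedged sketch of the code path this fix covers: loading a sharded model whose shards live under a subfolder of a larger repo. The repo id is a placeholder; only the `from_pretrained(..., subfolder=...)` call is the point.

```python
import torch
from diffusers import SD3Transformer2DModel

# With sharded safetensors (model-00001-of-0000N.safetensors plus an index file)
# stored under "transformer/", from_pretrained must join the subfolder and index
# paths correctly -- the os.path.join change mentioned in the commit.
transformer = SD3Transformer2DModel.from_pretrained(
    "some-org/some-sd3-finetune",  # placeholder repo
    subfolder="transformer",
    torch_dtype=torch.float16,
)
```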
- 04 Jul, 2024 1 commit
Sayak Paul authored
fix sharding tests
- 03 Jul, 2024 4 commits
XCL authored
* add conversion files; changed controlnet for hunyuandit * style
---------
Co-authored-by: xingchaoliu <xingchaoliu@tencent.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Sayak Paul authored
add dummy lora attention processors to prevent failures in other libs
Sayak Paul authored
Revert "[LoRA] introduce `LoraBaseMixin` to promote reusability. (#8670)" This reverts commit a2071a18.
-
Sayak Paul authored
* introduce to promote reusability. * up * add more tests * up * remove comments. * fix fuse_nan test * clarify the scope of fuse_lora and unfuse_lora * remove space
- 02 Jul, 2024 3 commits
YiYi Xu authored
* add * update sd3 controlnet * Update src/diffusers/models/controlnet_sd3.py
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Dhruv Nair authored
* update * Update src/diffusers/models/unets/unet_motion_model.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
YiYi Xu authored
up
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 01 Jul, 2024 2 commits
Haofan Wang authored
* Update controlnet_sd3.py
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
XCL authored
* add v1.2 support
---------
Co-authored-by: xingchaoliu <xingchaoliu@tencent.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>