- 17 Jul, 2025 1 commit
Aryan authored
* update (×2) * add coauthor Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com> * improve test * handle ip adapter params correctly * fix chroma qkv fusion test * fix fastercache implementation * fix more tests * fight more tests * add back set_attention_backend * update (×2) * make style * make fix-copies * make ip adapter processor compatible with attention dispatcher * refactor chroma as well * remove rmsnorm assert * minify and deprecate npu/xla processors --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
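The entry above routes attention processors through a central dispatcher and restores `set_attention_backend`, so a backend is selected with one call instead of assigning processor classes module by module. A minimal usage sketch, assuming a diffusers version that ships this helper on its models (the checkpoint id and backend names are illustrative):

```python
import torch
from diffusers import FluxTransformer2DModel

# Illustrative checkpoint; any model inheriting diffusers' ModelMixin works.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)

# One switch for the whole model instead of per-module processor assignment.
transformer.set_attention_backend("flash")  # e.g. "native", "flash", "sage"
```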
-
- 19 Jun, 2025 1 commit
Aryan authored
update
-
- 19 May, 2025 1 commit
Quentin Gallouédec authored
* Use HF Papers * Apply style fixes --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 11 Feb, 2025 1 commit
Le Zhuo authored
* Add support for Lumina2 --------- Co-authored-by: csuhan <hanjiaming@whu.edu.cn> Co-authored-by: YiYi Xu <yixu310@gmail.com> Co-authored-by: Aryan <aryan@huggingface.co> Co-authored-by: hlky <hlky@hlky.ac>
-
- 18 Dec, 2024 1 commit
Qin Zhou authored
* Support passing kwargs to the SD3 custom attention processor --------- Co-authored-by: hlky <hlky@hlky.ac> Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 16 Dec, 2024 1 commit
Aryan authored
* copy transformer * copy vae * copy pipeline * make fix-copies * refactor; make original code work with diffusers; test latents for comparison generated with this commit * move rope into pipeline; remove flash attention; refactor * begin conversion script * make style * refactor attention * refactor * refactor final layer * their mlp -> our feedforward * make style * add docs * refactor layer names * refactor modulation * cleanup * refactor norms * refactor activations * refactor single blocks attention * refactor attention processor * make style * cleanup a bit * refactor double transformer block attention * update mochi attn proc * use diffusers attention implementation in all modules; checkpoint for all values matching original * remove helper functions in vae * refactor upsample * refactor causal conv * refactor resnet * refactor (×3) * grad checkpointing * autoencoder test * fix scaling factor * refactor clip * refactor llama text encoding * add coauthor Co-Authored-By: "Gregory D. Hunkins" <greg@ollano.com> * refactor rope; diff: 0.14990234375; reason and fix: create rope grid on CPU and move to device. Note: the diverging code created the grid on the device, whereas the original implementation creates it on CPU and then moves it to device; this causes numerical differences in layerwise debugging outputs, but visually the results are the same. * use diffusers timesteps embedding; diff: 0.10205078125 * rename * convert * update * add tests for transformer * add pipeline tests; text encoder 2 is not optional * fix attention implementation for torch * add example * update docs (×2) * apply suggestions from review * refactor vae * update * Apply suggestions from code review Co-authored-by: hlky <hlky@hlky.ac> * Update src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py (×2) Co-authored-by: hlky <hlky@hlky.ac> * make fix-copies * update --------- Co-authored-by: "Gregory D. Hunkins" <greg@ollano.com> Co-authored-by: hlky <hlky@hlky.ac>
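The RoPE note above is easy to reproduce: building the frequency grid on the GPU versus building it on the CPU and moving it over can yield small numerical differences (here ~0.15 in layerwise debugging) because float ops are not bit-identical across devices. A minimal, self-contained sketch of the two paths (shapes and the frequency constant are illustrative, not HunyuanVideo's actual values):

```python
import torch

def rope_freqs(num_positions: int, dim: int, theta: float, device: torch.device) -> torch.Tensor:
    # Classic RoPE frequency grid: outer product of positions and inverse frequencies.
    inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2, device=device).float() / dim))
    pos = torch.arange(num_positions, device=device).float()
    return torch.outer(pos, inv_freq)  # (num_positions, dim // 2)

if torch.cuda.is_available():
    # Path A: build directly on the GPU.
    on_gpu = rope_freqs(256, 128, 10000.0, torch.device("cuda"))
    # Path B: build on CPU, then move over (what the original implementation does).
    on_cpu = rope_freqs(256, 128, 10000.0, torch.device("cpu")).to("cuda")
    # The grids can differ slightly; downstream layers amplify the gap.
    print((on_gpu - on_cpu).abs().max())
```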
-
- 21 Oct, 2024 1 commit
YiYi Xu authored
* update some docs and tests! --------- Co-authored-by: Aryan <contact.aryanvs@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Aryan <aryan@huggingface.co> Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
-
- 06 Sep, 2024 1 commit
Aryan authored
* update * implement prompt interpolation * make style * resnet memory optimizations * more memory optimizations; todo: refactor * update * update animatediff controlnet with latest changes * refactor chunked inference changes * remove print statements * update * chunk -> split * remove changes from incorrect conflict resolution (×2) * add explanation of SplitInferenceModule * update docs * Revert "update docs" (reverts commit c55a50a271b2cefa8fe340a4f2a3ab9b9d374ec0) * update docstring for freenoise split inference * apply suggestions from review * add tests * apply suggestions from review
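The `SplitInferenceModule` mentioned above wraps a submodule and runs it over slices of the input to cap peak memory during FreeNoise inference. A minimal sketch of the idea, assuming the wrapped module is stateless across the split dimension (names and arguments are illustrative, not the exact diffusers API):

```python
import torch
import torch.nn as nn

class SplitInferenceModule(nn.Module):
    """Runs `module` over chunks of the input along `split_dim` and concatenates results."""

    def __init__(self, module: nn.Module, split_size: int, split_dim: int = 0):
        super().__init__()
        self.module = module
        self.split_size = split_size
        self.split_dim = split_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = [
            self.module(chunk) for chunk in torch.split(x, self.split_size, dim=self.split_dim)
        ]
        return torch.cat(outputs, dim=self.split_dim)

# Example: upsample 64 frames 8 at a time instead of all at once.
upsampler = SplitInferenceModule(nn.Upsample(scale_factor=2), split_size=8, split_dim=0)
frames = torch.randn(64, 3, 32, 32)
out = upsampler(frames)  # (64, 3, 64, 64), with lower peak memory
```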
-
- 28 Aug, 2024 1 commit
Aryan authored
* update * implement prompt interpolation * make style * resnet memory optimizations * more memory optimizations; todo: refactor * update * update animatediff controlnet with latest changes * refactor chunked inference changes * remove print statements * undo memory optimization changes * update docstrings * fix tests * fix pia tests * apply suggestions from review * add tests * update comment
-
- 07 Aug, 2024 1 commit
Aryan authored
* initial work draft for freenoise; needs massive cleanup * fix freeinit bug * add animatediff controlnet implementation * revert attention changes * add freenoise * remove old helper functions * add decode batch size param to all pipelines * make style * fix copied-from comments * make fix-copies * make style * copy animatediff controlnet implementation from #8972 * add experimental support for num_frames not perfectly fitting context length, context stride * make unet motion model lora work again based on #8995 * copy load video utils from #8972 * copied from AnimateDiff::prepare_latents * address the case where the last batch of frames does not match the length of indices in prepare_latents * decode_batch_size -> vae_batch_size; batched vae encode support in animatediff vid2vid * revert sparsectrl and sdxl freenoise changes * revert pia * add freenoise tests * make fix-copies * improve docstrings * add freenoise tests to animatediff controlnet * update tests * Update src/diffusers/models/unets/unet_motion_model.py * add freenoise to animatediff pag * address review comments * make style * update tests * make fix-copies * fix error message * remove copied-from comment * fix imports in tests * update --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
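The `vae_batch_size` change above decodes (and encodes) video latents in slices rather than all frames at once. A minimal sketch of the pattern, assuming a standard diffusers `AutoencoderKL`-style VAE (the helper itself is illustrative):

```python
import torch

@torch.no_grad()
def decode_latents_batched(vae, latents: torch.Tensor, vae_batch_size: int = 16) -> torch.Tensor:
    """Decode (num_frames, C, H, W) latents `vae_batch_size` frames at a time."""
    latents = latents / vae.config.scaling_factor
    frames = []
    for i in range(0, latents.shape[0], vae_batch_size):
        frames.append(vae.decode(latents[i : i + vae_batch_size]).sample)
    return torch.cat(frames, dim=0)  # same output, much lower peak memory
```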
-
- 06 Aug, 2024 1 commit
Aryan authored
remove ada_norm_continuous from norm3 list Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 30 Jul, 2024 1 commit
Yoach Lacombe authored
* WIP modeling code and pipeline * add custom attention processor + custom activation + add to init * correct ProjectionModel forward * add stable audio to __init__ * add autoencoder and update pipeline and modeling code * add half RoPE * add partial rotary v2 * add temporary modifications to scheduler * add EDM DPM Solver * remove TODOs * clean GLU * remove attn.group_norm in attn processor * revert src/diffusers/schedulers/scheduling_dpmsolver_multistep.py * refactor GLU -> SwiGLU * remove redundant args * add channel multiples in autoencoder docstrings * changes in docstrings and copyright headers * clean pipeline * further cleaning * remove peft/lora and from-original-model support * Delete src/diffusers/pipelines/stable_audio/diffusers.code-workspace * make style * dummy models * fix copied from * add fast oobleck tests * add brownian tree * oobleck autoencoder slow tests * remove TODO * fast stable audio pipeline tests * add slow tests * make style * add first version of docs * wrap is_torchsde_available for the scheduler * fix slow test * test with input waveform * add input waveform * remove some todos * create stableaudio gaussian projection + make style * add pipeline to toctree * fix copied from * make quality * refactor timestep_features -> time_proj * refactor joint_attention_kwargs -> cross_attention_kwargs * remove forward_chunk * move StableAudioDitModel to transformers folder * correct convert + remove partial rotary embed * apply suggestions from yiyixuxu -> removing attn.kv_heads * remove temb * remove cross_attention_kwargs * further removal of cross_attention_kwargs * remove text encoder autocast to fp16 * continue removing autocast * make style * refactor how text and audio are embedded * add paper * update example code * make style * unify projection model forward + fix device placement * make style * remove fuse qkv * apply suggestions from review * Update src/diffusers/pipelines/stable_audio/pipeline_stable_audio.py Co-authored-by: YiYi Xu <yixu310@gmail.com> * make style * smaller models in fast tests * pass sequential offloading fast tests * add docs for vae and autoencoder * make style and update example * remove useless import * add cosine scheduler * dummy classes * cosine scheduler docs * better description of scheduler --------- Co-authored-by: YiYi Xu <yixu310@gmail.com>
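The `GLU -> SwiGLU` refactor above swaps the gated activation used in Stable Audio's feedforward blocks. A minimal sketch of a SwiGLU layer (dimensions are illustrative; diffusers' actual class lives in its activations module):

```python
import torch
import torch.nn as nn

class SwiGLU(nn.Module):
    """Gated linear unit with a SiLU (swish) gate: out = x_half * silu(gate_half)."""

    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out * 2)  # one projection, split into value and gate
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x, gate = self.proj(x).chunk(2, dim=-1)
        return x * self.act(gate)

layer = SwiGLU(256, 1024)
print(layer(torch.randn(2, 16, 256)).shape)  # torch.Size([2, 16, 1024])
```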
-
- 11 Jul, 2024 1 commit
Xin Ma authored
* add Latte to diffusers * remove print (×3) * remove unused code * remove layer_norm_latte and add a flag (×2) * update latte_pipeline (×2) * remove unused squeeze * add `norm_hidden_states.ndim == 2` check for Latte * fix test latte pipeline bugs (×2) * delete sh * add doc for latte * add licensing * Move Transformer3DModelOutput to modeling_outputs * give a default value to sample_size * remove the einops dependency * change norm2 for latte * modify pipeline of latte * update test for Latte * modify some code for latte * modify for Latte pipeline (×28) * video_length -> num_frames; update prepare_latents copied from * make fix-copies * make style * typo: videe -> video * update * modify for Latte pipeline * modify latte pipeline (×4) * modify for Latte pipeline * Delete .vscode directory * make style * make fix-copies * add latte transformer 3d to docs _toctree.yml * update example * reduce frames for test * fix bug of _text_preprocessing * set num frames to 1 for testing * remove unused print * add text = self._clean_caption(text) again --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com> Co-authored-by: Aryan <contact.aryanvs@gmail.com> Co-authored-by: Aryan <aryan@huggingface.co>
-
- 08 Jul, 2024 1 commit
PommesPeter authored
--------- Co-authored-by: zhuole1025 <zhuole1025@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 02 Jul, 2024 1 commit
YiYi Xu authored
* add * update sd3 controlnet * Update src/diffusers/models/controlnet_sd3.py --------- Co-authored-by: yiyixuxu <yixu310@gmail.com> Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 12 Jun, 2024 1 commit
Dhruv Nair authored
* up * add sd3 * update (×2) * add tests * fix copies * fix docs * update * add dreambooth lora * add LoRA * update (×4) * import fix * update * Update src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py Co-authored-by: YiYi Xu <yixu310@gmail.com> * import fix 2 * update * Update src/diffusers/models/autoencoders/autoencoder_kl.py (×11, applying review suggestions) Co-authored-by: YiYi Xu <yixu310@gmail.com> * update (×3) * fix ckpt id * fix more ids * update * missing doc * Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py (×2) Co-authored-by: YiYi Xu <yixu310@gmail.com> * Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md (×2) Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * update * fix * update * Update src/diffusers/models/autoencoders/autoencoder_kl.py (×2) * note on gated access * requirements * licensing --------- Co-authored-by: sayakpaul <spsayakpaul@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
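The entry above introduces SD3 together with a flow-matching Euler scheduler (`scheduling_flow_match_euler_discrete.py`). The core update is a single Euler step along the probability flow; a minimal sketch of that step as standalone math, not the scheduler's full class (the dummy velocity stands in for the real transformer call):

```python
import torch

def flow_match_euler_step(sample: torch.Tensor, model_output: torch.Tensor,
                          sigma: float, sigma_next: float) -> torch.Tensor:
    """One Euler step: x <- x + (sigma_next - sigma) * v, where the model predicts velocity."""
    return sample + (sigma_next - sigma) * model_output

# Sigmas run from 1.0 (pure noise) down to 0.0 (data) over the sampling schedule.
sigmas = torch.linspace(1.0, 0.0, 29)
x = torch.randn(1, 16, 64, 64)  # noisy latents
for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
    v = -x  # stand-in for transformer(x, sigma, prompt_embeds)
    x = flow_match_euler_step(x, v, float(sigma), float(sigma_next))
```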
-
- 10 May, 2024 1 commit
Mark Van Aken authored
* find & replace all FloatTensors to Tensor * apply formatting * Update torch.FloatTensor to torch.Tensor in the remaining files * formatting * Fix the rest of the places where FloatTensor is used as well as in documentation * formatting * Update new file from FloatTensor to Tensor
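The change above is purely a type-hint rename: `torch.FloatTensor` is a legacy alias that pins the dtype to float32, while `torch.Tensor` is the general type and is correct for fp16/bf16 pipelines too. An illustrative before/after (the signature below is a simplified stand-in, not a specific diffusers method):

```python
import torch

# Before: over-constrained annotation tied to float32
def forward_old(sample: torch.FloatTensor, timestep: int) -> torch.FloatTensor: ...

# After: dtype-agnostic annotation
def forward_new(sample: torch.Tensor, timestep: int) -> torch.Tensor: ...
```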
-
- 02 Apr, 2024 1 commit
Sayak Paul authored
* remove class assignments for linear and conv. * fix: self.nn
-
- 14 Mar, 2024 1 commit
M. Tolga Cangöz authored
* Add properties and `IPAdapterTesterMixin` tests for `StableDiffusionPanoramaPipeline` * Fix variable name typo and update comments * Update deprecated `output_type="numpy"` to "np" in test files * Discard changes to src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py * Update test_stable_diffusion_panorama.py * Update numbers in README.md * Update get_guidance_scale_embedding method to use timesteps instead of w * Update number of checkpoints in README.md * Add type hints and fix var name * Fix PyTorch's convention for inplace functions * Fix a typo * Revert "Fix PyTorch's convention for inplace functions" This reverts commit 74350cf65b2c9aa77f08bec7937d7a8b13edb509. * Fix typos * Indent * Refactor get_guidance_scale_embedding method in LEditsPPPipelineStableDiffusionXL class
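The `get_guidance_scale_embedding` refactor above turns the guidance weight `w` into a sinusoidal embedding (used by guidance-distilled UNets that take the scale as a conditioning input). A minimal sketch of the standard formulation, assuming the common diffusers layout of scaling `w` by 1000 and concatenating sin/cos halves:

```python
import math
import torch

def get_guidance_scale_embedding(w: torch.Tensor, embedding_dim: int = 256) -> torch.Tensor:
    """Sinusoidal embedding of the guidance scale, shape (batch, embedding_dim)."""
    w = w * 1000.0
    half_dim = embedding_dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half_dim) / (half_dim - 1))
    emb = w[:, None].float() * freqs[None, :]
    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=-1)
    if embedding_dim % 2 == 1:  # zero-pad odd dims
        emb = torch.nn.functional.pad(emb, (0, 1))
    return emb

print(get_guidance_scale_embedding(torch.tensor([7.5]), 256).shape)  # torch.Size([1, 256])
```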
-
- 13 Mar, 2024 1 commit
Sayak Paul authored
* fix PyTorch classes and start deprecation cycles * remove args crafting for accommodating scale * remove scale check in feedforward * assert against nn.Linear and not CompatibleLinear * remove conv_cls and linear_cls * remove scale * 👋 scale * fix: unet2dcondition * fix attention.py * fix: attention.py again * fix: unet_2d_blocks * fix-copies * more fixes * fix: resnet.py * more fixes * fix i2vgenxl unet * deprecate scale gently * fix-copies * Apply suggestions from code review Co-authored-by: YiYi Xu <yixu310@gmail.com> * quality * throw warning when scale is passed to the BasicTransformerBlock class * remove scale from signature * cross_attention_kwargs, very nice catch by Yiyi * fix: logger.warn * make deprecation message clearer * address final comments * maintain same deprecation message and also add it to activations * address yiyi * fix copies * Apply suggestions from code review Co-authored-by: YiYi Xu <yixu310@gmail.com> * more deprecation * fix-copies --------- Co-authored-by: YiYi Xu <yixu310@gmail.com>
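The gentle deprecation above keeps `scale` working while warning callers to route it through `cross_attention_kwargs` instead. A sketch of the pattern using diffusers' `deprecate` utility (the surrounding function is illustrative, not the actual block code):

```python
from diffusers.utils import deprecate

def feed_forward(hidden_states, scale: float = None):
    if scale is not None:
        # Keep accepting `scale` for now, but tell callers where it moved.
        deprecate(
            "scale",
            "1.0.0",
            "Passing `scale` directly is deprecated; pass it via `cross_attention_kwargs` "
            "when calling the pipeline instead.",
        )
    # ... module body proceeds without `scale`; LoRA layers consume it internally.
    return hidden_states
```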
-
- 08 Mar, 2024 1 commit
Sayak Paul authored
* throw error when patch inputs and layernorm are provided for transformers2d * add comment on supported norm_types in transformers2d * more checks * fix: norm_type handling
-
- 07 Mar, 2024 1 commit
Rinne authored
fix: remove duplicate code in TemporalBasicTransformerBlock. Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 08 Feb, 2024 2 commits
Sayak Paul authored
change to 2024
-
Sayak Paul authored
* attention_head_dim * debug * print more info * correct num_attention_heads behaviour * down_block_num_attention_heads -> num_attention_heads * correct the image link in doc * add: deprecation for num_attention_heads * fix: test argument to use attention_head_dim * more fixes * quality * address comments * remove deprecation
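The fix above untangles `num_attention_heads` from `attention_head_dim` in UNet configs, where the two are related through the block's channel width and `attention_head_dim` historically stored head counts in some checkpoints. A small illustrative check (names mirror the config keys; the arithmetic is the point, the values are examples):

```python
# Each block's channels split across heads: channels = num_attention_heads * attention_head_dim.
block_out_channels = (320, 640, 1280)
num_attention_heads = (5, 10, 20)  # values like these were sometimes stored under `attention_head_dim`

for channels, heads in zip(block_out_channels, num_attention_heads):
    head_dim = channels // heads
    print(f"{channels} channels -> {heads} heads x {head_dim} dims per head")
```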
-
- 06 Feb, 2024 2 commits
sayakpaul authored
-
- 04 Feb, 2024 1 commit
YiYi Xu authored
fix SD reference community pipeline Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
- 31 Jan, 2024 1 commit
Sayak Paul authored
--------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 27 Dec, 2023 1 commit
Dhruv Nair authored
fix
-
- 21 Dec, 2023 1 commit
Will Berman authored
* amused rename * Update docs/source/en/api/pipelines/amused.md Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * AdaLayerNormContinuous default values * custom micro conditioning * micro conditioning docs * put lookup from codebook in constructor * fix conversion script * remove manual fused flash attn kernel * add training script * temp remove training script * add dummy gradient checkpointing func * clarify temperatures is an instance variable by setting it * remove additional SkipFF block args * hardcode norm args * rename tests folder * fix paths and samples * fix tests * add training script * training readme * lora saving and loading * non-lora saving/loading * some readme fixes * guards * Update docs/source/en/api/pipelines/amused.md Co-authored-by: Suraj Patil <surajp815@gmail.com> * Update examples/amused/README.md Co-authored-by: Suraj Patil <surajp815@gmail.com> * Update examples/amused/train_amused.py Co-authored-by: Suraj Patil <surajp815@gmail.com> * vae upcasting * add fp16 integration tests * use tuple for micro cond * copyrights * remove casts * delegate to torch.nn.LayerNorm * move temperature to pipeline call * upsampling/downsampling changes
-
- 04 Dec, 2023 1 commit
takuoko authored
* Support IP-Adapter Plus * fix format * restore before black format (×2) * generic * Refactor PerceiverAttention * format * fix test and refactor PerceiverAttention * generic encode_image * keep attention implementation * merge tests * encode_image backward compatible * code quality * fix controlnet inpaint pipeline * refactor FFN (×2) --------- Co-authored-by: YiYi Xu <yixu310@gmail.com>
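IP-Adapter Plus replaces the simple linear image projection with a perceiver-style resampler: a small set of learned latents cross-attends to the CLIP image tokens, which is what the `PerceiverAttention` refactor above touches. A minimal sketch of that attention pattern (dimensions and head counts are illustrative):

```python
import torch
import torch.nn as nn

class PerceiverAttention(nn.Module):
    """Learned latents attend to image features; queries come from the latents,
    keys/values from the concatenation of image features and latents."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(dim, dim * 2, bias=False)
        self.to_out = nn.Linear(dim, dim, bias=False)

    def forward(self, image_feats: torch.Tensor, latents: torch.Tensor) -> torch.Tensor:
        b, n, d = latents.shape
        q = self.to_q(latents)
        k, v = self.to_kv(torch.cat([image_feats, latents], dim=1)).chunk(2, dim=-1)

        def split(t):  # (b, seq, d) -> (b, heads, seq, d_head)
            return t.reshape(b, -1, self.heads, d // self.heads).transpose(1, 2)

        out = torch.nn.functional.scaled_dot_product_attention(split(q), split(k), split(v))
        return self.to_out(out.transpose(1, 2).reshape(b, n, d))

attn = PerceiverAttention(dim=768)
latents = torch.randn(2, 16, 768)        # 16 learned query tokens
image_feats = torch.randn(2, 257, 768)   # CLIP patch embeddings
print(attn(image_feats, latents).shape)  # torch.Size([2, 16, 768])
```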
-
- 29 Nov, 2023 1 commit
Suraj Patil authored
* begin model * finish blocks * add_embedding * addition_time_embed_dim * use TimestepEmbedding * fix temporal res block * fix time_pos_embed * fix add_embedding * add conversion script * fix model * up * add new resnet blocks * make forward work * return sample in original shape * fix temb shape in TemporalResnetBlock * add spatio-temporal transformers * add vae blocks * fix blocks * update (×2) * fix shapes in AlphaBlender and add time activation in res block * use new blocks * style * fix temb shape * fix SpatioTemporalResBlock * reuse TemporalBasicTransformerBlock * fix TemporalBasicTransformerBlock * use TransformerSpatioTemporalModel * fix TransformerSpatioTemporalModel * fix time_context dim * clean up * make temb optional * add blocks * rename model * update conversion script * remove UNetMidBlockSpatioTemporal * add in init * remove unused arg (×2) * remove more unused args * up (×2) * check for None * update vae * update up/mid blocks for decoder * begin pipeline * adapt scheduler * add guidance scalings * fix norm eps in temporal transformers * add temporal autoencoder * make pipeline run * fix frame decoding * decode in float32 * decode n frames at a time * pass decoding_t to decode_latents * fix decode_latents * vae encode/decode in fp32 * fix dtype in TransformerSpatioTemporalModel * type image_latents same as image_embeddings * allow using different eps in temporal block for video decoder * fix default values in vae * pass num frames in decode * switch spatial to temporal for mixing in VAE * fix num frames during split decoding * cast alpha to sample dtype * fix attention in MidBlockTemporalDecoder * fix typo * fix guidance_scales dtype * fix missing activation in TemporalDecoder * skip_post_quant_conv * add vae conversion * style * take guidance scale as input * up * allow passing PIL to export_video * accept fps as arg * add pipeline and vae in init * remove hack * use AutoencoderKLTemporalDecoder * don't scale image latents * add unet tests * clean up unet * clean TransformerSpatioTemporalModel * add slow svd test * clean up * make temb optional in Decoder mid block * fix norm eps in TransformerSpatioTemporalModel * clean up temp decoder * clean up (×2) * use c_noise values for timesteps * use math for log * update * fix copies * doc * upcast vae * update forward pass for gradient checkpointing * make added_time_ids a tensor * up * fix upcasting * remove post quant conv * add _resize_with_antialiasing * fix _compute_padding * cleanup model * more cleanup (×3) * remove freeu * remove attn slice * small clean * up (×2) * remove extra step kwargs * remove eta * remove dropout * remove callback * remove merge factor args * clean * clean up * move to dedicated folder * remove attention_head_dim * docstr and small fix * update unet docstrings * rename decoding_t * correct linting * store c_skip and c_out * cleanup * clean TemporalResnetBlock * more cleanup * clean up vae * clean up * begin doc * more cleanup * up (×2) * doc * Improve * better naming (×8) * Apply suggestions from code review * Default chunk size to None * add example * Better * Apply suggestions from code review * update doc * Update src/diffusers/pipelines/stable_diffusion_video/pipeline_stable_diffusion_video.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * style * Get torch compile working * up * rename * fix doc * add chunking * torch compile (×2) * add modelling outputs * torch compile * Improve chunking * Apply suggestions from code review * Update docs/source/en/using-diffusers/svd.md * Close diff tag * remove slicing * resnet docstr * add docstr in resnet * rename * Apply suggestions from code review * update tests * Fix output type latents * fix more (×2) * Update docs/source/en/using-diffusers/svd.md * fix more * add pipeline tests * remove unused arg * clean up * make sure get_scaling receives tensors * fix euler scheduler * fix get_scalings * simplify euler for now * remove old test file * use randn_tensor to create noise * fix device for rand tensor * increase expected_max_difference * fix test_inference_batch_single_identical * actually fix test_inference_batch_single_identical * disable test_save_load_float16 * skip test_float16_inference * skip test_inference_batch_single_identical * fix test_xformers_attention_forwardGenerator_pass * Apply suggestions from code review * update StableVideoDiffusionPipelineSlowTests * update image * add diffusers example * fix more --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
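The `c_skip`/`c_out`/`c_noise` values stored above come from EDM preconditioning (Karras et al., 2022), which SVD's Euler scheduler uses. A minimal sketch of those scalings, with `sigma_data = 0.5` as a commonly used value (treat the specifics as illustrative of the formulation, not SVD's exact code):

```python
import math

def edm_scalings(sigma: float, sigma_data: float = 0.5):
    """EDM preconditioning: D(x) = c_skip * x + c_out * F(c_in * x, c_noise)."""
    c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
    c_out = sigma * sigma_data / math.sqrt(sigma**2 + sigma_data**2)
    c_in = 1.0 / math.sqrt(sigma**2 + sigma_data**2)
    c_noise = 0.25 * math.log(sigma)  # timestep conditioning fed to the network
    return c_skip, c_out, c_in, c_noise

print(edm_scalings(1.0))
```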
-
- 07 Nov, 2023 1 commit
Dhruv Nair authored
* fix * Update src/diffusers/models/attention.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 06 Nov, 2023 1 commit
Sayak Paul authored
* init pixart alpha pipeline * fix: import * script (×3) * add: vae to the pipeline * add: vae_scale_factor * add: checkpoint_path * clean conversion script a bit * size embeddings * fix: size embedding * update script * support for interpolation of position embedding * support for conditioning * .. (×3) * final layer (×2) * align if encode_prompt * support for caption embedding * refactor (×3) * start cross attention (×2) * cross_attention_dim * cross (×2) * support for resolution and aspect_ratio * support for caption projection * refactor patch embeddings * batch_size * up * commit (×3) * squeeze (×13) * fix final block (×3) * clean * fix: interpolation scale * debugging (long repeated run) * make --checkpoint_path non-required * debugging (long repeated run) * remove num_tokens * timesteps -> timestep (×6) * debug (×2) * update conversion script (×3) * debug (×3) * clean * debug (long repeated run) * fix (×13) * clean * fix (×2) * boom (×2) * some changes * boom * save * up * remove i * fix more tests * DPMSolverMultistepScheduler * fix * offloading * fix conversion script (×2) * remove print * remove support for negative prompt embeds * typo * remove extra kwargs * bring conversion script to where it was * fix * trying my luck * trying my luck again * again (×3) * clean up * up (×2) * update example * support for 512 * remove spacing * finalize docs * test debug * fix: assertion values * debug (×3) * fix: repeat * remove prints * Apply suggestions from code review (×2) * Correct more * Apply suggestions from code review * Change all * Clean more * fix more * Fix more (×2) * Correct more * address patrick's comments * remove unneeded args * clean up pipeline * style * make the use of additional conditions better conditioned * None better * dtype * height and width validation * add a note about size brackets * fix * spit out slow test outputs * fix? * fix optional test * fix more * remove unneeded comment * debug --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
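PixArt-Alpha's transformer conditions on resolution and aspect ratio in addition to timestep and caption embeddings (the "additional conditions" and "size embeddings" above). A hedged sketch of how such micro-conditioning is typically embedded and added alongside the timestep embedding (class name, dimensions, and wiring are illustrative assumptions, not the exact diffusers signature):

```python
import torch
import torch.nn as nn

class SizeEmbedder(nn.Module):
    """Embeds a size scalar (e.g. height, width, or aspect ratio) like a timestep."""

    def __init__(self, hidden_dim: int = 1152):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(256, hidden_dim), nn.SiLU(), nn.Linear(hidden_dim, hidden_dim)
        )

    @staticmethod
    def sinusoidal(x: torch.Tensor, dim: int = 256) -> torch.Tensor:
        half = dim // 2
        freqs = torch.exp(-torch.log(torch.tensor(10000.0)) * torch.arange(half) / half)
        ang = x.float()[:, None] * freqs[None, :]
        return torch.cat([torch.cos(ang), torch.sin(ang)], dim=-1)

    def forward(self, size: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.sinusoidal(size))

emb = SizeEmbedder()
resolution = torch.tensor([1024.0])
print(emb(resolution).shape)  # torch.Size([1, 1152]); added to the timestep embedding
```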
-
- 02 Nov, 2023 1 commit
Dhruv Nair authored
* draft design * clean up (×9) * update pipeline * clean up (×3) * add tests * change motion block * clean up (×3) * update (×8) * clean up * update (×3) * update model test * update (×4) * make style * update * fix embeddings * update * merge upstream * make fix-copies * fix bug * fix mistake * add docs * update * clean up * update * clean up (×2) * fix docstrings (×2) * update (×2) * clean up * update
-
- 24 Oct, 2023 1 commit
Sayak Paul authored
* move out the activations * move normalization layers * add doc (×2) * fix: paths * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * style --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
- 13 Oct, 2023 1 commit
Younes Belkada authored
* v1 * add tests and fix previous failing tests * fix CI * add tests + v1 `PeftLayerScaler` * style * add scale retrieving mechanism system * fix CI * up (×2) * simple approach --> not same results for some reason * fix issues * fix copies * remove unneeded method * active adapters! * fix merge conflicts * up (×2) * kohya - test-1 * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * fix scale * fix copies * add comment * multi adapters * fix tests * oops * v1 faster loading - in progress * Revert "v1 faster loading - in progress" (reverts commit ac925f81321e95fc8168184c3346bf3d75404d5a) * kohya same generation * fix some slow tests * peft integration features for unet lora: 1. support for multiple ranks/alphas; 2. support for multiple active adapters; 3. support for enabling/disabling LoRAs * fix `get_peft_kwargs` * Update loaders.py * add some tests * add unfuse tests * fix tests * up * add set adapter from sourab and tests * fix multi adapter tests * style & quality * style * remove comment * fix `adapter_name` issues * fix unet adapter name for sdxl * fix enabling/disabling adapters * fix fuse / unfuse unet * nit * fix * up * fix cpu offloading * fix another slow test * fix another offload test * add more tests * all slow tests pass * style * fix alpha pattern for unet and text encoder * Update src/diffusers/loaders.py Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * Update src/diffusers/models/attention.py Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * up (×2) * clarify comment * comments * change comment order (×2) * style & quality * Update tests/lora/test_lora_layers_peft.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * fix bugs and add tests * Update src/diffusers/models/modeling_utils.py (×2) Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * refactor * suggestion * add break statement * add compile tests * move slow tests to peft tests as I modified them * quality * refactor a bit * style * change import * style * fix CI * refactor slow tests one last time * style * oops (×3) * final tweak tests * Apply suggestions from code review Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/loaders.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * comments * Apply suggestions from code review Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * remove comments * more comments * try * revert * add `safe_merge` tests * add comment * style, comments and run tests in fp16 * add warnings * fix doc test * replace with `adapter_weights` * add `get_active_adapters()` * expose `get_list_adapters` method * better error message * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * style * trigger slow lora tests * fix tests * maybe fix last test * revert * Update src/diffusers/loaders.py (×4) Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * move `MIN_PEFT_VERSION` * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * let's not use class variable * fix few nits * change offloading logic a bit * check earlier * rm unneeded block * break long line * return empty list * change logic a bit and address comments * add typehint * remove parenthesis * fix * revert to fp16 in tests * add to gpu * revert to old test * style * Update src/diffusers/loaders.py Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * change indent * Apply suggestions from code review (×2) --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com> Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
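The PEFT integration above enables multiple named LoRA adapters with per-adapter weights on one pipeline, plus the `get_active_adapters()` helper it mentions. A usage sketch (the LoRA repo ids are placeholders; the method names follow the diffusers LoRA API this line of work introduced):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load two LoRAs under distinct adapter names (repo ids are placeholders).
pipe.load_lora_weights("user/toy-lora", adapter_name="toy")
pipe.load_lora_weights("user/pixel-lora", adapter_name="pixel")

# Activate both with per-adapter weights; switch or disable at will.
pipe.set_adapters(["toy", "pixel"], adapter_weights=[0.8, 0.5])
print(pipe.get_active_adapters())  # ["toy", "pixel"]

image = pipe("a toy robot, pixel art", num_inference_steps=30).images[0]
```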
-
- 11 Oct, 2023 1 commit
Aryan V S authored
* improvement: add missing typehints and docs to diffusers/models/attention.py * chore: convert docstrings to raw python strings; add missing typehints * improvement: add missing typehints and docs to diffusers/models/adapter.py * improvement: add missing typehints and docs to diffusers/models/lora.py * docs: include suggestion by @sayakpaul in src/diffusers/models/adapter.py (×4) Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/models/lora.py --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 11 Sep, 2023 1 commit
Dhruv Nair authored
* initial commit * move modules to import struct * add dummy objects and _LazyModule * add lazy import to schedulers * clean up unused imports * lazy import on models module * lazy import for schedulers module * add lazy import to pipelines module * lazy import altdiffusion * lazy import audio diffusion * lazy import audioldm * lazy import consistency model * lazy import controlnet * lazy import dance diffusion, ddim, ddpm * lazy import deepfloyd * lazy import kandinsky * lazy imports * lazy import semantic diffusion * lazy imports * lazy import stable diffusion * move sd output to its own module * clean up * lazy import t2iadapter * lazy import unclip * lazy import versatile and vq diffusion * lazy import vq diffusion * helper to fetch objects from modules * lazy import sdxl * lazy import txt2vid * lazy import stochastic karras * fix model imports * fix bug * lazy import * clean up (×2) * fixes for tests (×2) * clean up * remove import of torch_utils from utils module * clean up (×2) * fix mistaken import statement * dedicated modules for exporting and loading * remove testing utils from utils module * fixes from merge conflicts * Update src/diffusers/pipelines/kandinsky2_2/__init__.py * fix docs * fix alt diffusion copied-from * fix check dummies * fix more docs * remove accelerate import from utils module * add type checking * make style * fix check dummies * remove torch import from xformers check * clean up error message * fixes after upstream merges * dummy objects fix * fix tests * remove unused module import --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
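The refactor above defers heavy imports until first attribute access via `_LazyModule`. A sketch of the `__init__.py` pattern it introduces across the library (the module and class names in `_import_structure` below are illustrative):

```python
# __init__.py — names resolve lazily on first access instead of at import time.
import sys
from typing import TYPE_CHECKING

from .utils import _LazyModule  # diffusers' lazy-module helper

_import_structure = {
    "models": ["UNet2DConditionModel"],
    "schedulers": ["DDIMScheduler"],
}

if TYPE_CHECKING:
    # Static type checkers still see the real imports.
    from .models import UNet2DConditionModel
    from .schedulers import DDIMScheduler
else:
    sys.modules[__name__] = _LazyModule(
        __name__, globals()["__file__"], _import_structure, module_spec=__spec__
    )
```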
-
- 04 Sep, 2023 1 commit
Sayak Paul authored
* throw warning when more than one lora is attempted to be fused * introduce support for lora scale during fusion * change test name * changes * change to _lora_scale * lora_scale to call whenever applicable * debugging * lora_scale additional * cross_attention_kwargs * lora_scale -> scale * lora_scale fix * lora_scale in patched projection * debugging (long repeated run) * styling * debugging (long repeated run) * clean up * refactor scale retrieval logic a bit * fix NoneType * fix: tests * add more tests * more fixes * figure out a way to pass lora_scale * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * unify the retrieval logic of lora_scale * move adjust_lora_scale_text_encoder to lora.py * introduce dynamic lora scale adjustment support to sd * fix up copies * Empty-Commit * add: test to check fusion equivalence on different scales * handle lora fusion warning * make lora smaller (×3) --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
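The entry above adds a scale to LoRA fusion, so the fused weights bake in W + scale * (B @ A) rather than the full-strength update. A usage sketch (the LoRA repo id is a placeholder; `fuse_lora(lora_scale=...)` is the API this work introduced):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("user/some-lora")  # placeholder repo id

# Bake the LoRA into the base weights at 70% strength: W <- W + 0.7 * (B @ A)
pipe.fuse_lora(lora_scale=0.7)
image = pipe("a watercolor fox").images[0]

# Restore the original weights if needed.
pipe.unfuse_lora()
```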
-