- 14 Apr, 2025 1 commit
Álvaro Somoza authored
* add * fix-copies
-
- 12 Apr, 2025 1 commit
Nikita Starodubcev authored
* add flow matching lcm scheduler
* stochastic sampling
* upscaling for scale-wise generation
* Apply style fixes
* Apply suggestions from code review
  Co-authored-by: hlky <hlky@hlky.ac>
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
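A minimal sketch of how the new scheduler could be swapped into an existing flow-matching pipeline via the usual `from_config` pattern; the class name `FlowMatchLCMScheduler` is inferred from the bullet above and the checkpoint id is a placeholder, so treat both as assumptions.

```python
import torch
from diffusers import DiffusionPipeline

# Placeholder checkpoint id; any flow-matching pipeline would do.
pipe = DiffusionPipeline.from_pretrained("some-org/some-flow-match-model", torch_dtype=torch.bfloat16)

try:
    # Assumed class name, taken from the commit message above.
    from diffusers import FlowMatchLCMScheduler
    pipe.scheduler = FlowMatchLCMScheduler.from_config(pipe.scheduler.config)
except (ImportError, AttributeError):
    pass  # keep the pipeline's default scheduler if the class is named differently

pipe.to("cuda")
# Few-step generation is the point of an LCM-style scheduler.
image = pipe("a watercolor fox in a forest", num_inference_steps=4).images[0]
```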
-
- 09 Apr, 2025 1 commit
Dhruv Nair authored
* update * update * update * update
-
- 02 Apr, 2025 1 commit
hlky authored
-
- 21 Mar, 2025 1 commit
YiYi Xu authored
* add sana-sprint
---------
Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
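A hedged usage sketch for SANA-Sprint; both the pipeline class name and the checkpoint id are inferred from the commit and the public release, so treat them as assumptions.

```python
import torch
from diffusers import SanaSprintPipeline  # assumed class name

pipe = SanaSprintPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",  # assumed repo id
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# SANA-Sprint targets few-step text-to-image generation.
image = pipe(prompt="a tiny astronaut hatching from an egg on the moon", num_inference_steps=2).images[0]
image.save("sana_sprint.png")
```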
-
- 18 Mar, 2025 1 commit
Aryan authored
* update
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
-
- 15 Feb, 2025 1 commit
Yuxuan Zhang authored
* init
* encode with glm
* draft schedule
* feat(scheduler): Add CogView scheduler implementation
* feat(embeddings): add CogView 2D rotary positional embedding
* 1
* Update pipeline_cogview4.py
* fix the timestep init and sigma
* update latent
* draft patch (not working)
* fix
* [WIP][cogview4]: implement initial CogView4 pipeline
  Implement the basic CogView4 pipeline structure with the following changes:
  - Add CogView4 pipeline implementation
  - Implement DDIM scheduler for CogView4
  - Add CogView3Plus transformer architecture
  - Update embedding models
  Current limitations:
  - CFG implementation uses padding for sequence length alignment
  - Need to verify transformer inference alignment with Megatron
  TODO:
  - Consider separate forward passes for condition/uncondition instead of padding approach
* [WIP][cogview4][refactor]: Split condition/uncondition forward pass in CogView4 pipeline
  Split the forward pass for conditional and unconditional predictions in the CogView4 pipeline to match the original implementation. The noise prediction is now done separately for each case before combining them for guidance. However, the results still need improvement. This is a work in progress as the generated images are not yet matching expected quality.
* use with -2 hidden state
* remove text_projector
* 1
* [WIP] Add tensor-reload to align input from transformer block
* [WIP] for older glm
* use with cogview4 transformers forward twice of u and uc
* Update convert_cogview4_to_diffusers.py
* remove this
* use main example
* change back
* reset
* setback
* back
* back 4
* Fix qkv conversion logic for CogView4 to Diffusers format
* back5
* revert to sat to cogview4 version
* update a new convert from megatron
* [WIP][cogview4]: implement CogView4 attention processor
  Add CogView4AttnProcessor class for implementing scaled dot-product attention with rotary embeddings for the CogVideoX model. This processor concatenates encoder and hidden states, applies QKV projections and RoPE, but does not include spatial normalization.
  TODO:
  - Fix incorrect QKV projection weights
  - Resolve ~25% error in RoPE implementation compared to Megatron
* [cogview4] implement CogView4 transformer block
  Implement CogView4 transformer block following the Megatron architecture:
  - Add multi-modulate and multi-gate mechanisms for adaptive layer normalization
  - Implement dual-stream attention with encoder-decoder structure
  - Add feed-forward network with GELU activation
  - Support rotary position embeddings for image tokens
  The implementation follows the original CogView4 architecture while adapting it to work within the diffusers framework.
* with new attn
* [bugfix] fix dimension mismatch in CogView4 attention
* [cogview4][WIP]: update final normalization in CogView4 transformer
  Refactored the final normalization layer in CogView4 transformer to use separate layernorm and AdaLN operations instead of combined AdaLayerNormContinuous. This matches the original implementation but needs validation against the reference implementation.
* 1
* put back
* Update transformer_cogview4.py
* change time_shift
* Update pipeline_cogview4.py
* change timesteps
* fix
* change text_encoder_id
* [cogview4][rope] align RoPE implementation with Megatron
  - Implement apply_rope method in attention processor to match Megatron's implementation
  - Update position embeddings to ensure compatibility with Megatron-style rotary embeddings
  - Ensure consistent rotary position encoding across attention layers
  This change improves compatibility with Megatron-based models and provides better alignment with the original implementation's positional encoding approach.
* [cogview4][bugfix] apply silu activation to time embeddings in CogView4
  Applied silu activation to time embeddings before splitting into conditional and unconditional parts in CogView4Transformer2DModel. This matches the original implementation and helps ensure correct time conditioning behavior.
* [cogview4][chore] clean up pipeline code
  - Remove commented out code and debug statements
  - Remove unused retrieve_timesteps function
  - Clean up code formatting and documentation
  This commit focuses on code cleanup in the CogView4 pipeline implementation, removing unnecessary commented code and improving readability without changing functionality.
* [cogview4][scheduler] Implement CogView4 scheduler and pipeline
* now it works
* add timestep
* batch
* change convert script
* refactor pt. 1; make style
* refactor pt. 2
* refactor pt. 3
* add tests
* make fix-copies
* update toctree.yml
* use flow match scheduler instead of custom
* remove scheduling_cogview.py
* add tiktoken to test dependencies
* Update src/diffusers/models/embeddings.py
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
* apply suggestions from review
* use diffusers apply_rotary_emb
* update flow match scheduler to accept timesteps
* fix comment
* apply review suggestions
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
---------
Co-authored-by: 三洋三洋 <1258009915@qq.com>
Co-authored-by: OleehyO <leehy0357@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
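A hedged usage sketch for the CogView4 pipeline added here; the checkpoint id and generation settings are assumptions based on the public CogView4 release.

```python
import torch
from diffusers import CogView4Pipeline

pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)  # assumed repo id
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="A detailed oil painting of a lighthouse at dusk",
    guidance_scale=3.5,
    num_inference_steps=50,
    width=1024,
    height=1024,
).images[0]
image.save("cogview4.png")
```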
-
- 13 Feb, 2025 1 commit
Aryan authored
update
-
- 12 Feb, 2025 1 commit
hlky authored
Fix `use_lu_lambdas` and `use_karras_sigmas` with `beta_schedule=squaredcos_cap_v2` in `DPMSolverMultistepScheduler` (#10740)
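A minimal sketch of the configuration this fix targets, i.e. Karras sigmas or Lu lambdas combined with the squaredcos_cap_v2 beta schedule; the flags used are the scheduler's documented config options.

```python
from diffusers import DPMSolverMultistepScheduler

# Karras sigmas together with the squaredcos_cap_v2 beta schedule
scheduler = DPMSolverMultistepScheduler(
    beta_schedule="squaredcos_cap_v2",
    use_karras_sigmas=True,
)
scheduler.set_timesteps(num_inference_steps=25)

# Lu lambdas are an alternative sigma spacing covered by the same fix
scheduler_lu = DPMSolverMultistepScheduler(
    beta_schedule="squaredcos_cap_v2",
    use_lu_lambdas=True,
)
scheduler_lu.set_timesteps(num_inference_steps=25)
```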
-
- 07 Feb, 2025 1 commit
hlky authored
-
- 27 Jan, 2025 1 commit
Giuseppe Catalano authored
Co-authored-by: Giuseppe Catalano <giuseppelorenzo.catalano@unito.it>
-
- 26 Jan, 2025 1 commit
Jacob Helwig authored
Sigmoid scheduler in scheduling_ddpm.py docs
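For reference, the sigmoid beta schedule described in those docs is selected through the scheduler config; a small sketch:

```python
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(beta_schedule="sigmoid")
print(scheduler.betas[:5])  # betas follow a sigmoid ramp between beta_start and beta_end
```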
-
- 16 Jan, 2025 1 commit
hlky authored
* use np.int32 in scheduling * test_add_noise_device * -np.int32, fixes
-
- 11 Jan, 2025 1 commit
andreabosisio authored
Correct a typo in the table number of a referenced paper in scheduling_ddim_inverse.py: the referenced table is changed from Table 1 to Table 2 in a comment in the set_timesteps() method of the DDIMInverseScheduler class, consistent with the description of the 'timestep_spacing' attribute in its __init__ method.
-
- 09 Jan, 2025 1 commit
Steven Liu authored
* fix docstrings * add
-
- 18 Dec, 2024 1 commit
hlky authored
-
- 17 Dec, 2024 1 commit
hlky authored
-
- 16 Dec, 2024 2 commits
- 15 Dec, 2024 1 commit
Junsong Chen authored
[Sana] Add Sana, including `SanaPipeline`, `SanaPAGPipeline`, `LinearAttentionProcessor`, `Flow-based DPM-solver` and so on. (#9982)
* first add a script for DC-AE;
* DC-AE init
* replace triton with custom implementation
* 1. rename file and remove un-used codes;
* no longer rely on omegaconf and dataclass
* replace custom activation with diffusers activation
* remove dc_ae attention in attention_processor.py
* inherit from ModelMixin
* inherit from ConfigMixin
* dc-ae reduce to one file
* update downsample and upsample
* clean code
* support DecoderOutput
* remove get_same_padding and val2tuple
* remove autocast and some assert
* update ResBlock
* remove contents within super().__init__
* Update src/diffusers/models/autoencoders/dc_ae.py
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
* remove opsequential
* update other blocks to support the removal of build_norm
* remove build encoder/decoder project in/out
* remove inheritance of RMSNorm2d from LayerNorm
* remove reset_parameters for RMSNorm2d
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
* remove device and dtype in RMSNorm2d __init__
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/dc_ae.py
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/dc_ae.py
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/dc_ae.py
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
* remove op_list & build_block
* remove build_stage_main
* change file name to autoencoder_dc
* move LiteMLA to attention.py
* align with other vae decode output;
* add DC-AE into init files;
* update
* make quality && make style;
* quick push before dgx disappears again
* update
* make style
* update
* update
* fix
* refactor
* refactor
* refactor
* update
* possibly change to nn.Linear
* refactor
* make fix-copies
* replace vae with ae
* replace get_block_from_block_type to get_block
* replace downsample_block_type from Conv to conv for consistency
* add scaling factors
* incorporate changes for all checkpoints
* make style
* move mla to attention processor file; split qkv conv to linears
* refactor
* add tests
* from original file loader
* add docs
* add standard autoencoder methods
* combine attention processor
* fix tests
* update
* minor fix
* minor fix
* minor fix & in/out shortcut rename
* minor fix
* make style
* fix paper link
* update docs
* update single file loading
* make style
* remove single file loading support; todo for DN6
* Apply suggestions from code review
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* add abstract
* 1. add DCAE into diffusers; 2. make style and make quality;
* add DCAE_HF into diffusers;
* bug fixed;
* add SanaPipeline, SanaTransformer2D into diffusers;
* add sanaLinearAttnProcessor2_0;
* first update for SanaTransformer;
* first update for SanaPipeline;
* first successful run of SanaPipeline;
* model output finally matches the original model with the same input;
* code update;
* code update;
* add a flow dpm-solver script
* 🎉 [important update] 1. Integrate flow-dpm-solver into diffusers; 2. finally run successfully on both `FlowMatchEulerDiscreteScheduler` and `FlowDPMSolverMultistepScheduler`;
* 🎉🔧 [important update & fix huge bugs!!] 1. add SanaPAGPipeline & several related Sana linear attention operators; 2. `SanaTransformer2DModel` now supports multi-resolution input; 3. fix the multi-scale HW bugs in SanaPipeline and SanaPAGPipeline; 4. fix the flow-dpm-solver set_timestep() init `model_output` and `lower_order_nums` bugs;
* remove prints;
* add convert sana official checkpoint to diffusers format Safetensor.
* Update src/diffusers/models/transformers/sana_transformer_2d.py
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update src/diffusers/models/transformers/sana_transformer_2d.py
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update src/diffusers/models/transformers/sana_transformer_2d.py
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update src/diffusers/pipelines/pag/pipeline_pag_sana.py
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update src/diffusers/models/transformers/sana_transformer_2d.py
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update src/diffusers/models/transformers/sana_transformer_2d.py
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update src/diffusers/pipelines/sana/pipeline_sana.py
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update src/diffusers/pipelines/sana/pipeline_sana.py
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* update Sana for DC-AE's recent commit;
* make style && make quality
* Add StableDiffusion3PAGImg2Img Pipeline + Fix SD3 Unconditional PAG (#9932)
* fix progress bar updates in SD 1.5 PAG Img2Img pipeline
  ---------
  Co-authored-by: Vinh H. Pham <phamvinh257@gmail.com>
  Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* allow the vae to be None in `__init__` of `SanaPipeline`
* Update src/diffusers/models/transformers/sana_transformer_2d.py
  Co-authored-by: hlky <hlky@hlky.ac>
* change the ae related code due to the latest update of DCAE branch;
* change the ae related code due to the latest update of DCAE branch;
* 1. change code based on AutoencoderDC; 2. fix the bug of new GLUMBConv; 3. run success;
* update to resolve review conversations.
* 1. fix bugs and run convert script successfully; 2. Downloading ckpt from hub automatically;
* make style && make quality;
* 1. remove unused parameters in init; 2. code update;
* remove test file
* refactor; add docs; add tests; update conversion script
* make style
* make fix-copies
* refactor
* update pipelines
* pag tests and refactor
* remove sana pag conversion script
* handle weight casting in conversion script
* update conversion script
* add a processor
* 1. add bf16 pth file path; 2. add complex human instruct in pipeline;
* fix fast tests
* change gemma-2-2b-it ckpt to a non-gated repo;
* fix the pth path bug in conversion script;
* change grad ckpt to original; make style
* fix the complex_human_instruct bug and typo;
* remove dpmsolver flow scheduler
* apply review suggestions
* change the default scheduler from `FlowMatchEulerDiscreteScheduler` to `DPMSolverMultistepScheduler` with flow matching.
* fix the tokenizer.padding_side='right' bug;
* update docs
* make fix-copies
* fix imports
* fix docs
* add integration test
* update docs
* update examples
* fix convert_model_output in schedulers
* fix failing tests
---------
Co-authored-by: Junyu Chen <chenjydl2003@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: chenjy2003 <70215701+chenjy2003@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
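A hedged usage sketch for the SanaPipeline added here; the checkpoint id is an assumption based on the released diffusers-format weights.

```python
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",  # assumed repo id
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = pipe(
    prompt="a cyberpunk cat holding a neon sign that says Sana",
    num_inference_steps=20,
    guidance_scale=4.5,
).images[0]
image.save("sana.png")
```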
-
- 12 Dec, 2024 1 commit
Aryan authored
* transformer
* make style & make fix-copies
* transformer
* add transformer tests
* 80% vae
* make style
* make fix-copies
* fix
* undo cogvideox changes
* update
* update
* match vae
* add docs
* t2v pipeline working; scheduler needs to be checked
* docs
* add pipeline test
* update
* update
* make fix-copies
* Apply suggestions from code review
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* update
* copy t2v to i2v pipeline
* update
* apply review suggestions
* update
* make style
* remove framewise encoding/decoding
* pack/unpack latents
* image2video
* update
* make fix-copies
* update
* update
* rope scale fix
* debug layerwise code
* remove debug
* Apply suggestions from code review
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
* propagate precision changes to i2v pipeline
* remove downcast
* address review comments
* fix comment
* address review comments
* [Single File] LTX support for loading original weights (#10135)
* from original file mixin for ltx
* undo config mapping fn changes
* update
* add single file to pipelines
* update docs
* Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
* Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
* rename classes based on ltx review
* point to original repository for inference
* make style
* resolve conflicts correctly
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
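A hedged usage sketch for the new LTX text-to-video pipeline; the checkpoint id and generation settings are assumptions based on the public LTX-Video release.

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)  # assumed repo id
pipe.to("cuda")

video = pipe(
    prompt="A woman walking through a rainy neon-lit street, cinematic",
    width=704,
    height=480,
    num_frames=81,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "ltx.mp4", fps=24)
```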
-
- 03 Dec, 2024 3 commits
Anand Kumar authored
[Bug fix] "previous_timestep()" in DDPM scheduling compatible with "trailing" and "linspace" options (#9384) * Update scheduling_ddpm.py * fix copies --------- Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
hlky <hlky@hlky.ac>
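A small sketch of the spacings covered by this fix; `previous_timestep()` is used internally by `step()`, so the check is simply that timesteps are produced consistently for both options.

```python
from diffusers import DDPMScheduler

for spacing in ("trailing", "linspace"):
    scheduler = DDPMScheduler(timestep_spacing=spacing)
    scheduler.set_timesteps(num_inference_steps=10)
    print(spacing, scheduler.timesteps[:3])
```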
-
StAlKeR7779 authored
* Fix wrong output on 3n-1 steps count
* Add sde handling to 3 order
* make
* copies
---------
Co-authored-by: hlky <hlky@hlky.ac>
-
hlky authored
-
- 28 Nov, 2024 1 commit
hlky authored
Add beta, exponential and karras sigmas to FlowMatchEuler
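A minimal sketch of the new sigma schedules; the flag names follow the convention used on the other discrete schedulers, so treat the exact names as an assumption.

```python
from diffusers import FlowMatchEulerDiscreteScheduler

karras = FlowMatchEulerDiscreteScheduler(use_karras_sigmas=True)
exponential = FlowMatchEulerDiscreteScheduler(use_exponential_sigmas=True)
beta = FlowMatchEulerDiscreteScheduler(use_beta_sigmas=True)  # requires scipy

karras.set_timesteps(num_inference_steps=20)
print(karras.sigmas)
```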
-
- 20 Nov, 2024 1 commit
hlky authored
* Fix beta and exponential sigmas + add tests
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 05 Nov, 2024 1 commit
Aryan authored
* update
* update
* update transformer
* make style
* fix
* add conversion script
* update
* fix
* update
* fix
* update
* fixes
* make style
* update
* update
* update
* init
* update
* update
* add
* up
* up
* up
* update
* mochi transformer
* remove original implementation
* make style
* update inits
* update conversion script
* docs
* Update src/diffusers/pipelines/mochi/pipeline_mochi.py
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Update src/diffusers/pipelines/mochi/pipeline_mochi.py
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* fix docs
* pipeline fixes
* make style
* invert sigmas in scheduler; fix pipeline
* fix pipeline num_frames
* flip proj and gate in swiglu
* make style
* fix
* make style
* fix tests
* latent mean and std fix
* update
* cherry-pick 1069d210e1b9e84a366cdc7a13965626ea258178
* remove additional sigma already handled by flow match scheduler
* fix
* remove hardcoded value
* replace conv1x1 with linear
* Update src/diffusers/pipelines/mochi/pipeline_mochi.py
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* framewise decoding and conv_cache
* make style
* Apply suggestions from code review
* mochi vae encoder changes
* rebase correctly
* Update scripts/convert_mochi_to_diffusers.py
* fix tests
* fixes
* make style
* update
* make style
* update
* add framewise and tiled encoding
* make style
* make original vae implementation behaviour the default; note: framewise encoding does not work
* remove framewise encoding implementation due to presence of attn layers
* fight test 1
* fight test 2
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
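A hedged usage sketch for the new Mochi pipeline; the checkpoint id and generation settings are assumptions based on the public genmo release.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16)  # assumed repo id
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()  # tiled decoding keeps VAE memory manageable

video = pipe(
    prompt="A corgi running along a beach at sunset",
    num_frames=85,
    num_inference_steps=64,
).frames[0]
export_to_video(video, "mochi.mp4", fps=30)
```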
-
- 21 Oct, 2024 1 commit
YiYi Xu authored
fix
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 15 Oct, 2024 3 commits
hlky authored
* Slight performance improvement to Euler
* Slight performance improvement to EDMEuler
* Slight performance improvement to FlowMatchHeun
* Slight performance improvement to KDPM2Ancestral
* Update KDPM2AncestralDiscreteSchedulerTest
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
hlky authored
Refactor SchedulerOutput and add pred_original_sample in `DPMSolverSDE`, `Heun`, `KDPM2Ancestral` and `KDPM2` (#9650)
Refactor SchedulerOutput and add pred_original_sample
Co-authored-by: YiYi Xu <yixu310@gmail.com>
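A small sketch of what the refactor exposes: `step()` on these schedulers now also returns `pred_original_sample` (the denoised estimate) alongside `prev_sample`; the random tensors below simply stand in for a real model prediction.

```python
import torch
from diffusers import HeunDiscreteScheduler

scheduler = HeunDiscreteScheduler()
scheduler.set_timesteps(num_inference_steps=25)

sample = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma
model_output = torch.randn_like(sample)  # stand-in for a UNet epsilon prediction

out = scheduler.step(model_output, scheduler.timesteps[0], sample)
print(out.prev_sample.shape, out.pred_original_sample.shape)
```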
-
hlky authored
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 30 Sep, 2024 1 commit
hlky authored
-
- 25 Sep, 2024 2 commits
- 23 Sep, 2024 1 commit
hlky authored
* exponential sigmas
* Apply suggestions from code review
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
* make style
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
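A quick sketch of the exponential sigma schedule added here, shown on EulerDiscreteScheduler as an example; treat the flag name as following the same convention as `use_karras_sigmas`.

```python
from diffusers import EulerDiscreteScheduler

scheduler = EulerDiscreteScheduler(use_exponential_sigmas=True)
scheduler.set_timesteps(num_inference_steps=30)
print(scheduler.sigmas[:5])
```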
-
- 11 Sep, 2024 1 commit
dianyo authored
migrate the BrownianTree to BrownianInterval
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 07 Aug, 2024 1 commit
zR authored
* add CogVideoX
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
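A hedged usage sketch for the new CogVideoX pipeline; the checkpoint id and settings follow the public examples and may need adjusting.

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)  # assumed repo id
pipe.enable_model_cpu_offload()

video = pipe(
    prompt="A panda playing guitar in a bamboo forest",
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6.0,
).frames[0]
export_to_video(video, "cogvideox.mp4", fps=8)
```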
-
- 01 Aug, 2024 1 commit
Sayak Paul authored
add flux!
Signed-off-by: Adrien <adrien@huggingface.co>
Co-authored-by: Adrien <adrien.69740@gmail.com>
Co-authored-by: Anatoly Belikov <abelikov@singularitynet.io>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
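A hedged usage sketch for the new Flux pipeline; the checkpoint id is assumed to be the public FLUX.1 schnell release.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)  # assumed repo id
pipe.enable_model_cpu_offload()

image = pipe(
    "A cat holding a sign that says hello world",
    guidance_scale=0.0,          # the distilled schnell variant ignores CFG
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
image.save("flux.png")
```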
-
- 30 Jul, 2024 1 commit
Yoach Lacombe authored
* WIP modeling code and pipeline
* add custom attention processor + custom activation + add to init
* correct ProjectionModel forward
* add stable audio to __init__
* add autoencoder and update pipeline and modeling code
* add half Rope
* add partial rotary v2
* add temporary modifications to scheduler
* add EDM DPM Solver
* remove TODOs
* clean GLU
* remove att.group_norm to attn processor
* revert back src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
* refactor GLU -> SwiGLU
* remove redundant args
* add channel multiples in autoencoder docstrings
* changes in docstrings and copyright headers
* clean pipeline
* further cleaning
* remove peft and lora and fromoriginalmodel
* Delete src/diffusers/pipelines/stable_audio/diffusers.code-workspace
* make style
* dummy models
* fix copied from
* add fast oobleck tests
* add brownian tree
* oobleck autoencoder slow tests
* remove TODO
* fast stable audio pipeline tests
* add slow tests
* make style
* add first version of docs
* wrap is_torchsde_available to the scheduler
* fix slow test
* test with input waveform
* add input waveform
* remove some todos
* create stableaudio gaussian projection + make style
* add pipeline to toctree
* fix copied from
* make quality
* refactor timestep_features->time_proj
* refactor joint_attention_kwargs->cross_attention_kwargs
* remove forward_chunk
* move StableAudioDitModel to transformers folder
* correct convert + remove partial rotary embed
* apply suggestions from yiyixuxu -> removing attn.kv_heads
* remove temb
* remove cross_attention_kwargs
* further removal of cross_attention_kwargs
* remove text encoder autocast to fp16
* continue removing autocast
* make style
* refactor how text and audio are embedded
* add paper
* update example code
* make style
* unify projection model forward + fix device placement
* make style
* remove fuse qkv
* apply suggestions from review
* Update src/diffusers/pipelines/stable_audio/pipeline_stable_audio.py
  Co-authored-by: YiYi Xu <yixu310@gmail.com>
* make style
* smaller models in fast tests
* pass sequential offloading fast tests
* add docs for vae and autoencoder
* make style and update example
* remove useless import
* add cosine scheduler
* dummy classes
* cosine scheduler docs
* better description of scheduler
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
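A hedged usage sketch for the StableAudioPipeline added here; the checkpoint id and call arguments are assumptions based on the public stable-audio-open release.

```python
import soundfile as sf
import torch
from diffusers import StableAudioPipeline

pipe = StableAudioPipeline.from_pretrained("stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16)  # assumed repo id
pipe.to("cuda")

audio = pipe(
    prompt="A hammer hitting a wooden surface",
    num_inference_steps=200,
    audio_end_in_s=10.0,
).audios[0]
sf.write("stable_audio.wav", audio.T.float().cpu().numpy(), pipe.vae.sampling_rate)
```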
-
- 20 Jul, 2024 1 commit
Pierre Chapuis authored
-