- 18 Dec, 2024 1 commit
-
-
hlky authored
-
- 16 Dec, 2024 1 commit
-
-
hlky authored
use_flow_sigmas copy
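For context, `use_flow_sigmas` lets the standard DPM-Solver schedulers run on a flow-matching sigma schedule. A minimal sketch, assuming the flag and `prediction_type="flow_prediction"` are exposed as scheduler config options; the checkpoint id is only a placeholder:
```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# Placeholder checkpoint: any flow-matching pipeline can stand in here.
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)

# Assumed config flags: use_flow_sigmas switches DPM-Solver++ onto a flow-matching
# sigma schedule; prediction_type="flow_prediction" tells it the model predicts flow.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_flow_sigmas=True, prediction_type="flow_prediction"
)
```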
-
- 15 Dec, 2024 1 commit
-
-
Junsong Chen authored
[Sana] Add Sana, including `SanaPipeline`, `SanaPAGPipeline`, `LinearAttentionProcessor`, `Flow-based DPM-solver` and so on. (#9982)
* first add a script for DC-AE * DC-AE init * replace triton with custom implementation * 1. rename file and remove unused code * no longer rely on omegaconf and dataclass * replace custom activation with diffusers activation * remove dc_ae attention in attention_processor.py * inherit from ModelMixin * inherit from ConfigMixin * dc-ae reduce to one file * update downsample and upsample * clean code * support DecoderOutput * remove get_same_padding and val2tuple * remove autocast and some assert * update ResBlock * remove contents within super().__init__ * Update src/diffusers/models/autoencoders/dc_ae.py Co-authored-by: YiYi Xu <yixu310@gmail.com> * remove opsequential * update other blocks to support the removal of build_norm * remove build encoder/decoder project in/out * remove inheritance of RMSNorm2d from LayerNorm * remove reset_parameters for RMSNorm2d Co-authored-by: YiYi Xu <yixu310@gmail.com> * remove device and dtype in RMSNorm2d __init__ Co-authored-by: YiYi Xu <yixu310@gmail.com> * Update src/diffusers/models/autoencoders/dc_ae.py Co-authored-by: YiYi Xu <yixu310@gmail.com> * Update src/diffusers/models/autoencoders/dc_ae.py Co-authored-by: YiYi Xu <yixu310@gmail.com> * Update src/diffusers/models/autoencoders/dc_ae.py Co-authored-by: YiYi Xu <yixu310@gmail.com> * remove op_list & build_block * remove build_stage_main * change file name to autoencoder_dc * move LiteMLA to attention.py * align with other vae decode output * add DC-AE into init files * update * make quality && make style * quick push before dgx disappears again * update * make style * update * update * fix * refactor * refactor * refactor * update * possibly change to nn.Linear * refactor * make fix-copies * replace vae with ae * replace get_block_from_block_type to get_block * replace downsample_block_type from Conv to conv for consistency * add scaling factors * incorporate changes for all checkpoints * make style * move mla to attention processor file; split qkv conv to linears * refactor * add tests * from original file loader * add docs * add standard autoencoder methods * combine attention processor * fix tests * update * minor fix * minor fix * minor fix & in/out shortcut rename * minor fix * make style * fix paper link * update docs * update single file loading * make style * remove single file loading support; todo for DN6 * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* add abstract * 1. add DCAE into diffusers; 2. make style and make quality * add DCAE_HF into diffusers * bug fixed * add SanaPipeline, SanaTransformer2D into diffusers * add SanaLinearAttnProcessor2_0 * first update for SanaTransformer * first update for SanaPipeline * first successful run of SanaPipeline * model output finally matches the original model given the same input * code update * code update * add a flow dpm-solver script * 🎉 [important update] 1. integrate flow-dpm-solver into diffusers; 2. finally runs successfully on both `FlowMatchEulerDiscreteScheduler` and `FlowDPMSolverMultistepScheduler` * 🎉🔧 [important update & fix huge bugs!!] 1. add SanaPAGPipeline & several related Sana linear attention operators; 2. `SanaTransformer2DModel` now supports multi-resolution input; 3. fix the multi-scale HW bugs in SanaPipeline and SanaPAGPipeline; 4. fix the flow-dpm-solver set_timestep() init `model_output` and `lower_order_nums` bugs * remove prints * add a script to convert the official Sana checkpoint to diffusers-format safetensors * Update src/diffusers/models/transformers/sana_transformer_2d.py Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/transformers/sana_transformer_2d.py Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/transformers/sana_transformer_2d.py Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/pipelines/pag/pipeline_pag_sana.py Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/transformers/sana_transformer_2d.py Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/models/transformers/sana_transformer_2d.py Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/pipelines/sana/pipeline_sana.py Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update src/diffusers/pipelines/sana/pipeline_sana.py Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * update Sana for DC-AE's recent commit * make style && make quality * Add StableDiffusion3PAGImg2Img Pipeline + Fix SD3 Unconditional PAG (#9932) * fix progress bar updates in SD 1.5 PAG Img2Img pipeline --------- Co-authored-by: Vinh H. Pham <phamvinh257@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * allow the vae to be None in `__init__` of `SanaPipeline` * Update src/diffusers/models/transformers/sana_transformer_2d.py Co-authored-by: hlky <hlky@hlky.ac>
* change the ae-related code for the latest update of the DCAE branch * change the ae-related code for the latest update of the DCAE branch * 1. change code based on AutoencoderDC; 2. fix the bug in the new GLUMBConv; 3. runs successfully * update to resolve review conversations * 1. fix bugs and run the conversion script successfully; 2. download the checkpoint from the Hub automatically * make style && make quality * 1. remove unused parameters in init; 2. code update * remove test file * refactor; add docs; add tests; update conversion script * make style * make fix-copies * refactor * update pipelines * pag tests and refactor * remove sana pag conversion script * handle weight casting in conversion script * update conversion script * add a processor * 1. add bf16 pth file path; 2. add complex human instruct in pipeline * fix fast tests * change gemma-2-2b-it ckpt to a non-gated repo * fix the pth path bug in the conversion script * change grad ckpt to original; make style * fix the complex_human_instruct bug and typo * remove dpmsolver flow scheduler * apply review suggestions * change the default `FlowMatchEulerDiscreteScheduler` to `DPMSolverMultistepScheduler` with flow matching * fix the tokenizer.padding_side='right' bug * update docs * make fix-copies * fix imports * fix docs * add integration test * update docs * update examples * fix convert_model_output in schedulers * fix failing tests
--------- Co-authored-by: Junyu Chen <chenjydl2003@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: chenjy2003 <70215701+chenjy2003@users.noreply.github.com> Co-authored-by: Aryan <aryan@huggingface.co> Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by: hlky <hlky@hlky.ac>
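For reference, a minimal usage sketch of the pipeline added in the commit above; the checkpoint id and generation settings are assumptions, not taken from the commit:
```python
import torch
from diffusers import SanaPipeline

# Assumed diffusers-format Sana checkpoint id; substitute the converted checkpoint you actually use.
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="a cyberpunk cat holding a neon sign that says 'Sana'",
    num_inference_steps=20,
    guidance_scale=4.5,
).images[0]
image.save("sana.png")
```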
-
- 20 Nov, 2024 1 commit
-
-
hlky authored
* Fix beta and exponential sigmas + add tests --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
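A rough sketch of the sigma-schedule flags touched by this fix, assuming `use_exponential_sigmas` / `use_beta_sigmas` are exposed as scheduler config options:
```python
from diffusers import EulerDiscreteScheduler

# Assumed config flags: use_exponential_sigmas / use_beta_sigmas select alternative sigma schedules.
scheduler = EulerDiscreteScheduler(use_exponential_sigmas=True)
scheduler.set_timesteps(num_inference_steps=10)
print(scheduler.sigmas)  # exponentially spaced sigmas, with a trailing 0.0
```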
-
- 30 Sep, 2024 1 commit
-
-
hlky authored
-
- 25 Sep, 2024 1 commit
-
-
hlky authored
-
- 24 May, 2024 1 commit
-
-
Tolga Cangöz authored
Fix grammatical error
-
- 10 May, 2024 1 commit
-
-
Mark Van Aken authored
* find & replace all FloatTensors to Tensor * apply formatting * Update torch.FloatTensor to torch.Tensor in the remaining files * formatting * Fix the rest of the places where FloatTensor is used as well as in documentation * formatting * Update new file from FloatTensor to Tensor
-
- 02 Apr, 2024 1 commit
-
-
Sayak Paul authored
* add: utility to format our docs too 📜 * debugging saga * fix: message * checking * should be fixed. * revert pipeline_fixture * remove empty line * make style * fix: setup.py * style.
-
- 18 Mar, 2024 1 commit
-
-
M. Tolga Cangöz authored
* Fix PyTorch's convention for inplace functions * Fix import structure in __init__.py and update config loading logic in test_config.py * Update configuration access * Fix typos * Trim trailing white spaces * Fix typo in logger name * Revert "Fix PyTorch's convention for inplace functions" This reverts commit f65dc4afcb57ceb43d5d06389229d47bafb10d2d. * Fix typo in step_index property description * Revert "Update configuration access" This reverts commit 8d44e870b8c1ad08802e3e904c34baeca1b598f8. * Revert "Fix import structure in __init__.py and update config loading logic in test_config.py" This reverts commit 2ad5e8bca25aede3b912da22bd57285b598fe171. * Fix typos * Fix typos * Fix typos * Fix a typo: tranform -> transform
-
- 14 Mar, 2024 1 commit
-
-
Beinsezii authored
* Change step_offset scheduler docstrings * Mention it may be needed by some models * More docstrings. These ones failed the literal search-and-replace because I performed it case-sensitively, which is fun. --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 08 Feb, 2024 1 commit
-
-
Sayak Paul authored
change to 2024
-
- 01 Feb, 2024 1 commit
-
-
YiYi Xu authored
-
- 30 Jan, 2024 1 commit
-
-
Yunxuan Xiao authored
* load cumprod tensor to device Signed-off-by: woshiyyya <xiaoyunxuan1998@gmail.com> * fixing ci Signed-off-by: woshiyyya <xiaoyunxuan1998@gmail.com> * make fix-copies Signed-off-by: woshiyyya <xiaoyunxuan1998@gmail.com> --------- Signed-off-by: woshiyyya <xiaoyunxuan1998@gmail.com>
-
- 26 Jan, 2024 1 commit
-
-
Patrick von Platen authored
-
- 22 Jan, 2024 1 commit
-
-
Junsong Chen authored
* add Sa-Solver --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: scxue <xueshuchen17@mails.ucas.edu.cn> Co-authored-by: jschen <chenjunsong4@h-partners.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: yiyixuxu <yixu310@gmail,com>
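A minimal sketch of the scheduler class introduced here, assuming the standard diffusers scheduler API; the pipeline swap in the comment is illustrative:
```python
from diffusers import SASolverScheduler

# SA-Solver is a stochastic predictor-corrector sampler; it plugs into pipelines
# the same way as the other multistep schedulers.
scheduler = SASolverScheduler()
scheduler.set_timesteps(num_inference_steps=25)
print(scheduler.timesteps[:5])

# In a pipeline: pipe.scheduler = SASolverScheduler.from_config(pipe.scheduler.config)
```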
-
- 15 Dec, 2023 1 commit
-
-
Patrick von Platen authored
* correct * Apply suggestions from code review * make style
-
- 07 Dec, 2023 1 commit
-
-
YiYi Xu authored
* fix * copies --------- Co-authored-by: yiyixuxu <yixu310@gmail,com>
-
- 01 Dec, 2023 1 commit
-
-
YiYi Xu authored
* fix dpm * all schedulers
-
- 29 Nov, 2023 1 commit
-
-
Suraj Patil authored
* begin model * finish blocks * add_embedding * addition_time_embed_dim * use TimestepEmbedding * fix temporal res block * fix time_pos_embed * fix add_embedding * add conversion script * fix model * up * add new resnet blocks * make forward work * return sample in original shape * fix temb shape in TemporalResnetBlock * add spatio temporal transformers * add vae blocks * fix blocks * update * update * fix shapes in AlphaBlender and add time activation in res block * use new blocks * style * fix temb shape * fix SpatioTemporalResBlock * reuse TemporalBasicTransformerBlock * fix TemporalBasicTransformerBlock * use TransformerSpatioTemporalModel * fix TransformerSpatioTemporalModel * fix time_context dim * clean up * make temb optional * add blocks * rename model * update conversion script * remove UNetMidBlockSpatioTemporal * add in init * remove unused arg * remove unused arg * remove more unused args * up * up * check for None * update vae * update up/mid blocks for decoder * begin pipeline * adapt scheduler * add guidance scalings * fix norm eps in temporal transformers * add temporal autoencoder * make pipeline run * fix frame decoding * decode in float32 * decode n frames at a time * pass decoding_t to decode_latents * fix decode_latents * vae encode/decode in fp32 * fix dtype in TransformerSpatioTemporalModel * type image_latents same as image_embeddings * allow using different eps in temporal block for video decoder * fix default values in vae * pass num frames in decode * switch spatial to temporal for mixing in VAE * fix num frames during split decoding * cast alpha to sample dtype * fix attention in MidBlockTemporalDecoder * fix typo * fix guidance_scales dtype * fix missing activation in TemporalDecoder * skip_post_quant_conv * add vae conversion * style * take guidance scale as input * up * allow passing PIL to export_video * accept fps as arg * add pipeline and vae in init * remove hack * use AutoencoderKLTemporalDecoder * don't scale image latents * add unet tests * clean up unet * clean TransformerSpatioTemporalModel * add slow svd test * clean up * make temb optional in Decoder mid block * fix norm eps in TransformerSpatioTemporalModel * clean up temp decoder * clean up * clean up * use c_noise values for timesteps * use math for log * update * fix copies * doc * upcast vae * update forward pass for gradient checkpointing * make added_time_ids a tensor * up * fix upcasting * remove post quant conv * add _resize_with_antialiasing * fix _compute_padding * cleanup model * more cleanup * more cleanup * more cleanup * remove freeu * remove attn slice * small clean * up * up * remove extra step kwargs * remove eta * remove dropout * remove callback * remove merge factor args * clean * clean up * move to dedicated folder * remove attention_head_dim * docstr and small fix * update unet doc strings * rename decoding_t * correct linting * store c_skip and c_out * cleanup * clean TemporalResnetBlock * more cleanup * clean up vae * clean up * begin doc * more cleanup * up * up * doc * Improve * better naming * better naming * better naming * better naming * better naming * better naming * better naming * better naming * Apply suggestions from code review * Default chunk size to None * add example * Better * Apply suggestions from code review * update doc * Update src/diffusers/pipelines/stable_diffusion_video/pipeline_stable_diffusion_video.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * style * Get torch compile working * up * rename * fix doc * add chunking * torch compile * torch compile * add modelling outputs * torch compile * Improve chunking * Apply suggestions from code review * Update docs/source/en/using-diffusers/svd.md * Close diff tag * remove slicing * resnet docstr * add docstr in resnet * rename * Apply suggestions from code review * update tests * Fix output type latents * fix more * fix more * Update docs/source/en/using-diffusers/svd.md * fix more * add pipeline tests * remove unused arg * clean up * make sure get_scaling receives tensors * fix euler scheduler * fix get_scalings * simplify euler for now * remove old test file * use randn_tensor to create noise * fix device for rand tensor * increase expected_max_difference * fix test_inference_batch_single_identical * actually fix test_inference_batch_single_identical * disable test_save_load_float16 * skip test_float16_inference * skip test_inference_batch_single_identical * fix test_xformers_attention_forwardGenerator_pass * Apply suggestions from code review * update StableVideoDiffusionPipelineSlowTests * update image * add diffusers example * fix more
--------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
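For orientation, a minimal image-to-video sketch of the pipeline built in the commit above; the checkpoint id and conditioning image URL are assumptions:
```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Assumed checkpoint id for the released image-to-video model.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = load_image("https://example.com/conditioning_frame.png")  # placeholder conditioning image
frames = pipe(image, decode_chunk_size=8).frames[0]  # decode_chunk_size trades VRAM for speed
export_to_video(frames, "generated.mp4", fps=7)
```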
-
- 20 Nov, 2023 1 commit
-
-
Kashif Rasul authored
* ruff format * not need to use doc-builder's black styling as the doc is styled in ruff * make fix-copies * comment * use run_ruff
-
- 31 Oct, 2023 1 commit
-
-
TimothyAlexisVass authored
-
- 03 Oct, 2023 1 commit
-
-
Patrick von Platen authored
-
- 02 Oct, 2023 2 commits
-
-
Patrick von Platen authored
-
Leng Yue authored
* Update Unipc einsum to support 1D and 3D diffusion. * Add unittest * Update unittest & edge case * Fix unittest * Fix testing_utils.py * Fix unittest file --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 23 Sep, 2023 1 commit
-
-
YiYi Xu authored
* remove to _device() for sigmas * update add_noise to use sigmas --------- Co-authored-by: yiyixuxu <yixu310@gmail,com>
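A minimal sketch of the `add_noise` call path touched here, using the standard scheduler API; the beta values and tensor shapes are illustrative:
```python
import torch
from diffusers import EulerDiscreteScheduler

scheduler = EulerDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
scheduler.set_timesteps(num_inference_steps=30)

clean_latents = torch.randn(1, 4, 64, 64)
noise = torch.randn_like(clean_latents)
timesteps = scheduler.timesteps[:1]  # noise to the first (highest-noise) timestep
noisy_latents = scheduler.add_noise(clean_latents, noise, timesteps)
```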
-
- 19 Sep, 2023 1 commit
-
-
YiYi Xu authored
--------- Co-authored-by: yiyixuxu <yixu310@gmail,com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 16 Aug, 2023 1 commit
-
-
Dirk Morris authored
* Fix unipc karras sigmas exception - fixes huggingface/diffusers#4580 * Add unipc scheduler tests for karras sigmas
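A short sketch of the configuration exercised by the fix and the new tests; values are illustrative:
```python
from diffusers import UniPCMultistepScheduler

# With use_karras_sigmas=True the step sizes follow the Karras et al. noise schedule.
scheduler = UniPCMultistepScheduler(use_karras_sigmas=True)
scheduler.set_timesteps(num_inference_steps=15)
print(scheduler.sigmas)
```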
-
- 09 Aug, 2023 1 commit
-
-
Steven Liu authored
* clean scheduler mixin * up to dpmsolvermultistep * finish cleaning * first draft * fix overview table * apply feedback * update reference code
-
- 05 Jul, 2023 1 commit
-
-
Pedro Cuenca authored
* Add timestep_spacing to DDPM, LMSDiscrete, PNDM. * Remove spurious line. * More easy schedulers. * Add `linspace` to DDIM * Noise sigma for `trailing`. * Add timestep_spacing to DEISMultistepScheduler. Not sure the range is the way it was intended. * Fix: remove line used to debug. * Support timestep_spacing in DPMSolverMultistep, DPMSolverSDE, UniPC * Fix: convert to numpy. * Use scheduler defaults when instantiating from_config, for params not present in the original configuration. This makes it possible to switch pipeline schedulers even if they use different timestep_spacing (or any other param). * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Missing args in DPMSolverMultistep * Test: default args not in config * Style * Fix scheduler name in test * Remove duplicated entries * Add test for solver_type. This test currently fails in main: when switching from DEIS to UniPC, solver_type is "logrho" (the default value from DEIS), which gets translated to "bh1" by UniPC. This is different from the default value for UniPC: "bh2". This is where the translation happens: https://github.com/huggingface/diffusers/blob/36d22d0709dc19776e3016fb3392d0f5578b0ab2/src/diffusers/schedulers/scheduling_unipc_multistep.py#L171 * UniPC: use same default for solver_type. Fixes a bug when switching to UniPC from another scheduler (i.e., DEIS) that uses a different solver type. The solver is now the same as if we had instantiated the scheduler directly. * do not save use default values * fix more * fix all * fix schedulers * fix more * finish for real * finish for real * flaky tests * Update tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py * Default steps_offset to 0. * Add missing docstrings * Apply suggestions from code review --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
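A brief sketch of the option this commit adds; the step count is illustrative:
```python
from diffusers import DDIMScheduler

# "trailing" spacing makes the highest sampled timestep land on 999 exactly;
# "linspace" and "leading" are the other supported values.
scheduler = DDIMScheduler(timestep_spacing="trailing")
scheduler.set_timesteps(num_inference_steps=10)
print(scheduler.timesteps)  # 999, 899, ..., 99

# When switching a pipeline's scheduler, from_config now falls back to the new class's
# defaults for params missing from the stored config, e.g.:
# pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
```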
-
- 03 Jul, 2023 1 commit
-
-
Patrick von Platen authored
* Correct controlnet out of list error * Apply suggestions from code review * correct tests * correct tests * fix * test all * Apply suggestions from code review * test all * test all * Apply suggestions from code review * Apply suggestions from code review * fix more tests * Fix more * Apply suggestions from code review * finish * Apply suggestions from code review * Update src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py * finish
-
- 11 Apr, 2023 1 commit
-
-
Patrick von Platen authored
* [Config] Fix config prints and save, load * Only use potential nn.Modules for dtype and device * Correct vae image processor * make sure in_channels is not accessed directly * make sure in channels is only accessed via config * Make sure schedulers only access config attributes * Make sure to access config in SAG * Fix vae processor and make style * add tests * uP * make style * Fix more naming issues * Final fix with vae config * change more
-
- 10 Apr, 2023 3 commits
-
-
William Berman authored
-
William Berman authored
-
Will Berman authored
dynamic threshold sampling bug fix and docs
-
- 09 Mar, 2023 1 commit
-
-
Patrick von Platen authored
* [Schedulers] Correct config changing * uP * add tests
-
- 07 Mar, 2023 1 commit
-
-
clarencechen authored
* Improve dynamic threshold * Update code * Add dynamic threshold to ddim and ddpm * Encapsulate and leverage code copy mechanism; update style * Clean up DDPM/DDIM constructor arguments * add test * also add to unipc --------- Co-authored-by: Peter Lin <peterlin9863@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
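A minimal sketch of the dynamic-thresholding options involved; the specific values shown are illustrative defaults:
```python
from diffusers import DDPMScheduler

# Imagen-style dynamic thresholding; only meaningful for pixel-space models, not latent-space ones.
scheduler = DDPMScheduler(
    thresholding=True,
    dynamic_thresholding_ratio=0.995,  # clip at the 99.5th percentile of |x0|
    sample_max_value=1.0,
)
```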
-
- 01 Mar, 2023 1 commit
-
-
Patrick von Platen authored
-
- 17 Feb, 2023 1 commit
-
-
Wenliang Zhao authored
* fix typos in the doc * restyle the code
-
- 16 Feb, 2023 1 commit
-
-
Wenliang Zhao authored
* add UniPC scheduler * add the return type to the functions * code quality check * add tests * finish docs --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
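For reference, a minimal sketch of swapping the new scheduler into an existing pipeline; the base-model id and prompt are placeholders:
```python
import torch
from diffusers import DiffusionPipeline, UniPCMultistepScheduler

# Placeholder base model; UniPC is a drop-in multistep scheduler for existing pipelines.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# UniPC typically produces good samples in relatively few steps.
image = pipe("an astronaut riding a horse on the moon", num_inference_steps=20).images[0]
```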
-