- 17 Apr, 2023 1 commit
Patrick von Platen authored
* Better deprecation message and better doc string
* Improve __getattr__
* Apply suggestions from code review
* Fix all the rest, add tests, and remove the old deprecation functions

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
- 14 Apr, 2023 1 commit
Will Berman authored
* Add a custom timesteps test
* Add a descending-order check for custom timesteps
* Docs: timesteps -> custom_timesteps
* Only one of num_inference_steps and timesteps can be passed
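For illustration, a minimal sketch of the new argument, assuming DDPMScheduler is one of the schedulers that accepts it; the timestep values are arbitrary:

```python
from diffusers import DDPMScheduler

scheduler = DDPMScheduler()

# Custom timesteps must be in descending order; passing both
# num_inference_steps and timesteps raises an error.
scheduler.set_timesteps(timesteps=[999, 750, 500, 250, 100, 10, 0])
print(scheduler.timesteps)
```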
- 12 Apr, 2023 1 commit
Nipun Jindal authored
* [2737] Add the Karras sigma schedule to DPMSolverMultistepScheduler
* Add a test
* Fix repo consistency; remove the "Copied from" statement from the set_timesteps method
* Apply suggestions from code review

Co-authored-by: njindal <njindal@adobe.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
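For illustration, a sketch of enabling the new flag; the checkpoint id is an example, not part of the commit:

```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Rebuild the scheduler from the pipeline's config with the new flag;
# sigmas are then re-spaced following Karras et al. (2022).
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
```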
- 11 Apr, 2023 1 commit
Patrick von Platen authored
* [Config] Fix config prints, save, and load
* Only use potential nn.Modules for dtype and device
* Correct the VAE image processor and make sure the VAE config is accessed correctly
* Make sure in_channels is only accessed via the config, never directly
* Make sure schedulers only access config attributes
* Make sure to access the config in SAG
* Add tests and fix the remaining naming issues
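A sketch of the access pattern this enforces; the checkpoint id is a placeholder:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Architecture hyperparameters now live on .config; reading them
# directly off the module (unet.in_channels) is deprecated.
in_channels = unet.config.in_channels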
- 10 Apr, 2023 7 commits
William Berman authored
William Berman authored
William Berman authored
William Berman authored
William Berman authored
William Berman authored
Will Berman authored
dynamic threshold sampling bug fix and docs
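For reference, a minimal sketch of the dynamic thresholding flags on DDIMScheduler; the values shown are the documented defaults:

```python
from diffusers import DDIMScheduler

# At each denoising step the predicted sample is clipped to the
# per-batch quantile given by dynamic_thresholding_ratio and rescaled;
# intended for pixel-space (not latent-space) diffusion models.
scheduler = DDIMScheduler(
    thresholding=True,
    dynamic_thresholding_ratio=0.995,
    sample_max_value=1.0,
)
```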
- 06 Apr, 2023 2 commits
FurryPotato authored
Co-authored-by: wangguan <dizhipeng.dzp@alibaba-inc.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Nipun Jindal authored
* [2905] Add the Karras sigma schedule to the discrete Euler scheduler
* Address review comments

Co-authored-by: njindal <njindal@adobe.com>
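A sketch of the same flag on the Euler scheduler, constructed directly with default config values:

```python
from diffusers import EulerDiscreteScheduler

scheduler = EulerDiscreteScheduler(use_karras_sigmas=True)
scheduler.set_timesteps(num_inference_steps=30)
# With the flag on, scheduler.sigmas follow the Karras spacing
# instead of the linearly interpolated training sigmas.
print(scheduler.sigmas[:5])
```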
- 14 Mar, 2023 1 commit
clarencechen authored
* Add support for different model prediction types in DDIMInverseScheduler; resolve the alpha_prod_t_prev index issue for the final step of inversion
* Fix an old bug introduced when prediction_type is "sample"
* Add support for sample clipping for numerical stability and deprecate the old kwarg
* Detach sample, alphas, and betas; derive the predicted noise from the model output before distribution regularization; style cleanup
* Revert "Log loss for debugging" (reverts commit 76ea9c856f99f4c8eca45a0b1801593bb982584b)
* Add comments and an inversion-equivalence test
* Add expected data for the Pix2PixZero pipeline tests with SD 2
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py
* Remove cruft and add more explanatory comments

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
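A minimal sketch of the now-supported prediction types; the values mirror DDIMScheduler's prediction_type options:

```python
from diffusers import DDIMInverseScheduler

# "epsilon", "sample", and "v_prediction" are now all handled
# consistently during inversion.
inverse_scheduler = DDIMInverseScheduler(prediction_type="v_prediction")
inverse_scheduler.set_timesteps(num_inference_steps=50)
```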
- 10 Mar, 2023 1 commit
Patrick von Platen authored
* [From pretrained] Speed up loading from the cache
* Bigger refactor: factor out a function, deprecate returning the cache folder, and clean up
* Improve and simplify the tests
* Fix the version handling, renames, and doc strings
* Apply suggestions from code review

Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
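A sketch of the behaviour this speeds up; the model id is a placeholder:

```python
from diffusers import DiffusionPipeline

# First call downloads and populates the local cache.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Later calls resolve from the cache; local_files_only=True makes any
# unexpected network access fail loudly instead of silently re-checking.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", local_files_only=True
)
```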
- 09 Mar, 2023 2 commits
Peter Lin authored
Improve the DDIM scheduler

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Patrick von Platen authored
* [Schedulers] Correct config changing
* Add tests
- 07 Mar, 2023 1 commit
clarencechen authored
* Improve dynamic thresholding and encapsulate it, leveraging the code-copy mechanism
* Add dynamic thresholding to DDIM and DDPM, and also to UniPC
* Clean up the DDPM/DDIM constructor arguments
* Add a test

Co-authored-by: Peter Lin <peterlin9863@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 01 Mar, 2023 1 commit
Patrick von Platen authored
- 17 Feb, 2023 2 commits
Patrick von Platen authored
* Add the feature and finish the implementation
* Add tests
* Update docs/source/en/_toctree.yml and clean up the docs
* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Wenliang Zhao authored
* Fix typos in the doc
* Restyle the code
- 16 Feb, 2023 3 commits
Wenliang Zhao authored
* Add the UniPC scheduler
* Add return types to the functions
* Code quality check
* Add tests and finish the docs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
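A sketch of dropping the new scheduler into an existing pipeline; the checkpoint and prompt are placeholders:

```python
from diffusers import DiffusionPipeline, UniPCMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# UniPC is a drop-in multistep scheduler; it usually converges in ~20 steps.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
```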
Suraj Patil authored
reset cur_model_output
Will Berman authored
- 15 Feb, 2023 1 commit
Will Berman authored
- 14 Feb, 2023 1 commit
Will Berman authored
* Add the pipeline variant
* Add docs for when clip_stats_path is specified
* Apply review updates to pipeline_stable_unclip.py and pipeline_stable_unclip_img2img.py
* prepare_latents: # Copied from
* NoiseAugmentor -> ImageNormalizer
* stable_unclip_prior defaults to None
* prepare_prior_extra_step_kwargs; prior denoising scale model input
* {DDIM,DDPM}Scheduler -> KarrasDiffusionSchedulers
* Update docs/source/en/api/pipelines/stable_unclip.mdx

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
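A sketch of the image-variation variant; the checkpoint id and image URL are assumptions, not part of the commit:

```python
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://example.com/input.png")  # placeholder URL
# The pipeline embeds the image with CLIP, noise-augments the embedding,
# and generates variations conditioned on it.
image = pipe(init_image).images[0]
```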
- 08 Feb, 2023 1 commit
Will Berman authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 07 Feb, 2023 2 commits
Patrick von Platen authored
* Run make style and make fix-copies; remove leftovers from flake8
* More fixes and a final fix
YiYi Xu authored
* Modify UNet2DConditionModel:
  - allow skipping the mid_block
  - add a `norm_group_size` argument so that `num_groups` for group norm can be set via `num_channels // norm_group_size`
  - allow the user to set the dimension of the timestep embedding (`time_embed_dim`)
  - make the kernel_size for `conv_in` and `conv_out` configurable
  - add a random Fourier feature layer (`GaussianFourierProjection`) for `time_proj`
  - allow adding the time and class embeddings together before passing them through the projection layer: `time_embedding(t_emb + class_label)`
  - add `attn1_types` and `attn2_types` arguments: with the existing `only_cross_attention` flag set to `True` we get a `BasicTransformerBlock` with two cross-attentions, otherwise a self-attention followed by a cross-attention; the k-upscaler needs blocks with just one cross-attention, or self-attention -> cross-attention, so these arguments let the user specify the attention type at each of the two positions per block (`only_cross_attention` is still kept for easy configuration and is converted to `attn1_type`/`attn2_type` when passed down to the down blocks)
  - make the positions of the downsample and upsample layers configurable
  - add `skip_freq = "block"`: the k-upscaler UNet has only one skip connection per up/down block instead of one per layer as in the stable diffusion UNet
  - if the user passes `attention_mask` to the UNet, prepare the mask and pass a flag to the cross-attention processor to skip the `prepare_attention_mask` step inside the cross-attention block
* Add up/down blocks for the k-upscaler
* Modify the CrossAttention class:
  - make the `dropout` layer in `to_out` optional
  - `use_conv_proj`: use conv instead of linear for all projection layers (`to_q`, `to_k`, `to_v`, `to_out`) whenever possible; when used for cross attention, `to_k` and `to_v` have to stay linear because `encoder_hidden_states` is not 2D
  - `cross_attention_norm`: add an optional layer norm on `encoder_hidden_states`
  - `attention_dropout`: add an optional dropout on the attention scores
* Adapt BasicTransformerBlock:
  - add an AdaGroupNorm layer to condition the attention input on the timestep embedding
  - allow skipping the FeedForward layer between the attentions
  - replace the `only_cross_attention` argument with `attn1_type` and `attn2_type` for more flexible configuration
* Update the timestep embedding: add a new gelu `act_fn` and an optional `act_2`
* Modify ResnetBlock2D:
  - refactor with the AdaGroupNorm class (the timestep scale-shift normalization)
  - add a `mid_channel` argument, allowing the first conv to have a different output dimension from the second conv
  - add an option to use AdaGroupNorm on the input instead of group norm
  - add an optional dropout layer after each conv
  - allow the user to set the bias in `conv_shortcut` (needed for the k-upscaler)
  - add gelu
* Add a conversion script for the k-upscaler UNet and add the pipeline
* Fixes: attention mask, typos, fp16 support, an error in BasicTransformerBlock, a typo in AdaGroupNorm and in the timestep embedding, and making sure the model can be used on GPU
* Attention refactor: remove `attn1_types`/`attn2_types` and `upcast_attention=True` from the UNet config, revert incorrect changes to the up/down samplers, rename `act_2` -> `act_2_fn`, remove `dropout_after_conv` from ResnetBlock2D, simplify KAttentionBlock, and rename KDownsample2d/KUpsample2d -> KDownsample2D/KUpsample2D
* Add fast and slow tests (including fp16) for the latent upscaler pipeline; add a doc string and an API doc page for pipeline_stable_diffusion_latent_upscale; deprecate `attention_mask`
* Clean up embeddings, the resnet, `get_down_block`/`get_up_block`, and the conversion script; add docstrings for the new UNet config and for `time_embedding_norm`
* Add comments about the preconditioning parameters from the k-diffusion paper; `attn1_type`/`attn2_type` -> `add_self_attention`
* Encode the image if it is not latent; remove force-casting the VAE to fp32; update the sliced attention processor for cross attention; remove `num_images_per_prompt` and `prepare_extra_step_kwargs`
* Fix the remaining tests, update the checkpoint, run make style and fix-copies (including modeling_text_unet.py), and apply review suggestions

Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
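A sketch of chaining the new upscaler after a base pipeline; the checkpoint ids are the publicly released ones, used here as examples:

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse"
# Generate in latent space, then 2x-upscale the latents before decoding.
low_res_latents = pipe(prompt, output_type="latent").images
image = upscaler(prompt=prompt, image=low_res_latents, num_inference_steps=20).images[0]
```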
- 05 Feb, 2023 1 commit
psychedelicious authored
Needed to convert `timesteps` to `float32` a bit sooner. Fixes #1537
- 04 Feb, 2023 1 commit
Pedro Cuenca authored
Make `key` optional so default pipelines don't fail.
- 03 Feb, 2023 1 commit
Dudu Moshe authored
scheduling_ddpm: fix the variance for the learned_range variance type. The learned_range path was missing the logs and the exponent required by the theory (see "Improved Denoising Diffusion Probabilistic Models", section 3.1, equation 15: https://arxiv.org/pdf/2102.09672.pdf).
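A sketch of the corrected computation following equation 15 of the paper; the function and argument names are illustrative, not the library's:

```python
import torch

def learned_range_variance(v, beta_t, beta_tilde_t):
    # Eq. 15: interpolate between log(beta_t) and log(beta~_t) in log
    # space, then exponentiate: the logs and exponent that were missing.
    frac = (v + 1) / 2  # model output v in [-1, 1] mapped to [0, 1]
    return torch.exp(frac * torch.log(beta_t) + (1 - frac) * torch.log(beta_tilde_t))
```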
- 31 Jan, 2023 1 commit
Dudu Moshe authored
scheduling_ddpm: fix evaluation with a lower timestep count than used in training

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 27 Jan, 2023 2 commits
Patrick von Platen authored
Patrick von Platen authored
- 25 Jan, 2023 1 commit
Patrick von Platen authored
* [Bump version] 0.13
* Bump the model up
- 19 Jan, 2023 1 commit
Joqsan authored
Fix typos and minor redundancies

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 17 Jan, 2023 1 commit
Kashif Rasul authored
* Add the DiT model, conversion script, and initial pipeline; use DDIMScheduler
* Use the Timesteps class for the projection; flip_sin_to_cos and better variable names
* Add copyright license headers, example usage, docs, and a ToC entry
* Set the proper device for drop_ids; fix the code path when guidance is off
* Use the FeedForward class instead of another MLP; remove to_2tuple and block_kwargs; get rid of more magic numbers
* Merge DiTBlock into BasicTransformerBlock: add the missing final_dropout and args, use the block's norm, fix the call to class_embedder, and multiply by attn_output
* Use Transformer2DModel and self.is_input_patches; fix the conversion script and the pipeline accordingly, and remove dit.py
* Use randn_tensor and fix fp16 inference; timesteps_emb already has the right dtype
* Fix norm2 usage in vq-diffusion; add author names and the ImageNet labels link to the pipeline
* Rename dit to transformer; use norm_type as a string with norm_type = "layer" by default; revert the AdaLayerNorm and norm2 API changes
* Make sure all components are in eval mode; do not skip common tests
* Finish deprecation, add slow tests, refactor, and improve the docs

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
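A sketch of the resulting pipeline, using the released checkpoint as an example:

```python
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained(
    "facebook/DiT-XL-2-256", torch_dtype=torch.float16
).to("cuda")

# DiT is class-conditional on ImageNet; look class ids up by label name.
class_ids = pipe.get_label_ids(["white shark", "golden retriever"])
images = pipe(class_labels=class_ids, num_inference_steps=25).images
```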
- 16 Jan, 2023 2 commits
Will Berman authored
Re: https://github.com/huggingface/diffusers/issues/1857. We relax some of the checks to deal with unCLIP reproducibility issues, mainly by checking the average pixel difference (measured within 0-255) instead of the max pixel difference (measured within 0-1).
- [x] Add the mixin to UnCLIPPipelineFastTests
- [x] Add the mixin to UnCLIPImageVariationPipelineFastTests
- [x] Move the UnCLIPPipeline flags in the mixin to the base class
- [x] Small MPS fixes for F.pad and F.interpolate
- [x] Make the test unCLIP model's dimensions smaller to run the tests faster
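A sketch of the relaxed check; the helper name and threshold are illustrative, not the test suite's:

```python
import numpy as np

def avg_pixel_diff(image, expected):
    # Inputs are float images in [0, 1]; report the mean absolute
    # difference on the 0-255 scale, rather than the max on 0-1.
    return np.abs(np.asarray(image) - np.asarray(expected)).mean() * 255

# assert avg_pixel_diff(image, expected_image) < 1.5
```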
Patrick von Platen authored