1. 06 Dec, 2023 3 commits
    • Harmonize HF environment variables + deprecate use_auth_token (#6066) · 75ada250
      Lucain authored
      * Harmonize HF environment variables + deprecate use_auth_token
      
      * fix import
      
      * fix
      75ada250
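      A minimal sketch of the call-site change this deprecation implies (repo id and token below are placeholders; with the harmonized environment variables, setting HF_TOKEN works instead of passing a token at all):

      ```python
      from diffusers import DiffusionPipeline

      # Before (now deprecated): authenticating with `use_auth_token`
      # pipe = DiffusionPipeline.from_pretrained("org/private-model", use_auth_token="hf_xxx")

      # After: the harmonized `token` argument, or simply export HF_TOKEN in the environment
      pipe = DiffusionPipeline.from_pretrained(
          "org/private-model",  # placeholder repo id
          token="hf_xxx",       # placeholder token; omit if HF_TOKEN is set
      )
      ```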
    • [Euler Discrete] Fix sigma (#6078) · 2243a594
      Patrick von Platen authored
      * [Euler Discrete] Fix sigma
      
      * make style
      2243a594
    • [feat] allow SDXL pipeline to run with fused QKV projections (#6030) · a2bc2e14
      Sayak Paul authored
      
      
      * debug
      
      * from step
      
      * print
      
      * turn sigma a list
      
      * make str
      
      * init_noise_sigma
      
      * comment
      
      * remove prints
      
      * feat: introduce fused projections
      
      * change to a better name
      
      * no grad
      
      * device.
      
      * device
      
      * dtype
      
      * okay
      
      * print
      
      * more print
      
      * fix: unbind -> split
      
* fix: qkv -> k
      
      * enable disable
      
      * apply attention processor within the method
      
      * attn processors
      
      * _enable_fused_qkv_projections
      
      * remove print
      
      * add fused projection to vae
      
      * add todos.
      
      * add: documentation and cleanups.
      
      * add: test for qkv projection fusion.
      
      * relax assertions.
      
      * relax further
      
      * fix: docs
      
      * fix-copies
      
      * correct error message.
      
      * Empty-Commit
      
      * better conditioning on disable_fused_qkv_projections
      
      * check
      
      * check processor
      
      * bfloat16 computation.
      
      * check latent dtype
      
      * style
      
      * remove copy temporarily
      
      * cast latent to bfloat16
      
      * fix: vae -> self.vae
      
      * remove print.
      
      * add _change_to_group_norm_32
      
      * comment out stuff that didn't work
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * reflect patrick's suggestions.
      
      * fix imports
      
      * fix: disable call.
      
      * fix more
      
      * fix device and dtype
      
      * fix conditions.
      
      * fix more
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      a2bc2e14
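      The fused-projection toggle added above is exposed on the pipeline itself; a hedged sketch of typical usage (the checkpoint id is just an example, and the VAE fusion mentioned in the commits is handled by the same call):

      ```python
      import torch
      from diffusers import StableDiffusionXLPipeline

      pipe = StableDiffusionXLPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
      ).to("cuda")

      # Merge the separate Q/K/V projections into a single matmul per attention block
      pipe.fuse_qkv_projections()
      image = pipe("an astronaut riding a horse", num_inference_steps=30).images[0]

      # Switch back to the original, unfused projections when done
      pipe.unfuse_qkv_projections()
      ```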
  2. 01 Dec, 2023 1 commit
  3. 29 Nov, 2023 1 commit
    • Add SVD (#5895) · 63f767ef
      Suraj Patil authored
      
      
      * begin model
      
      * finish blocks
      
      * add_embedding
      
      * addition_time_embed_dim
      
      * use TimestepEmbedding
      
      * fix temporal res block
      
      * fix time_pos_embed
      
      * fix add_embedding
      
      * add conversion script
      
      * fix model
      
      * up
      
      * add new resnet blocks
      
      * make forward work
      
      * return sample in original shape
      
      * fix temb shape in TemporalResnetBlock
      
      * add spatio temporal transformers
      
      * add vae blocks
      
      * fix blocks
      
      * update
      
      * update
      
* fix shapes in Alphablender and add time activation in res block
      
      * use new blocks
      
      * style
      
      * fix temb shape
      
      * fix SpatioTemporalResBlock
      
      * reuse TemporalBasicTransformerBlock
      
      * fix TemporalBasicTransformerBlock
      
      * use TransformerSpatioTemporalModel
      
      * fix TransformerSpatioTemporalModel
      
      * fix time_context dim
      
      * clean up
      
      * make temb optional
      
      * add blocks
      
      * rename model
      
      * update conversion script
      
      * remove UNetMidBlockSpatioTemporal
      
      * add in init
      
      * remove unused arg
      
      * remove unused arg
      
* remove more unused args
      
      * up
      
      * up
      
      * check for None
      
      * update vae
      
      * update up/mid blocks for decoder
      
      * begin pipeline
      
      * adapt scheduler
      
      * add guidance scalings
      
      * fix norm eps in temporal transformers
      
      * add temporal autoencoder
      
      * make pipeline run
      
* fix frame decoding
      
      * decode in float32
      
      * decode n frames at a time
      
      * pass decoding_t to decode_latents
      
      * fix decode_latents
      
      * vae encode/decode in fp32
      
      * fix dtype in TransformerSpatioTemporalModel
      
      * type image_latents same as image_embeddings
      
* allow using different eps in temporal block for video decoder
      
      * fix default values in vae
      
      * pass num frames in decode
      
      * switch spatial to temporal for mixing in VAE
      
      * fix num frames during split decoding
      
      * cast alpha to sample dtype
      
      * fix attention in MidBlockTemporalDecoder
      
      * fix typo
      
      * fix guidance_scales dtype
      
      * fix missing activation in TemporalDecoder
      
      * skip_post_quant_conv
      
      * add vae conversion
      
      * style
      
      * take guidance scale as input
      
      * up
      
      * allow passing PIL to export_video
      
      * accept fps as arg
      
      * add pipeline and vae in init
      
      * remove hack
      
      * use AutoencoderKLTemporalDecoder
      
      * don't scale image latents
      
      * add unet tests
      
      * clean up unet
      
      * clean TransformerSpatioTemporalModel
      
      * add slow svd test
      
      * clean up
      
      * make temb optional in Decoder mid block
      
      * fix norm eps in TransformerSpatioTemporalModel
      
      * clean up temp decoder
      
      * clean up
      
      * clean up
      
      * use c_noise values for timesteps
      
      * use math for log
      
      * update
      
      * fix copies
      
      * doc
      
      * upcast vae
      
      * update forward pass for gradient checkpointing
      
* make added_time_ids a tensor
      
      * up
      
      * fix upcasting
      
      * remove post quant conv
      
      * add _resize_with_antialiasing
      
      * fix _compute_padding
      
      * cleanup model
      
      * more cleanup
      
      * more cleanup
      
      * more cleanup
      
      * remove freeu
      
      * remove attn slice
      
      * small clean
      
      * up
      
      * up
      
      * remove extra step kwargs
      
      * remove eta
      
      * remove dropout
      
      * remove callback
      
      * remove merge factor args
      
      * clean
      
      * clean up
      
      * move to dedicated folder
      
      * remove attention_head_dim
      
      * docstr and small fix
      
      * update unet doc strings
      
      * rename decoding_t
      
      * correct linting
      
      * store c_skip and c_out
      
      * cleanup
      
      * clean TemporalResnetBlock
      
      * more cleanup
      
      * clean up vae
      
      * clean up
      
      * begin doc
      
      * more cleanup
      
      * up
      
      * up
      
      * doc
      
      * Improve
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * Apply suggestions from code review
      
      * Default chunk size to None
      
      * add example
      
      * Better
      
      * Apply suggestions from code review
      
      * update doc
      
      * Update src/diffusers/pipelines/stable_diffusion_video/pipeline_stable_diffusion_video.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * style
      
      * Get torch compile working
      
      * up
      
      * rename
      
      * fix doc
      
      * add chunking
      
      * torch compile
      
      * torch compile
      
      * add modelling outputs
      
      * torch compile
      
      * Improve chunking
      
      * Apply suggestions from code review
      
      * Update docs/source/en/using-diffusers/svd.md
      
      * Close diff tag
      
      * remove slicing
      
      * resnet docstr
      
      * add docstr in resnet
      
      * rename
      
      * Apply suggestions from code review
      
      * update tests
      
      * Fix output type latents
      
      * fix more
      
      * fix more
      
      * Update docs/source/en/using-diffusers/svd.md
      
      * fix more
      
      * add pipeline tests
      
      * remove unused arg
      
* clean up
      
      * make sure get_scaling receives tensors
      
      * fix euler scheduler
      
      * fix get_scalings
      
* simplify euler for now
      
      * remove old test file
      
      * use randn_tensor to create noise
      
      * fix device for rand tensor
      
      * increase expected_max_difference
      
      * fix test_inference_batch_single_identical
      
      * actually fix test_inference_batch_single_identical
      
      * disable test_save_load_float16
      
      * skip test_float16_inference
      
      * skip test_inference_batch_single_identical
      
      * fix test_xformers_attention_forwardGenerator_pass
      
      * Apply suggestions from code review
      
      * update StableVideoDiffusionPipelineSlowTests
      
      * update image
      
      * add diffusers example
      
      * fix more
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      63f767ef
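      A hedged sketch of how the resulting StableVideoDiffusionPipeline is typically driven (the conditioning-image URL is a placeholder; `decode_chunk_size` is the renamed chunked-decoding control discussed above):

      ```python
      import torch
      from diffusers import StableVideoDiffusionPipeline
      from diffusers.utils import load_image, export_to_video

      pipe = StableVideoDiffusionPipeline.from_pretrained(
          "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
      ).to("cuda")

      # SVD is image-to-video: it is conditioned on a single input frame.
      image = load_image("https://example.com/conditioning_frame.png")  # placeholder URL
      generator = torch.manual_seed(42)

      # `decode_chunk_size` bounds how many frames the temporal VAE decodes at once,
      # trading memory for speed during the split decoding described above.
      frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]
      export_to_video(frames, "generated.mp4", fps=7)
      ```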
  4. 27 Nov, 2023 2 commits
    • Add Custom Timesteps Support to LCMScheduler and Supported Pipelines (#5874) · 67d07074
      dg845 authored
      * Add custom timesteps support to LCMScheduler.
      
      * Add custom timesteps support to StableDiffusionPipeline.
      
      * Add custom timesteps support to StableDiffusionXLPipeline.
      
      * Add custom timesteps support to remaining Stable Diffusion pipelines which support LCMScheduler (img2img, inpaint).
      
      * Add custom timesteps support to remaining Stable Diffusion XL pipelines which support LCMScheduler (img2img, inpaint).
      
      * Add custom timesteps support to StableDiffusionControlNetPipeline.
      
* Add custom timesteps support to T2I Stable Diffusion (XL) Adapters.
      
      * Clean up Stable Diffusion inpaint tests.
      
      * Manually add support for custom timesteps to AltDiffusion pipelines since make fix-copies doesn't appear to work correctly (it deletes the whole pipeline).
      
      * make style
      
      * Refactor pipeline timestep handling into the retrieve_timesteps function.
      67d07074
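      Functionally, this lets a pipeline running LCMScheduler take an explicit timestep schedule instead of `num_inference_steps`; a hedged sketch (the checkpoint and the 4-step schedule are illustrative):

      ```python
      import torch
      from diffusers import StableDiffusionPipeline, LCMScheduler

      pipe = StableDiffusionPipeline.from_pretrained(
          "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
      ).to("cuda")
      pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

      # Custom timesteps are routed through the retrieve_timesteps helper mentioned above.
      image = pipe(
          "a watercolor painting of a fox",
          timesteps=[999, 759, 499, 259],  # illustrative decreasing schedule
          guidance_scale=8.0,
      ).images[0]
      ```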
    • Deprecate KarrasVeScheduler and ScoreSdeVpScheduler (#5269) · 9c357bda
      Aryan V S authored
      
      
      * deprecated: KarrasVeScheduler, ScoreSdeVpScheduler
      
      * delete tests relevant to deprecated schedulers
      
      * chore: run make style
      
* fix: import error caused by incorrect _import_structure after deprecation
      
      * fix: ScoreSdeVpScheduler was not importable from diffusers
      
      * remove import added by assumption
      
      * Update src/diffusers/schedulers/__init__.py as suggested by @patrickvonplaten
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
* make it part of deprecated
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Fix
      
      * fix
      
      * fix doc
      
      * fix doc....again.......
      
      * remove karras_ve test folder
      Co-Authored-By: YiYi Xu <yixu310@gmail.com>
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: yiyixuxu <yixu310@gmail,com>
      9c357bda
  5. 20 Nov, 2023 2 commits
  6. 14 Nov, 2023 1 commit
  7. 09 Nov, 2023 1 commit
  8. 07 Nov, 2023 1 commit
    • Improve LCMScheduler (#5681) · aab6de22
      dg845 authored
      
      
      * Refactor LCMScheduler.step such that prev_sample == denoised at the last timestep in the schedule.
      
      * Make timestep scaling when calculating boundary conditions configurable.
      
      * Reparameterize timestep_scaling to be a multiplicative rather than division scaling.
      
      * make style
      
      * fix dtype conversion
      
      * make style
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      aab6de22
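      A small sketch of the configurable scaling this adds (the checkpoint and subfolder are illustrative):

      ```python
      from diffusers import LCMScheduler

      # `timestep_scaling` is the multiplicative factor applied to the timestep before
      # computing the c_skip / c_out boundary conditions (the reparameterization above).
      scheduler = LCMScheduler.from_pretrained(
          "SimianLuo/LCM_Dreamshaper_v7", subfolder="scheduler", timestep_scaling=10.0
      )
      ```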
  9. 02 Nov, 2023 1 commit
  10. 31 Oct, 2023 1 commit
  11. 30 Oct, 2023 1 commit
  12. 24 Oct, 2023 1 commit
    • Add Latent Consistency Models Pipeline (#5448) · 958e17da
      dg845 authored
      
      
      * initial commit for LatentConsistencyModelPipeline and LCMScheduler based on the community pipeline
      
      * Add callback and freeu support.
      
      * apply suggestions from review
      
      * Clean up LCMScheduler
      
      * Remove timeindex argument to LCMScheduler.step.
      
      * Add support for clipping or thresholding the predicted original sample.
      
      * Remove unused methods and arguments in LCMScheduler.
      
      * Improve comment about (lack of) negative prompt support.
      
      * Change input guidance_scale to match the StableDiffusionPipeline (Imagen) CFG formulation.
      
      * Move lcm_origin_steps from pipeline __call__ to LCMScheduler.__init__/config (as origin_steps).
      
      * Fix typo when clipping/thresholding in LCMScheduler.
      
      * Add some initial LCMScheduler tests.
      
      * add type annotations from review
      
      * Fix type annotation bug.
      
      * Override test_add_noise_device in LCMSchedulerTest since hardcoded timesteps doesn't work under default settings.
      
      * Add generator argument pipeline prepare_latents call.
      
      * Cast LCMScheduler.timesteps to long in set_timesteps.
      
      * Add onestep and multistep full loop scheduler tests.
      
      * Set default height/width to None and don't hardcode guidance scale embedding dim.
      
      * Add initial LatentConsistencyPipeline fast and slow tests.
      
      * Add initial documentation for LatentConsistencyModelPipeline and LCMScheduler.
      
      * Make remaining failing fast tests pass.
      
      * make style
      
      * Make original_inference_steps configurable from pipeline __call__ again.
      
      * make style
      
      * Remove guidance_rescale arg from pipeline __call__ since LCM currently doesn't support CFG.
      
      * Make LCMScheduler defaults match config of LCM_Dreamshaper_v7 checkpoint.
      
      * Fix LatentConsistencyPipeline slow tests and add dummy expected slices.
      
      * Add checks for original_steps in LCMScheduler.set_timesteps.
      
      * make fix-copies
      
      * Improve LatentConsistencyModelPipeline docs.
      
      * Apply suggestions from code review
      Co-authored-by: Aryan V S <avs050602@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Aryan V S <avs050602@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Aryan V S <avs050602@gmail.com>
      
      * Update src/diffusers/schedulers/scheduling_lcm.py
      
      * Apply suggestions from code review
      Co-authored-by: Aryan V S <avs050602@gmail.com>
      
      * finish
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Aryan V S <avs050602@gmail.com>
      958e17da
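      A hedged usage sketch of the new pipeline (the checkpoint id is the LCM_Dreamshaper_v7 reference mentioned above; values are illustrative):

      ```python
      import torch
      from diffusers import LatentConsistencyModelPipeline

      pipe = LatentConsistencyModelPipeline.from_pretrained(
          "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
      ).to("cuda")

      # LCM needs only a handful of steps; guidance is applied through an embedded
      # guidance scale rather than a negative prompt, per the notes above.
      image = pipe(
          "a portrait of a robot reading a book",
          num_inference_steps=4,
          guidance_scale=8.0,
          original_inference_steps=50,
      ).images[0]
      ```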
  13. 16 Oct, 2023 1 commit
  14. 09 Oct, 2023 1 commit
  15. 03 Oct, 2023 1 commit
  16. 02 Oct, 2023 2 commits
  17. 29 Sep, 2023 2 commits
  18. 26 Sep, 2023 1 commit
  19. 25 Sep, 2023 3 commits
  20. 23 Sep, 2023 1 commit
  21. 22 Sep, 2023 1 commit
    • SDXL flax (#4254) · 3651b14c
      Pedro Cuenca authored
      
      
      * support transformer_layers_per block in flax UNet
      
      * add support for text_time additional embeddings to Flax UNet
      
      * rename attention layers for VAE
      
      * add shape asserts when renaming attention layers
      
      * transpose VAE attention layers
      
      * add pipeline flax SDXL code [WIP]
      
      * continue add pipeline flax SDXL code [WIP]
      
      * cleanup
      
      * Working on JIT support
      
      Fixed prompt embedding shapes so they work in parallel mode. Assuming we
      always have both text encoders for now, for simplicity.
      
      * Fixing embeddings (untested)
      
      * Remove spurious line
      
      * Shard guidance_scale when jitting.
      
      * Decode images
      
      * Fix sharding
      
      * style
      
      * Refiner UNet can be loaded.
      
      * Refiner / img2img pipeline
      
      * Allow latent outputs from base and latent inputs in refiner
      
      This makes it possible to chain base + refiner without having to use the
      vae decoder in the base model or the vae encoder in the refiner, skipping
      conversions to/from PIL and avoiding TPU <-> CPU memory copies.
      
      * Adapt to FlaxCLIPTextModelOutput
      
      * Update Flax XL pipeline to FlaxCLIPTextModelOutput
      
      * make fix-copies
      
      * make style
      
      * add euler scheduler
      
      * Fix import
      
      * Fix copies, comment unused code.
      
      * Fix SDXL Flax imports
      
      * Fix euler discrete begin
      
      * improve init import
      
      * finish
      
      * put discrete euler in init
      
      * fix flax euler
      
      * Fix more
      
      * make style
      
      * correct init
      
      * correct init
      
      * Temporarily remove FlaxStableDiffusionXLImg2ImgPipeline
      
      * correct pipelines
      
      * finish
      
      ---------
      Co-authored-by: Martin Müller <martin.muller.me@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      3651b14c
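      The latent hand-off between base and refiner described above follows the same pattern as the better-documented PyTorch SDXL pipelines; as a reference, here is a sketch of that PyTorch analogue, not the Flax API added in this PR:

      ```python
      import torch
      from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

      base = StableDiffusionXLPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
      ).to("cuda")
      refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-refiner-1.0",
          text_encoder_2=base.text_encoder_2,
          vae=base.vae,
          torch_dtype=torch.float16,
      ).to("cuda")

      prompt = "a majestic lion jumping from a big stone at night"
      # Keep the base output in latent space so the hand-off skips the base VAE decode,
      # the refiner VAE encode, and any PIL round-trips.
      latents = base(prompt, output_type="latent").images
      image = refiner(prompt, image=latents).images[0]
      ```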
  22. 21 Sep, 2023 1 commit
  23. 19 Sep, 2023 1 commit
  24. 14 Sep, 2023 1 commit
  25. 12 Sep, 2023 1 commit
  26. 11 Sep, 2023 1 commit
    • Lazy Import for Diffusers (#4829) · b6e0b016
      Dhruv Nair authored
      
      
      * initial commit
      
      * move modules to import struct
      
      * add dummy objects and _LazyModule
      
      * add lazy import to schedulers
      
      * clean up unused imports
      
      * lazy import on models module
      
      * lazy import for schedulers module
      
      * add lazy import to pipelines module
      
      * lazy import altdiffusion
      
      * lazy import audio diffusion
      
      * lazy import audioldm
      
      * lazy import consistency model
      
      * lazy import controlnet
      
      * lazy import dance diffusion ddim ddpm
      
      * lazy import deepfloyd
      
* lazy import kandinsky
      
      * lazy imports
      
      * lazy import semantic diffusion
      
      * lazy imports
      
      * lazy import stable diffusion
      
      * move sd output to its own module
      
      * clean up
      
      * lazy import t2iadapter
      
      * lazy import unclip
      
* lazy import versatile and vq diffusion
      
      * lazy import vq diffusion
      
      * helper to fetch objects from modules
      
      * lazy import sdxl
      
      * lazy import txt2vid
      
      * lazy import stochastic karras
      
      * fix model imports
      
      * fix bug
      
      * lazy import
      
      * clean up
      
      * clean up
      
      * fixes for tests
      
      * fixes for tests
      
      * clean up
      
      * remove import of torch_utils from utils module
      
      * clean up
      
      * clean up
      
      * fix mistake import statement
      
      * dedicated modules for exporting and loading
      
      * remove testing utils from utils module
      
      * fixes from  merge conflicts
      
      * Update src/diffusers/pipelines/kandinsky2_2/__init__.py
      
      * fix docs
      
      * fix alt diffusion copied from
      
      * fix check dummies
      
      * fix more docs
      
      * remove accelerate import from utils module
      
      * add type checking
      
      * make style
      
      * fix check dummies
      
      * remove torch import from xformers check
      
      * clean up error message
      
      * fixes after upstream merges
      
      * dummy objects fix
      
      * fix tests
      
      * remove unused module import
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      b6e0b016
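      A hedged sketch of the `_LazyModule` pattern this PR rolls out across the package `__init__.py` files (submodule and class names below are illustrative, not the exact diffusers layout):

      ```python
      # __init__.py of a hypothetical sub-package
      import sys
      from typing import TYPE_CHECKING
      from diffusers.utils import _LazyModule

      # Map submodule name -> public objects it provides.
      _import_structure = {"pipeline_example": ["ExamplePipeline"]}

      if TYPE_CHECKING:
          # Type checkers and IDEs resolve the real imports eagerly...
          from .pipeline_example import ExamplePipeline
      else:
          # ...while at runtime the submodule is only imported when one of its
          # attributes is first accessed, keeping `import diffusers` cheap.
          sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure)
      ```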
  27. 06 Sep, 2023 1 commit
    • Würstchen model (#3849) · 541bb6ee
      Kashif Rasul authored
      
      
      * initial
      
      * initial
      
      * added initial convert script for paella vqmodel
      
      * initial wuerstchen pipeline
      
      * add LayerNorm2d
      
      * added modules
      
      * fix typo
      
      * use model_v2
      
* embed clip caption and negative_caption
      
      * fixed name of var
      
      * initial modules in one place
      
      * WuerstchenPriorPipeline
      
* initial shape
      
      * initial denoising prior loop
      
      * fix output
      
      * add WuerstchenPriorPipeline to __init__.py
      
      * use the noise ratio in the Prior
      
      * try to save pipeline
      
      * save_pretrained working
      
      * Few additions
      
      * add _execution_device
      
      * shape is int
      
      * fix batch size
      
      * fix shape of ratio
      
      * fix shape of ratio
      
      * fix output dataclass
      
      * tests folder
      
      * fix formatting
      
      * fix float16 + started with generator
      
      * Update pipeline_wuerstchen.py
      
      * removed vqgan code
      
      * add WuerstchenGeneratorPipeline
      
      * fix WuerstchenGeneratorPipeline
      
      * fix docstrings
      
      * fix imports
      
      * convert generator pipeline
      
      * fix convert
      
      * Work on Generator Pipeline. WIP
      
      * Pipeline works with our diffuzz code
      
      * apply scale factor
      
      * removed vqgan.py
      
      * use cosine schedule
      
      * redo the denoising loop
      
      * Update src/diffusers/models/resnet.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * use torch.lerp
      
      * use warp-diffusion org
      
      * clip_sample=False,
      
      * some refactoring
      
      * use model_v3_stage_c
      
      * c_cond size
      
      * use clip-bigG
      
      * allow stage b clip to be None
      
      * add dummy
      
      * würstchen scheduler
      
      * minor changes
      
      * set clip=None in the pipeline
      
      * fix attention mask
      
      * add attention_masks to text_encoder
      
      * make fix-copies
      
      * add back clip
      
      * add text_encoder
      
      * gen_text_encoder and tokenizer
      
      * fix import
      
      * updated pipeline test
      
      * undo changes to pipeline test
      
      * nip
      
      * fix typo
      
      * fix output name
      
      * set guidance_scale=0 and remove diffuze
      
      * fix doc strings
      
      * make style
      
      * nip
      
      * removed unused
      
      * initial docs
      
      * rename
      
      * toc
      
      * cleanup
      
* remove test script
      
      * fix-copies
      
      * fix multi images
      
      * remove dup
      
      * remove unused modules
      
      * undo changes for debugging
      
* no new line
      
      * remove dup conversion script
      
      * fix doc string
      
      * cleanup
      
      * pass default args
      
      * dup permute
      
      * fix some tests
      
      * fix prepare_latents
      
      * move Prior class to modules
      
      * offload only the text encoder and vqgan
      
      * fix resolution calculation for prior
      
      * nip
      
      * removed testing script
      
      * fix shape
      
      * fix argument to set_timesteps
      
      * do not change .gitignore
      
      * fix resolution calculations + readme
      
      * resolution calculation fix + readme
      
      * small fixes
      
      * Add combined pipeline
      
      * rename generator -> decoder
      
      * Update .gitignore
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * removed efficient_net
      
      * create combined WuerstchenPipeline
      
      * make arguments consistent with VQ model
      
      * fix var names
      
      * no need to return text_encoder_hidden_states
      
      * add latent_dim_scale to config
      
      * split model into its own file
      
      * add WuerschenPipeline to docs
      
      * remove unused latent_size
      
      * register latent_dim_scale
      
      * update script
      
      * update docstring
      
      * use Attention preprocessor
      
      * concat with normed input
      
      * fix-copies
      
      * add docs
      
      * fix test
      
      * fix style
      
      * add to cpu_offloaded_model
      
      * updated type
      
      * remove 1-line func
      
      * updated type
      
      * initial decoder test
      
      * formatting
      
      * formatting
      
      * fix autodoc link
      
      * num_inference_steps is int
      
      * remove comments
      
      * fix example in docs
      
      * Update src/diffusers/pipelines/wuerstchen/diffnext.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * rename layernorm to WuerstchenLayerNorm
      
      * rename DiffNext to WuerstchenDiffNeXt
      
      * added comment about MixingResidualBlock
      
      * move paella vq-vae to pipelines' folder
      
      * initial decoder test
      
      * increased test_float16_inference expected diff
      
      * self_attn is always true
      
      * more passing decoder tests
      
      * batch image_embeds
      
      * fix failing tests
      
      * set the correct dtype
      
      * relax inference test
      
      * update prior
      
      * added combined pipeline test
      
      * faster test
      
      * faster test
      
      * Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix issues from review
      
      * update wuerstchen.md + change generator name
      
      * resolve issues
      
      * fix copied from usage and add back batch_size
      
      * fix API
      
      * fix arguments
      
      * fix combined test
      
      * Added timesteps argument + fixes
      
      * Update tests/pipelines/test_pipelines_common.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update tests/pipelines/wuerstchen/test_wuerstchen_prior.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py
      
      * up
      
      * Fix more
      
      * failing tests
      
      * up
      
      * up
      
      * correct naming
      
      * correct docs
      
      * correct docs
      
      * fix test params
      
      * correct docs
      
      * fix classifier free guidance
      
      * fix classifier free guidance
      
      * fix more
      
      * fix all
      
      * make tests faster
      
      ---------
      Co-authored-by: Dominic Rampas <d6582533@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Dominic Rampas <61938694+dome272@users.noreply.github.com>
      541bb6ee
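      A hedged end-to-end sketch using the combined pipeline added above (prompt and values are illustrative):

      ```python
      import torch
      from diffusers import AutoPipelineForText2Image

      # The combined pipeline chains the Stage C prior with the Stage B decoder
      # (the component renamed from "generator" to "decoder" in the commits above).
      pipe = AutoPipelineForText2Image.from_pretrained(
          "warp-ai/wuerstchen", torch_dtype=torch.float16
      ).to("cuda")

      image = pipe(
          "an astronaut browsing a Bavarian pretzel stand",
          prior_guidance_scale=4.0,  # guidance for the Stage C prior
          height=1024,
          width=1024,
      ).images[0]
      ```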
  28. 23 Aug, 2023 1 commit
  29. 16 Aug, 2023 1 commit
  30. 15 Aug, 2023 1 commit
  31. 09 Aug, 2023 1 commit
    • [docs] Clean scheduler api (#4204) · 16ad13b6
      Steven Liu authored
      * clean scheduler mixin
      
      * up to dpmsolvermultistep
      
      * finish cleaning
      
      * first draft
      
      * fix overview table
      
      * apply feedback
      
      * update reference code
      16ad13b6
  32. 18 Jul, 2023 1 commit
    • Add Recent Timestep Scheduling Improvements to DDIM Inverse Scheduler (#3865) · c6e56e92
      clarencechen authored
      * Add Recent Timestep Scheduling Improvements to DDIM Inverse Scheduler
      
      Roll timesteps by one to reflect origin-destination semantic discrepancy
      
      Restore `set_alpha_to_one` option to handle negative initial timesteps
      
      Remove `set_alpha_to_zero` option not used due to previous truncation
      
      * Bugfix
      
      * Remove unnecessary calls to `detach()`
      
      Use `self.image_processor.preprocess` in DiffEdit pipeline functions
      
      * Preprocess list input for inverted image latents in diffedit pipeline
      
      * Add `timestep_spacing` and `steps_offset` to `DPMSolverMultistepInverseScheduler`
      
      * Update expected test results to account for inverting last forward diffusion step
      
      * Fix inversion progress bar bug
      
      * Add first draft for proper fast tests for DDIMInverseScheduler
      
* Add deprecated DDIMInverseScheduler kwarg to ConfigMixin registry
      
      * Fix test failure in DPMMultistepInverseScheduler
      
      Inverted step specification leads to negative noise variance in SDE-based algorithms
      
      Add first draft for proper fast tests for DPMMultistepInverseScheduler
      
      * Update expected test results to account for inverting last forward diffusion step
      
      Clean up diffedit fast test
      c6e56e92
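      A hedged sketch of the inversion workflow these scheduler fixes feed into, following the DiffEdit pipeline's pairing of DDIMScheduler with DDIMInverseScheduler (image URL and prompts are placeholders):

      ```python
      import torch
      from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline
      from diffusers.utils import load_image

      pipe = StableDiffusionDiffEditPipeline.from_pretrained(
          "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
      ).to("cuda")
      pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
      pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

      init_image = load_image("https://example.com/fruit_bowl.png")  # placeholder URL

      # Invert the source image to noise, build an edit mask, then denoise toward the target prompt.
      mask = pipe.generate_mask(image=init_image, source_prompt="a bowl of fruits", target_prompt="a bowl of pears")
      inv_latents = pipe.invert(prompt="a bowl of fruits", image=init_image).latents
      image = pipe("a bowl of pears", mask_image=mask, image_latents=inv_latents).images[0]
      ```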