- 06 Dec, 2023 3 commits
Lucain authored
Harmonize the Hugging Face environment variables and deprecate `use_auth_token` (with follow-up import fixes).
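A minimal sketch of the harmonized authentication, assuming the `token` argument and the `HF_TOKEN` environment variable that replace `use_auth_token` (the checkpoint id is only an example):

```python
# Pass `token=` (or set HF_TOKEN) instead of the deprecated `use_auth_token=`.
import os
import torch
from diffusers import DiffusionPipeline

token = os.environ.get("HF_TOKEN")  # preferred over hard-coding credentials
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint; gated repos need a valid token
    torch_dtype=torch.float16,
    token=token,                       # replaces the deprecated use_auth_token=...
)
```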
-
Patrick von Platen authored
[Euler Discrete] Fix the sigma handling (plus `make style`).
-
Sayak Paul authored
Introduce fused QKV projections: fuse the query/key/value projections in the UNet attention layers and the VAE, with methods to enable and disable the fusion, a dedicated attention processor applied inside those methods, documentation, and a test for QKV projection fusion. Also includes dtype/device fixes found while debugging (e.g. `unbind` -> `split`, casting latents to bfloat16) and suggestions from code review. Co-authored-by: Patrick von Platen.
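A short usage sketch, assuming the public `fuse_qkv_projections` / `unfuse_qkv_projections` methods documented in later releases (the checkpoint is only an example):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.fuse_qkv_projections()    # fuse Q/K/V projections in the UNet and VAE attention layers
image = pipe("an astronaut riding a horse on the moon").images[0]
pipe.unfuse_qkv_projections()  # restore the original, unfused projections
```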
-
- 01 Dec, 2023 1 commit
YiYi Xu authored
Fix the DPM scheduler and apply the same fix to all schedulers.
-
- 29 Nov, 2023 1 commit
Suraj Patil authored
Add Stable Video Diffusion: a new spatio-temporal UNet built from temporal ResNet blocks, `TemporalBasicTransformerBlock`, and `TransformerSpatioTemporalModel`; a temporal VAE decoder (`AutoencoderKLTemporalDecoder`) that decodes frames a few at a time; the `StableVideoDiffusionPipeline` with guidance scaling, fp32 VAE encode/decode, and torch.compile support; an adapted Euler scheduler; conversion scripts; `export_to_video` support for PIL frames and an fps argument; documentation; and UNet, pipeline, and slow tests. Co-authored-by: Dhruv Nair, Patrick von Platen, and apolinário.
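A usage sketch for the new pipeline, assuming the `stabilityai/stable-video-diffusion-img2vid-xt` checkpoint and the `decode_chunk_size` argument as later documented (the input image URL is a placeholder):

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

image = load_image("https://example.com/conditioning_frame.png")  # placeholder URL
image = image.resize((1024, 576))

# decode_chunk_size trades memory for speed by decoding a few frames at a time.
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```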
-
- 27 Nov, 2023 2 commits
dg845 authored
* Add custom timesteps support to `LCMScheduler`, `StableDiffusionPipeline`, and `StableDiffusionXLPipeline`, plus the remaining Stable Diffusion and SDXL pipelines that support `LCMScheduler` (img2img, inpaint), `StableDiffusionControlNetPipeline`, and the T2I Stable Diffusion (XL) adapters.
* Clean up the Stable Diffusion inpaint tests.
* Manually add custom timesteps support to the AltDiffusion pipelines, since `make fix-copies` does not handle them correctly (it deletes the whole pipeline).
* Refactor pipeline timestep handling into the `retrieve_timesteps` function (see the sketch below).
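A sketch of the resulting user-facing behaviour, assuming the `timesteps` argument forwarded through `retrieve_timesteps` (the checkpoint and the timestep values are illustrative only):

```python
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# LCMScheduler accepts an explicit `timesteps` list; in practice you would pair
# it with LCM-distilled weights (e.g. an LCM LoRA) rather than plain SD 1.5.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a watercolor painting of a lighthouse",
    timesteps=[999, 759, 499, 259],  # illustrative values, not a tuned schedule
).images[0]
```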
-
Aryan V S authored
Deprecate `KarrasVeScheduler` and `ScoreSdeVpScheduler`: move them into the deprecated part of the scheduler imports, delete their tests (including the karras_ve test folder), and fix the `_import_structure` and import errors introduced by the deprecation, with follow-up doc fixes from code review. Co-authored-by: Patrick von Platen, Sayak Paul, and YiYi Xu.
-
- 20 Nov, 2023 2 commits
dg845 authored
Change `LCMScheduler.set_timesteps` to pick more evenly spaced inference timesteps, adjust the `inference_indices` implementation to better match the previous behavior, and add a `num_inference_steps=26` case to `test_inference_steps`. Co-authored-by: patil-suraj.
-
Kashif Rasul authored
Switch formatting to ruff: doc-builder's black styling is no longer needed since the docs are now styled with ruff; run `make fix-copies` and use `run_ruff`.
-
- 14 Nov, 2023 1 commit
Patrick von Platen authored
-
- 09 Nov, 2023 1 commit
Will Berman authored
Add the consistency decoder, with renames and fixes from code review. Co-authored-by: Patrick von Platen and Sayak Paul.
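A usage sketch, assuming this is the `ConsistencyDecoderVAE` documented in later releases (class and checkpoint id are assumptions based on that documentation):

```python
import torch
from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE

# Swap the default VAE decoder for the consistency decoder.
vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait photo in natural light").images[0]
```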
-
- 07 Nov, 2023 1 commit
dg845 authored
Refactor `LCMScheduler.step` so that `prev_sample == denoised` at the last timestep in the schedule, and make the timestep scaling used for the boundary conditions configurable, reparameterized as a multiplicative rather than a division scaling; includes a dtype-conversion fix. Co-authored-by: Patrick von Platen.
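A small sketch of the new knob, assuming `timestep_scaling` is exposed as an `LCMScheduler` config value (the default of 10.0 is an assumption):

```python
from diffusers import LCMScheduler

# The boundary-condition timestep scaling is now a multiplicative config value.
scheduler = LCMScheduler(timestep_scaling=10.0)
print(scheduler.config.timestep_scaling)
```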
-
- 02 Nov, 2023 1 commit
Patrick von Platen authored
[LCM] Clean up the LCM implementations.
-
- 31 Oct, 2023 1 commit
TimothyAlexisVass authored
-
- 30 Oct, 2023 1 commit
Cheng Lu authored
Stabilize DPM-Solver++ for SDXL by using an Euler step at the final step, and add Lu's uniform log-SNR timesteps, with tests and `check_copies` fixes. Co-authored-by: Patrick von Platen.
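A configuration sketch, assuming the `euler_at_final` and `use_lu_lambdas` options these changes introduce on `DPMSolverMultistepScheduler` (the checkpoint is only an example):

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    euler_at_final=True,   # Euler step at the last step stabilizes SDXL outputs
    use_lu_lambdas=True,   # Lu's uniform log-SNR timestep spacing
)
image = pipe("a macro photo of a dewdrop on a leaf", num_inference_steps=20).images[0]
```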
-
- 24 Oct, 2023 1 commit
dg845 authored
Add `LatentConsistencyModelPipeline` and `LCMScheduler`, based on the community pipeline:
* Clean up the scheduler: remove the `timeindex` argument from `LCMScheduler.step`, add support for clipping or thresholding the predicted original sample, cast timesteps to long in `set_timesteps`, and make the defaults match the `LCM_Dreamshaper_v7` checkpoint config.
* Change `guidance_scale` to match the `StableDiffusionPipeline` (Imagen) CFG formulation, and remove `guidance_rescale` since LCM does not currently support classifier-free guidance.
* Move `lcm_origin_steps` from the pipeline call into the scheduler config, while keeping `original_inference_steps` configurable from the pipeline call.
* Add callback and FreeU support, type annotations, one-step and multistep full-loop scheduler tests, pipeline fast and slow tests, and initial documentation.
Co-authored-by: Aryan V S, Sayak Paul, and Patrick von Platen.
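A usage sketch for the new pipeline, assuming the `SimianLuo/LCM_Dreamshaper_v7` checkpoint referenced above loads as this pipeline class:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")

# LCM needs only a handful of steps; guidance follows the Imagen-style
# formulation via an embedded guidance scale rather than standard CFG.
image = pipe("a cozy cabin in a snowy forest, digital art",
             num_inference_steps=4, guidance_scale=8.0).images[0]
```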
-
- 16 Oct, 2023 1 commit
Kashif Rasul authored
Add Würstchen prior training examples: an initial prior training script with an EfficientNet image encoder, `CLIPTextModel`, prior EMA support, a dataloader producing `prompt_embeds` and `image_embeds`, validation logging, gradient checkpointing, and model-card saving; plus a LoRA prior training script that sets attention processors on the prior and loads through `LoraLoaderMixin`; with README updates and review fixes. Co-authored-by: Patrick von Platen and Pedro Cuenca.
-
- 09 Oct, 2023 1 commit
Jake Vanderplas authored
-
- 03 Oct, 2023 1 commit
Patrick von Platen authored
-
- 02 Oct, 2023 2 commits
Patrick von Platen authored
-
Leng Yue authored
Update the UniPC einsum to support 1D and 3D diffusion, with unit tests, edge-case handling, and a `testing_utils.py` fix. Co-authored-by: Patrick von Platen.
-
- 29 Sep, 2023 2 commits
Patrick von Platen authored
-
Seunghyeon Kim authored
Fix the DDIM inverse scheduler and update its tests, along with the pix2pix_zero and DiffEdit tests; fix a typo. Co-authored-by: Patrick von Platen.
-
- 26 Sep, 2023 1 commit
Pedro Cuenca authored
Add `timestep_spacing` support to `FlaxDPMSolverMultistepScheduler` (plus style fixes).
-
- 25 Sep, 2023 3 commits
Patrick von Platen authored
-
Anh71me authored
Fix the type annotations on `Scheduler.from_pretrained` and on `PIL.Image`.
-
Patrick von Platen authored
[Doc builder] Ensure slow (non-lazy) imports when building the documentation by setting an environment variable for the doc builder, with several rounds of doc fixes and review suggestions.
-
- 23 Sep, 2023 1 commit
YiYi Xu authored
Remove the to-device call for sigmas and update `add_noise` to use sigmas. Co-authored-by: yiyixuxu.
-
- 22 Sep, 2023 1 commit
Pedro Cuenca authored
Add Flax Stable Diffusion XL support: `transformer_layers_per_block` and text_time additional embeddings in the Flax UNet, renamed and transposed VAE attention layers (with shape asserts), a Flax SDXL text-to-image pipeline with JIT and sharding support, a Flax Euler Discrete scheduler, and latent outputs from the base model plus latent inputs in the refiner so base and refiner can be chained without VAE round-trips, PIL conversions, or TPU <-> CPU memory copies; `FlaxStableDiffusionXLImg2ImgPipeline` is temporarily removed. Co-authored-by: Martin Müller, patil-suraj, and Patrick von Platen.
-
- 21 Sep, 2023 1 commit
YiYi Xu authored
Co-authored-by: yiyixuxu.
-
- 19 Sep, 2023 1 commit
YiYi Xu authored
Co-authored-by: yiyixuxu and Patrick von Platen.
-
- 14 Sep, 2023 1 commit
YiYi Xu authored
Co-authored-by: yiyixuxu.
-
- 12 Sep, 2023 1 commit
Patrick von Platen authored
[Utils] Correct the custom init sort, add type checking, and fix the affected tests. Co-authored-by: Dhruv Nair.
-
- 11 Sep, 2023 1 commit
Dhruv Nair authored
Make `diffusers` imports lazy: move the top-level, models, schedulers, and pipelines modules to an `_import_structure` consumed by `_LazyModule`, add the corresponding dummy objects, convert every pipeline subpackage (AltDiffusion, audio diffusion, AudioLDM, consistency models, ControlNet, dance diffusion, DDIM/DDPM, DeepFloyd IF, Kandinsky, semantic diffusion, Stable Diffusion, SDXL, T2I-Adapter, text-to-video, unCLIP, VersatileDiffusion, VQ-Diffusion, stochastic Karras, and the rest) to lazy imports, move pipeline outputs and the torch/testing/export/loading utilities into dedicated modules, add `TYPE_CHECKING` support, and fix docs, `copied from` comments, and the dummy-object checks along the way (see the sketch below). Co-authored-by: Patrick von Platen.
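A minimal sketch of the lazy-import pattern adopted here for a package `__init__.py` (module and class names below are illustrative; the real `_import_structure` dictionaries are much larger):

```python
# Illustrative __init__.py under the lazy-import scheme.
import sys
from typing import TYPE_CHECKING

from diffusers.utils import _LazyModule

_import_structure = {
    "scheduling_ddim": ["DDIMScheduler"],
    "scheduling_euler_discrete": ["EulerDiscreteScheduler"],
}

if TYPE_CHECKING:
    # Real imports only for type checkers and IDEs; nothing is imported at runtime.
    from .scheduling_ddim import DDIMScheduler
    from .scheduling_euler_discrete import EulerDiscreteScheduler
else:
    # Submodules are imported lazily on first attribute access.
    sys.modules[__name__] = _LazyModule(
        __name__, globals()["__file__"], _import_structure, module_spec=__spec__
    )
```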
-
- 06 Sep, 2023 1 commit
Kashif Rasul authored
Add the Würstchen text-to-image pipelines: a conversion script for the Paella VQ model, `WuerstchenPriorPipeline`, a decoder pipeline (initially named the generator pipeline), a combined Würstchen pipeline, the `WuerstchenDiffNeXt` model with `WuerstchenLayerNorm` and `MixingResidualBlock`, a dedicated Würstchen scheduler using a cosine schedule, CPU offloading of the text encoder and VQGAN, documentation, and prior/decoder/combined pipeline tests, with many review fixes (classifier-free guidance, resolution calculation, `latent_dim_scale` registration, faster tests). Co-authored-by: Dominic Rampas and Patrick von Platen.
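A usage sketch for the combined pipeline, assuming the `warp-ai/wuerstchen` Hub checkpoint and the argument names from the later documentation:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "an anthropomorphic cat dressed as a firefighter",
    height=1024, width=1024,
    prior_guidance_scale=4.0,  # guidance for the text-conditioned prior stage
).images[0]
```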
-
- 23 Aug, 2023 1 commit
YiYi Xu authored
Add `self.step_index` to the scheduler. Co-authored-by: yiyixuxu and Patrick von Platen.
-
- 16 Aug, 2023 1 commit
Dirk Morris authored
Fix the UniPC Karras-sigmas exception (fixes huggingface/diffusers#4580) and add UniPC scheduler tests for Karras sigmas.
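A configuration sketch that exercises the fixed code path, assuming the existing `use_karras_sigmas` flag on `UniPCMultistepScheduler` (the checkpoint is only an example):

```python
import torch
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True  # the code path fixed by this commit
)
image = pipe("an isometric render of a tiny island village", num_inference_steps=20).images[0]
```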
-
- 15 Aug, 2023 1 commit
Sayak Paul authored
[Pipeline utils] Implement `push_to_hub` for standalone models and schedulers (including Flax schedulers) as well as pipelines (#4128): better kwargs and token handling, tokenizer-loading and JSON-dumping fixes, tests run as a separate staging job with explicit tokens and concurrency-safe repo ids, and review feedback incorporated. Co-authored-by: Patrick von Platen, Lucain, and ydshieh.
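A usage sketch of the new capability (the target repo ids are placeholders, and you must be authenticated, e.g. via `HF_TOKEN`):

```python
from diffusers import DDIMScheduler, UNet2DConditionModel

# Standalone models can now be pushed directly.
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
unet.push_to_hub("your-username/sd15-unet-copy")  # placeholder repo id

# Schedulers work the same way.
scheduler = DDIMScheduler()
scheduler.push_to_hub("your-username/ddim-scheduler-demo")  # placeholder repo id
```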
-
- 09 Aug, 2023 1 commit
Steven Liu authored
Clean up the scheduler docs: rework the scheduler mixin docs and everything up to DPMSolverMultistep, fix the overview table, apply feedback, and update the reference code.
-
- 18 Jul, 2023 1 commit
clarencechen authored
* Add recent timestep-scheduling improvements to the DDIM inverse scheduler: roll the timesteps by one to reflect the origin-destination semantic discrepancy, restore the `set_alpha_to_one` option to handle negative initial timesteps, and remove the `set_alpha_to_zero` option that is no longer used due to the previous truncation.
* Remove unnecessary `detach()` calls, use `self.image_processor.preprocess` in the DiffEdit pipeline functions, and preprocess list input for inverted image latents in the DiffEdit pipeline.
* Add `timestep_spacing` and `steps_offset` to `DPMSolverMultistepInverseScheduler`, and fix a test failure where the inverted step specification led to negative noise variance in the SDE-based algorithms.
* Fix an inversion progress-bar bug, register the deprecated `DDIMInverseScheduler` kwarg with the `ConfigMixin` registry, add proper fast tests for `DDIMInverseScheduler` and `DPMSolverMultistepInverseScheduler`, update the expected test results to account for inverting the last forward-diffusion step, and clean up the DiffEdit fast test.
-