- 09 Nov, 2023 1 commit
-
-
Will Berman authored
* consistency decoder * rename * Apply suggestions from code review Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py * up * Apply suggestions from code review * up * up * up --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
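For reference, the consistency decoder lands as `ConsistencyDecoderVAE` and can be swapped in for the regular VAE of a Stable Diffusion pipeline. A minimal sketch, assuming the `openai/consistency-decoder` checkpoint id:

```python
import torch
from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

# Load the consistency decoder VAE (checkpoint id assumed here).
vae = ConsistencyDecoderVAE.from_pretrained(
    "openai/consistency-decoder", torch_dtype=torch.float16
)

# Drop-in replacement for the default SD decoder.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
```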
-
- 08 Nov, 2023 4 commits
-
-
Dhruv Nair authored
* fix prompt bug * add test
-
Sayak Paul authored
* fix mask feature condition. * debug * remove identical test * set correct * Empty-Commit
-
Patrick von Platen authored
* [LCM] Fix img2img * make fix-copies * make fix-copies * make fix-copies * up
-
YiYi Xu authored
skip rendering Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
- 07 Nov, 2023 4 commits
-
-
dg845 authored
* Refactor LCMScheduler.step such that prev_sample == denoised at the last timestep in the schedule. * Make the timestep scaling used when calculating boundary conditions configurable. * Reparameterize timestep_scaling to be a multiplicative rather than a division-based scaling. * make style * fix dtype conversion * make style --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
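The reparameterization described above is easiest to read as code. A rough sketch of multiplicative `timestep_scaling` in the boundary-condition scalings (the function name and exact c_skip/c_out expressions are assumptions inferred from the commit message, not a verbatim excerpt of `LCMScheduler`):

```python
def get_scalings_for_boundary_condition(timestep, timestep_scaling=10.0, sigma_data=0.5):
    # Multiplicative reparameterization: multiply the raw timestep by a
    # configurable factor instead of dividing it by a fixed constant.
    scaled_t = timestep * timestep_scaling
    c_skip = sigma_data**2 / (scaled_t**2 + sigma_data**2)
    c_out = scaled_t / (scaled_t**2 + sigma_data**2) ** 0.5
    return c_skip, c_out

# At the last timestep in the schedule, step() now returns the denoised
# prediction directly, so prev_sample == denoised and no extra noise is added.
```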
-
Sayak Paul authored
* debug * support non-square images * add: test * fix: test --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Sayak Paul authored
* fix embeds * remove todo * add: test * better name
-
Dhruv Nair authored
* fix model xformers test * update
-
- 06 Nov, 2023 1 commit
-
-
Sayak Paul authored
* init pixart alpha pipeline * fix: import * script * add: vae to the pipeline * add: vae_scale_factor * add: checkpoint_path * clean conversion script a bit. * size embeddings. * fix: size embedding * update script * support for interpolation of position embedding. * support for conditioning. * final layer * align if encode_prompt * support for caption embedding * refactor * start cross attention * cross_attention_dim * support for resolution and aspect_ratio * support for caption projection * refactor patch embeddings * batch_size * squeeze * fix final block. * fix: interpolation scale. * debugging * make --checkpoint_path non-required. * remove num_tokens * timesteps -> timestep * update conversion script. * clean * debug * fix * boom * some changes * save * up * remove i * fix more tests * DPMSolverMultistepScheduler * offloading * fix conversion script * remove print * remove support for negative prompt embeds. * typo. * remove extra kwargs * bring conversion script to where it was * trying my luck * trying my luck again * clean up * update example * support for 512 * remove spacing * finalize docs. * test debug * fix: assertion values. * fix: repeat * remove prints. * Apply suggestions from code review * Correct more * Change all * Clean more * fix more * Fix more * address patrick's comments. * remove unneeded args * clean up pipeline. * style * make the use of additional conditions better conditioned. * None better * dtype * height and width validation * add a note about size brackets. * spit out slow test outputs. * fix optional test * remove unneeded comment --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
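A minimal usage sketch of the pipeline introduced here (the checkpoint id and generation settings are assumptions for illustration):

```python
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")

image = pipe("an astronaut riding a green horse", num_inference_steps=20).images[0]
```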
-
- 05 Nov, 2023 1 commit
-
-
YiYi Xu authored
* draft1 * update * style * move to the end of loop * update * update callback_on_step_end_inputs * Revert "update" This reverts commit 5f9b153183d0cde3b850f14024d2e37ae8c19576. * Revert "update callback_on_step_end_inputs" This reverts commit 44889f4dabad95b7ebb330faa5f1955b5d008c88. * update * update test required_optional_params * remove self.lora_scale * img2img * inpaint * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * fix * apply feedback on img2img + inpaint: keep only important pipeline attributes * depth * pix2pix * make _callback_tensor_inputs a class variable so that we can use it for testing * add a basic test for callback * add a read-only tensor input timesteps + fix tests * add second test for callback cfg * sdxl * sdxl img2img * sdxl inpaint * kandinsky prior * kandinsky decoder * kandinsky img2img + combined * kandinsky inpaint * fix copies * fix * consistent default inputs * fix copies * wuerstchen_prior prior * test_wuerstchen_decoder + fix test for prior * wuerstchen_combined pipeline + skip tests * skip test for kandinsky combined * lcm * remove timesteps etc * add doc string * copies * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * make style and improve tests * up * up * fix more * fix cfg test * tests for callbacks * fix for real * update * lcm img2img * add doc * add doc page to index --------- Co-authored-by:
yiyixuxu <yixu310@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com>
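The new hook takes a `callback_on_step_end` callable plus a `callback_on_step_end_tensor_inputs` list naming which tensors are exposed to it. A sketch of the intended usage, with an illustrative dynamic-CFG callback (the attribute names used inside the callback are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def disable_cfg_after_half(pipe, step_index, timestep, callback_kwargs):
    # Turn off classifier-free guidance halfway through sampling by dropping
    # the negative half of the prompt embeddings.
    if step_index == int(pipe.num_timesteps * 0.5):
        prompt_embeds = callback_kwargs["prompt_embeds"]
        callback_kwargs["prompt_embeds"] = prompt_embeds.chunk(2)[-1]
        pipe._guidance_scale = 0.0
    return callback_kwargs

image = pipe(
    "a photo of a cat",
    callback_on_step_end=disable_cfg_after_half,
    callback_on_step_end_tensor_inputs=["prompt_embeds"],
).images[0]
```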
-
- 03 Nov, 2023 3 commits
-
-
Sayak Paul authored
* support for tiny autoencoder in img2img Co-authored-by:
slep0v <37597789+slep0v@users.noreply.github.com> * copy fix * line space * line space * clean up * spit out expected value * spit out expected value * assertion values. * assertion values. --------- Co-authored-by:
slep0v <37597789+slep0v@users.noreply.github.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
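With this change the tiny autoencoder (TAESD) can replace the full VAE in the img2img pipeline as well. A rough sketch, assuming the `madebyollin/taesd` checkpoint and a placeholder input image path:

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Swap in the tiny autoencoder for fast latent encode/decode.
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

init_image = load_image("input.png")  # placeholder path for any RGB init image
image = pipe("a fantasy landscape", image=init_image, strength=0.75).images[0]
```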
-
dg845 authored
* Clean up LCM pipeline and pipeline test code. * Add comment for LCM img2img sampling loop.
-
YiYi Xu authored
fix a bug in `AutoPipeline.from_pipe()` when creating a controlnet pipeline from an existing controlnet pipeline (#5638) fix Co-authored-by: yiyixuxu <yixu310@gmail.com>
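For context, a sketch of the call pattern the fix targets: mapping an existing ControlNet text-to-image pipeline to its image-to-image counterpart with `from_pipe` (model ids are placeholders):

```python
import torch
from diffusers import AutoPipelineForImage2Image, AutoPipelineForText2Image, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe_t2i = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)

# Reuse the already-loaded components (including the controlnet) for img2img;
# this is the path that previously failed when the source pipeline was itself
# a controlnet pipeline.
pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i)
```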
-
- 02 Nov, 2023 3 commits
-
-
Patrick von Platen authored
* [LCM] Clean up implementations * Add all * correct more * correct more * finish * up
-
Dhruv Nair authored
* draft design * clean up * update pipeline * add tests * change motion block * update model test * make style * update * fix embeddings * merge upstream * make fix-copies * fix bug * fix mistake * add docs * fix docstrings * clean up * update
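This draft-design work appears to be the motion-module (AnimateDiff-style) video pipeline; under that assumption, a usage sketch with checkpoint ids that are likewise assumptions:

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif

# Motion adapter and base model ids are assumptions for illustration.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")

output = pipe("a rocket launching into space", num_frames=16, num_inference_steps=25)
export_to_gif(output.frames[0], "animation.gif")
```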
-
Patrick von Platen authored
* fix more * fix more
-
- 01 Nov, 2023 4 commits
-
-
Younes Belkada authored
* fix civitai bug * add test * up * fix test * added slow test. * style * Update src/diffusers/utils/peft_utils.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * Update src/diffusers/utils/peft_utils.py --------- Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
-
Patrick von Platen authored
-
ilisparrow authored
* Enable lora for sdxl adapters too. Issue #5516 * fix: assertion values. * Use numpy_cosine_similarity_distance on the arrays Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * Use numpy_cosine_similarity_distance on the arrays Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * Changed imports orders to pass tests Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> --------- Co-authored-by:
Ilias A <iliasamri00@gmail.com> Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
-
clarencechen authored
* Update final model offload for more pipelines. Add test to ensure all pipeline components are returned to CPU after execution with model offloading. * Add comment to explain early UNet offload in Text-to-Video pipeline * Style
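A small sketch of the behavior the new test checks: after a call with model offloading enabled, components should end up back on the CPU (the assertion below is illustrative, not the test's exact code):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

_ = pipe("a photo of a dog", num_inference_steps=2)

# With the final offload in place, every hooked component should be back on the
# CPU once the call returns, not left on the GPU by the last-executed module.
assert all(p.device.type == "cpu" for p in pipe.unet.parameters())
```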
-
- 30 Oct, 2023 1 commit
-
-
Cheng Lu authored
* stabilize dpmpp for sdxl by using euler at the final step * add Lu's uniform logSNR time steps * add test * fix check_copies * fix tests --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
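Both behaviors are surfaced as scheduler config options; a sketch of opting into them on an SDXL pipeline (flag names are taken from the commit description and should be treated as assumptions):

```python
from diffusers import DPMSolverMultistepScheduler, StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")

# Use an Euler step at the final denoising step to avoid numerical issues,
# and Lu's uniform-logSNR spacing for the sigma schedule.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, euler_at_final=True, use_lu_lambdas=True
)
```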
-
- 26 Oct, 2023 3 commits
-
-
Patrick von Platen authored
* upload custom remote poc * up * make style * finish * better name * Apply suggestions from code review * Update tests/pipelines/test_pipelines.py * more fixes * remove ipdb * more fixes * fix more * finish tests --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
p1kit authored
Optimize test configurations for faster execution
-
Patrick von Platen authored
* [Tests] Speed up mixture of experts tests * make style
-
- 25 Oct, 2023 1 commit
-
-
YiYi Xu authored
* fix * fix copies * remove heun from tests * add back heun and fix the tests to include 2nd order * fix the other test too * Apply suggestions from code review * Apply suggestions from code review * Apply suggestions from code review * make style * add more comments --------- Co-authored-by:
yiyixuxu <yixu310@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
-
- 24 Oct, 2023 2 commits
-
-
dg845 authored
* initial commit for LatentConsistencyModelPipeline and LCMScheduler based on the community pipeline * Add callback and freeu support. * apply suggestions from review * Clean up LCMScheduler * Remove timeindex argument to LCMScheduler.step. * Add support for clipping or thresholding the predicted original sample. * Remove unused methods and arguments in LCMScheduler. * Improve comment about (lack of) negative prompt support. * Change input guidance_scale to match the StableDiffusionPipeline (Imagen) CFG formulation. * Move lcm_origin_steps from pipeline __call__ to LCMScheduler.__init__/config (as origin_steps). * Fix typo when clipping/thresholding in LCMScheduler. * Add some initial LCMScheduler tests. * add type annotations from review * Fix type annotation bug. * Override test_add_noise_device in LCMSchedulerTest since hardcoded timesteps doesn't work under default settings. * Add generator argument pipeline prepare_latents call. * Cast LCMScheduler.timesteps to long in set_timesteps. * Add onestep and multistep full loop scheduler tests. * Set default height/width to None and don't hardcode guidance scale embedding dim. * Add initial LatentConsistencyPipeline fast and slow tests. * Add initial documentation for LatentConsistencyModelPipeline and LCMScheduler. * Make remaining failing fast tests pass. * make style * Make original_inference_steps configurable from pipeline __call__ again. * make style * Remove guidance_rescale arg from pipeline __call__ since LCM currently doesn't support CFG. * Make LCMScheduler defaults match config of LCM_Dreamshaper_v7 checkpoint. * Fix LatentConsistencyPipeline slow tests and add dummy expected slices. * Add checks for original_steps in LCMScheduler.set_timesteps. * make fix-copies * Improve LatentConsistencyModelPipeline docs. * Apply suggestions from code review Co-authored-by:
Aryan V S <avs050602@gmail.com> * Apply suggestions from code review Co-authored-by:
Aryan V S <avs050602@gmail.com> * Apply suggestions from code review Co-authored-by:
Aryan V S <avs050602@gmail.com> * Update src/diffusers/schedulers/scheduling_lcm.py * Apply suggestions from code review Co-authored-by:
Aryan V S <avs050602@gmail.com> * finish --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Aryan V S <avs050602@gmail.com>
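A minimal usage sketch of the new pipeline, using the LCM_Dreamshaper_v7 checkpoint the commit mentions (step count and guidance value are illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")

# LCM needs only a handful of steps; guidance is applied via an embedding,
# so no negative-prompt / classifier-free guidance batch is used.
image = pipe(
    "a close-up photo of a red panda", num_inference_steps=4, guidance_scale=8.0
).images[0]
```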
-
Bowen Bao authored
* Register BaseOutput subclasses as supported torch.utils._pytree nodes * lint --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
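A rough sketch of what registering a dict-like `BaseOutput` subclass as a pytree node involves; `_register_pytree_node` is torch's private helper and the flatten/unflatten functions below are assumptions, not the PR's exact code:

```python
import torch.utils._pytree as pytree

def register_as_pytree_node(cls):
    """Register a BaseOutput-style (dict-like) output class with torch's pytree utils."""
    def flatten(obj):
        # Children are the field values; the context is the field names.
        return list(obj.values()), list(obj.keys())

    def unflatten(values, context):
        return cls(**dict(zip(context, values)))

    pytree._register_pytree_node(cls, flatten, unflatten)

# In the PR this would be applied to each BaseOutput subclass so that
# torch.compile / pytree utilities can traverse pipeline outputs.
```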
-
- 23 Oct, 2023 2 commits
-
-
Dhruv Nair authored
fix tests
-
Ryan Dick authored
* Update get_dummy_inputs(...) in T2I-Adapter tests to take image height and width as params. * Update the T2I-Adapter unit tests to run with the standard number of UNet down blocks so that all T2I-Adapter down blocks get exercised. * Update the T2I-Adapter down blocks to better match the padding behavior of the UNet. * Revert "Update the T2I-Adapter unit tests to run with the standard number of UNet down blocks so that all T2I-Adapter down blocks get exercised." This reverts commit 6d4a060a34415ec973a252944216f4fb8b9926cd. * Create utility functions for testing the T2I-Adapter downscaling behavior. * (minor) Improve readability with an intermediate named variable. * Statically parameterize T2I-Adapter test dimensions rather than generating them dynamically. * Fix static checks. --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 21 Oct, 2023 1 commit
-
-
Younes Belkada authored
* fix scale unscale v1 * final fixes + CI * fix slow test * oops * fix copies * oops * oops * fix * style * fix copies --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
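The scale being fixed here is the per-call LoRA scale routed through `cross_attention_kwargs`; a sketch of the usage whose un-scaling this repairs (model and LoRA ids are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/lora")  # placeholder LoRA repo or file

# The LoRA contribution is scaled for this call only; the fix ensures the
# layers are properly un-scaled afterwards so later calls are unaffected.
image = pipe("a pixel-art castle", cross_attention_kwargs={"scale": 0.5}).images[0]
```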
-
- 20 Oct, 2023 1 commit
-
-
Vishnu V Jaddipal authored
* Added args, kwargs to ```U * Add UNetMidBlock2D as a supported mid block type * Fix extra init input for UNetMidBlock2D, change allowed types for Mid-block init * Update unet_2d_condition.py * Update unet_2d_condition.py * Update unet_2d_condition.py * Update unet_2d_condition.py * Update unet_2d_condition.py * Update unet_2d_condition.py * Update unet_2d_condition.py * Update unet_2d_condition.py * Update unet_2d_blocks.py * Update unet_2d_blocks.py * Update unet_2d_blocks.py * Update unet_2d_condition.py * Update unet_2d_blocks.py * Updated docstring, increased check strictness Updated the docstring for ```UNet2DConditionModel``` to include ```reverse_transformer_layers_per_block``` and updated checking for nested list type ```transformer_layers_per_block``` * Add basic shape-check test for asymmetrical unets * Update src/diffusers/models/unet_2d_blocks.py Removed blank line Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Update unet_2d_condition.py Remove blank space * Update unet_2d_condition.py Changed docstring for `mid_block_type` * Fixed docstring and wrong default value * Reformat with black * Reformat with necessary commands * Add UNetMidBlockFlat to versatile_diffusion/modeling_text_unet.py to ensure consistency * Removed args, kwargs, use on mid-block type * Make fix-copies * Update src/diffusers/models/unet_2d_condition.py Wrap into single line Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * make fix-copies --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
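A small sketch of the new configuration surface: building a `UNet2DConditionModel` with the plain `UNetMidBlock2D` (no cross-attention) as its mid block. The channel and layer values are illustrative test-sized numbers:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    mid_block_type="UNetMidBlock2D",      # newly supported mid block type
    transformer_layers_per_block=(1, 2),  # may also be a nested list for asymmetric
                                          # UNets, paired with reverse_transformer_layers_per_block
    cross_attention_dim=32,
)
```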
-
- 17 Oct, 2023 2 commits
-
-
Arka authored
* changed channel parameters for UNet and VAE. Decreased hidden layer sizes while increasing attention heads and intermediate size * changed the assertion check range * clean up --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Sayak Paul authored
* fix: sdxl pipeline when unet is not available. * fix more * account for text * fix more * don't make unet optional. * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * split conditionals. * add optional components to sdxl pipeline * propagate changes to the rest of the pipelines. * add: test * add to all * fix: rest of the pipelines. * use pipeline_class variable * separate pipeline mixin * use safe_serialization * fix: test * access actual output. * add: optional test to adapter and ip2p sdxl pipeline tests/ * add optional test to controlnet sdxl. * fix tests * fix ip2p tests * fix more * fix more. * use np output type. * fix for StableDiffusionXLMultiControlNetPipelineFastTests. * fix: SDXLOptionalComponentsTesterMixin * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * fix tests * Empty-Commit * revert previous * quality * fix: test --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
-
- 13 Oct, 2023 1 commit
-
-
Younes Belkada authored
* v1 * add tests and fix previous failing tests * fix CI * add tests + v1 `PeftLayerScaler` * style * add scale retrieving mechanism * fix CI * up * up * simple approach --> not same results for some reason * fix issues * fix copies * remove unneeded method * active adapters! * fix merge conflicts * up * up * kohya - test-1 * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * fix scale * fix copies * add comment * multi adapters * fix tests * oops * v1 faster loading - in progress * Revert "v1 faster loading - in progress" This reverts commit ac925f81321e95fc8168184c3346bf3d75404d5a. * kohya same generation * fix some slow tests * peft integration features for unet lora 1. Support for Multiple ranks/alphas 2. Support for Multiple active adapters 3. Support for enabling/disabling LoRAs * fix `get_peft_kwargs` * Update loaders.py * add some tests * add unfuse tests * fix tests * up * add set adapter from sourab and tests * fix multi adapter tests * style & quality * style * remove comment * fix `adapter_name` issues * fix unet adapter name for sdxl * fix enabling/disabling adapters * fix fuse / unfuse unet * nit * fix * up * fix cpu offloading * fix another slow test * fix another offload test * add more tests * all slow tests pass * style * fix alpha pattern for unet and text encoder * Update src/diffusers/loaders.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * Update src/diffusers/models/attention.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * up * up * clarify comment * comments * change comment order * change comment order * style & quality * Update tests/lora/test_lora_layers_peft.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * fix bugs and add tests * Update src/diffusers/models/modeling_utils.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * Update src/diffusers/models/modeling_utils.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * refactor * suggestion * add break statement * add compile tests * move slow tests to peft tests as I modified them * quality * refactor a bit * style * change import * style * fix CI * refactor slow tests one last time * style * oops * oops * oops * final tweak tests * Apply suggestions from code review Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/loaders.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * comments * Apply suggestions from code review Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * remove comments * more comments * try * revert * add `safe_merge` tests * add comment * style, comments and run tests in fp16 * add warnings * fix doc test * replace with `adapter_weights` * add `get_active_adapters()` * expose `get_list_adapters` method * better error message * Apply suggestions from code review Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * style * trigger slow lora tests * fix tests * maybe fix last test * revert * Update src/diffusers/loaders.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * Update src/diffusers/loaders.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * Update src/diffusers/loaders.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * Update src/diffusers/loaders.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * move `MIN_PEFT_VERSION` * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * let's not use class variable * fix few nits * change a bit offloading logic * check earlier * rm unneeded block * break long line * return empty list * change logic a bit and address comments * add typehint * remove parenthesis * fix * revert to fp16 in tests * add to gpu * revert to old test * style * Update src/diffusers/loaders.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * change indent * Apply suggestions from code review * Apply suggestions from code review --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com> Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com>
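A usage sketch of the adapter API this integration exposes (the LoRA repo ids are placeholders):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load two LoRAs under distinct adapter names (repo ids are placeholders).
pipe.load_lora_weights("user/pixel-lora", adapter_name="pixel")
pipe.load_lora_weights("user/toy-lora", adapter_name="toy")

# Activate both with per-adapter weights, inspect, then disable.
pipe.set_adapters(["pixel", "toy"], adapter_weights=[1.0, 0.6])
print(pipe.get_active_adapters())   # e.g. ["pixel", "toy"]
print(pipe.get_list_adapters())     # adapters registered per component
pipe.disable_lora()
```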
-
- 12 Oct, 2023 1 commit
-
-
Dhruv Nair authored
* move xformers to dedicated runner * fix * remove ptl from test runner images
-
- 09 Oct, 2023 4 commits
-
-
Patrick von Platen authored
* Fix fuse Lora * improve a bit * make style * Update src/diffusers/models/lora.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * ciao C file * ciao C file * test & make style --------- Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
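The path being fixed is the `fuse_lora` / `unfuse_lora` pair; a brief sketch (the LoRA path is a placeholder):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("path/to/lora")  # placeholder

# Bake the LoRA weights into the base layers at a given scale, then undo it.
pipe.fuse_lora(lora_scale=0.7)
image = pipe("a watercolor fox", num_inference_steps=20).images[0]
pipe.unfuse_lora()
```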
-
__mo_san__ authored
* decrease UNet2DConditionModel & ControlNetModel blocks * decrease UNet2DConditionModel & ControlNetModel blocks * decrease even more blocks & number of norm groups * decrease vae block out channels and number of norm groups * fix code style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sebastian authored
* Reduce number of down block channels * Remove debug code * Set new expected image slice values for sdxl euler test
-
chuzh authored
Fix [core/GLIGEN]: TypeError when iterating over 0-d tensor with In-painting mode when EulerAncestralDiscreteScheduler is used (#5305) * fix(gligen_inpaint_pipeline):
🐛 Wrap the timestep() 0-d tensor in a list to convert it to a 1-d tensor. This avoids the TypeError caused by trying to directly iterate over a 0-dimensional tensor in the denoising stage * test(gligen/gligen_text_image): unit test using the EulerAncestralDiscreteScheduler --------- Co-authored-by: zhen-hao.chu <zhen-hao.chu@vitrox.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
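A minimal illustration of the failure mode and the wrap-in-a-list fix described above (a simplification of the pipeline code, not a verbatim excerpt):

```python
import torch

timestep = torch.tensor(981)  # a 0-d tensor, as EulerAncestralDiscreteScheduler can return

# for t in timestep:              # TypeError: iteration over a 0-d tensor
#     ...

# The fix: wrap the 0-d tensor in a list so the denoising loop can iterate over it.
if timestep.dim() == 0:
    timestep = [timestep]
for t in timestep:
    print(int(t))
```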
-