- 12 May, 2023 1 commit
-
-
Laureηt authored
* Add `sigmoid` beta scheduler to `DDPMScheduler` docstring * Add `sigmoid` beta scheduler to `RePaintScheduler` docstring --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
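For reference, a minimal sketch of selecting the documented schedule (assuming the standard `beta_schedule` argument):

```python
from diffusers import DDPMScheduler

# "sigmoid" is one of the beta schedules covered by the updated docstring
scheduler = DDPMScheduler(num_train_timesteps=1000, beta_schedule="sigmoid")
```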
-
- 11 May, 2023 4 commits
-
-
Patrick von Platen authored
* Improve checkpointing lora * fix more * Improve doc string * Update src/diffusers/loaders.py * make style * Apply suggestions from code review * Update src/diffusers/loaders.py * Apply suggestions from code review * Apply suggestions from code review * better * Fix all * Fix multi-GPU dreambooth * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Fix all * make style * make style --------- Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
-
Patrick von Platen authored
Add omegaconf
-
Stas Bekman authored
* [deepspeed] partial ZeRO-3 support * cleanup * improve deepspeed fixes * Improve * make style --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Takuma Mori authored
* add inferring_controlnet_cond_batch * Revert "add inferring_controlnet_cond_batch" This reverts commit abe8d6311d4b7f5b9409ca709c7fabf80d06c1a9. * set guess_mode to True whenever global_pool_conditions is True Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * nit * add integration test --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
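A hedged sketch of the behavior described above; the model ids are illustrative, and the pipeline is assumed to force `guess_mode=True` internally when the loaded ControlNet's config has `global_pool_conditions=True`:

```python
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# A ControlNet whose config sets global_pool_conditions=True (illustrative checkpoint id)
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11e_sd15_shuffle")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
)

cond_image = Image.new("RGB", (512, 512))  # placeholder conditioning image
# guess_mode is expected to be overridden to True for this ControlNet
image = pipe("a photo of a house", image=cond_image).images[0]
```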
-
- 10 May, 2023 3 commits
-
-
Patrick von Platen authored
-
Rupert Menneer authored
* StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t. the passed height and width. The default is already set to 512. This addresses the common tensor mismatch error. Also moved the type check into the relevant function to keep the main pipeline body tidy. * Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests. After the previous commit these tests were failing because height and width now need to be passed into the prepare_mask_and_masked_image function; the code is updated and a height/width variable is added per unit test, which seemed more appropriate than the hard-coded values. * Added a resolution test to StableDiffusionInpaintPipelineSlowTests: this unit test takes the input and resizes it into something that would previously fail (e.g. throw a tensor mismatch error / not a multiple of 8), then passes it through the pipeline and verifies that the output has the correct dims w.r.t. the passed height and width. --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
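A sketch of the fixed behavior, assuming the standard inpainting call signature; inputs that do not match the requested resolution should now be resized instead of raising a shape error:

```python
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")

# Inputs deliberately not 512x512 and not a multiple of 8
init_image = Image.new("RGB", (600, 450))
mask_image = Image.new("L", (600, 450))

# The pipeline now resizes both inputs to the requested height/width
result = pipe(
    "fill in the masked area",
    image=init_image,
    mask_image=mask_image,
    height=512,
    width=512,
).images[0]
```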
-
Sayak Paul authored
* add: a warning message when using xformers in a PT 2.0 env. * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
-
- 09 May, 2023 3 commits
-
-
Steven Liu authored
* clarify safetensor docstring * fix typo * apply feedback
-
YiYi Xu authored
* add text2img * fix-copies * add * add all other pipelines * add * add * add * add * add * make style * style + fix copies --------- Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
Will Berman authored
* update IF stage I pipelines add fixed variance schedulers and lora loading * added kv lora attn processor * allow loading into alternative lora attn processor * make vae optional * throw away predicted variance * allow loading into added kv lora layer * allow load T5 * allow pre compute text embeddings * set new variance type in schedulers * fix copies * refactor all prompt embedding code class prompts are now included in pre-encoding code max tokenizer length is now configurable embedding attention mask is now configurable * fix for when variance type is not defined on scheduler * do not pre compute validation prompt if not present * add example test for if lora dreambooth * add check for train text encoder and pre compute text embeddings
-
- 08 May, 2023 3 commits
-
-
Steven Liu authored
fix docstring Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Patrick von Platen authored
-
pdoane authored
* Batched load of textual inversions - Only call resize_token_embeddings once per batch as it is the most expensive operation - Allow pretrained_model_name_or_path and token to be an optional list - Remove Dict from type annotation pretrained_model_name_or_path as it was not supported in this function - Add comment that single files (e.g. .pt/.safetensors) are supported - Add comment for token parameter - Convert token override log message from warning to info * Update src/diffusers/loaders.py Check for duplicate tokens Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update condition for None tokens --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
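A minimal sketch of the batched call described above (the embedding ids and tokens are placeholders):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Both arguments may now be lists, so resize_token_embeddings runs once per batch
pipe.load_textual_inversion(
    ["sd-concepts-library/cat-toy", "./my_embedding.safetensors"],
    token=["<cat-toy>", "<my-style>"],
)
```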
-
- 06 May, 2023 2 commits
- 05 May, 2023 4 commits
-
-
Will Rice authored
The argument `upsample_size` needs to be added to these modules to allow compatibility with other blocks that require this argument.
-
Cheng Lu authored
* add SDE variant of DPM-Solver and DPM-Solver++ * add test * fix typo * fix typo
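A hedged usage sketch; the `algorithm_type` value below is assumed from the commit description:

```python
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Select the SDE variant of DPM-Solver++ (assumed value "sde-dpmsolver++")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++"
)
```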
-
Patrick von Platen authored
-
Patrick von Platen authored
-
- 03 May, 2023 2 commits
-
-
Cheng Lu authored
* fix multistep dpmsolver for cosine schedule (deepfloyd-if) * fix a typo * Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule * add test, fix style --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
-
Mylo authored
Fix missing variable assignment
-
- 02 May, 2023 1 commit
-
-
Patrick von Platen authored
* Fix more torch compile breaks * add tests * Fix all * fix controlnet * fix more * Add Horace He as co-author. Co-authored-by:
Horace He <horacehe2007@yahoo.com> * Add Horace He as co-author. Co-authored-by:
Horace He <horacehe2007@yahoo.com> --------- Co-authored-by:
Horace He <horacehe2007@yahoo.com>
-
- 01 May, 2023 3 commits
-
-
YiYi Xu authored
* refactor img2img VaeImageProcessor.postprocess * remove copy from for init, run_safety_checker, decode_latents Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> --------- Co-authored-by:
yiyixuxu <yixu@yis-macbook-pro.lan> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
-
Patrick von Platen authored
* fix more * Fix more * fix more * Apply suggestions from code review * fix * make style * make fix-copies * fix * make sure torch compile * Clean * fix test
-
Ilia Larchenko authored
A pipeline object stores its results in `images`, not in `sample`, so the current code blocks don't work.
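The corrected pattern from this fix, as a short example:

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Results live on the `images` attribute, not `sample`
image = pipe("an astronaut riding a horse").images[0]
```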
-
- 28 Apr, 2023 5 commits
-
-
Will Berman authored
The note-seq package throws an error on import because the default installed version of IPython is not compatible with Python 3.8, which we run in the CI. https://github.com/huggingface/diffusers/actions/runs/4830121056/jobs/8605954838#step:7:9
-
Patrick von Platen authored
* Allow disabling torch 2.0 attention * make style * Update src/diffusers/models/attention.py
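One way to opt out, sketched under the assumption that the classic `AttnProcessor` is swapped in place of the PyTorch 2.0 processor:

```python
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Replace the default PyTorch 2.0 attention processor with the vanilla one
pipe.unet.set_attn_processor(AttnProcessor())
```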
-
Jason Kuan authored
* add constant lr with rules * add constant with rules in TYPE_TO_SCHEDULER_FUNCTION * add constant lr rate with rule * hotfix code quality * fix doc style * change name constant_with_rules to piecewise constant
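A hedged sketch of the new scheduler; the `step_rules` string format below is assumed from the commit description, not verified:

```python
import torch
from diffusers.optimization import get_scheduler

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Assumed format: "multiplier:step,multiplier:step,...,final_multiplier"
lr_scheduler = get_scheduler(
    "piecewise_constant",
    optimizer=optimizer,
    step_rules="1:1000,0.5:2000,0.1",
)
```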
-
clarencechen authored
* Update Pix2PixZero Auto-correlation Loss * Add Stable Diffusion DiffEdit pipeline * Add draft documentation and import code * Bugfixes and refactoring * Add option to not decode latents in the inversion process * Harmonize preprocessing * Revert "Update Pix2PixZero Auto-correlation Loss" This reverts commit b218062fed08d6cc164206d6cb852b2b7b00847a. * Update annotations * rename `compute_mask` to `generate_mask` * Update documentation * Update docs * Update Docs * Fix copy * Change shape of output latents to batch first * Update docs * Add first draft for tests * Bugfix and update tests * Add `cross_attention_kwargs` support for all pipeline methods * Fix Copies * Add support for PIL image latents Add support for mask broadcasting Update docs and tests Align `mask` argument to `mask_image` Remove height and width arguments * Enable MPS Tests * Move example docstrings * Fix test * Fix test * fix pipeline inheritance * Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline * Register modules set to `None` in config for `test_save_load_optional_components` * Move fixed logic to specific test class * Clean changes to other pipelines * Update new tests to coordinate with #2953 * Update slow tests for better results * Safety to avoid potential problems with torch.inference_mode * Add reference in SD Pipeline Overview * Fix tests again * Enforce determinism in noise for generate_mask * Fix copies * Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16` * Add LoraLoaderMixin and update `prepare_image_latents` * clean up repeat and reg * bugfix * Remove invalid args from docs Suppress spurious warning by repeating image before latent to mask gen
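A condensed sketch of the DiffEdit flow added here (model id and scheduler setup are illustrative; method names such as `generate_mask` follow the commit notes):

```python
from PIL import Image
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline

pipe = StableDiffusionDiffEditPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

raw_image = Image.new("RGB", (768, 768))  # placeholder input image

# 1) generate a mask from source/target prompts, 2) invert the image, 3) edit
mask = pipe.generate_mask(
    image=raw_image, source_prompt="a bowl of fruits", target_prompt="a basket of pears"
)
inv_latents = pipe.invert(prompt="a bowl of fruits", image=raw_image).latents
edited = pipe(prompt="a basket of pears", mask_image=mask, image_latents=inv_latents).images[0]
```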
-
Sayak Paul authored
* 👽 QoL improvements for LoRA. * better function name? * fix: LoRA weight loading with the new format. * address Patrick's comments. * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * change wording around encouraging the use of load_lora_weights(). * fix: function name. --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
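A short sketch of the encouraged entry point (the checkpoint path is a placeholder):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# load_lora_weights is the recommended way to load LoRA weights (new format supported)
pipe.load_lora_weights("path/to/lora_checkpoint")
image = pipe("a pixel-art cat").images[0]
```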
-
- 27 Apr, 2023 6 commits
-
-
Patrick von Platen authored
-
Robert Dargavel Smith authored
* config fixes * deprecate get_input_dims
-
Xie Zejian authored
-
apolinário authored
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
-
Isaac authored
* removed unnecessary parameters from get_up_block and get_down_block functions * adding resnet_skip_time_act, resnet_out_scale_factor and cross_attention_norm to get_up_block and get_down_block functions --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Nipun Jindal authored
* [2064]: Add stochastic sampler * [2064]: Add stochastic sampler * [2064]: Add stochastic sampler * [2064]: Add stochastic sampler * [2064]: Add stochastic sampler * [2064]: Add stochastic sampler * [2064]: Add stochastic sampler * Review comments * [Review comment]: Add is_torchsde_available() * [Review comment]: Test and docs * [Review comment] * [Review comment] * [Review comment] * [Review comment] * [Review comment] --------- Co-authored-by: njindal <njindal@adobe.com>
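A usage sketch, assuming the stochastic sampler is exposed as `DPMSolverSDEScheduler` and that torchsde is installed (the commit adds an `is_torchsde_available()` check):

```python
from diffusers import DPMSolverSDEScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Swap in the stochastic (SDE) sampler, reusing the existing scheduler config
pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config)
```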
-
- 26 Apr, 2023 3 commits
-
-
Patrick von Platen authored
* Post release * fix more
-
Patrick von Platen authored
-
Patrick von Platen authored
* Add all files * update * Make sure vae is memory efficient for PT 1 * make style
-