"docs/vscode:/vscode.git/clone" did not exist on "e58ad6dd66413ef34585348cdbac1664da391fa9"
- 28 Mar, 2023 21 commits
-
-
M. Tolga Cangöz authored
Fix typos
-
M. Tolga Cangöz authored
Fix typos
-
M. Tolga Cangöz authored
Fix typos
-
Sayak Paul authored
[Tests] Adds a test to check if `image_embeds` None case is handled properly in `StableUnCLIPImg2ImgPipeline` (#2861) * improve stable unclip doc. * add: test to check if image_embeds None case is handled. * apply formatting.
-
Nipun Jindal authored
Co-authored-by: njindal <njindal@adobe.com>
-
dg845 authored
Add warning in __init__ if user loads a checkpoint with pipeline.unet.config.in_channels other than 9.
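A minimal sketch of the kind of check this adds, assuming the inpainting pipeline's __init__ can see the loaded UNet and a module-level logger (the helper name is illustrative, not the actual implementation):

    import logging

    logger = logging.getLogger(__name__)


    def warn_if_not_inpainting_unet(unet) -> None:
        # Inpainting checkpoints concatenate the latents (4), the mask (1) and the
        # masked-image latents (4) into 9 input channels; anything else is most
        # likely a plain text-to-image UNet.
        if unet.config.in_channels != 9:
            logger.warning(
                f"You have loaded a UNet with {unet.config.in_channels} input channels, "
                "but the inpainting pipeline expects 9. Results are likely to be poor "
                "unless you load a dedicated inpainting checkpoint."
            )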
-
Felix Blanke authored
Add last_epoch arg to optimization.get_scheduler. Allows the specification of the index of the last epoch when resuming training.
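A short sketch of how the new argument can be used when resuming training; the last_epoch keyword comes from this commit, everything else (optimizer, step counts) is illustrative:

    import torch
    from diffusers.optimization import get_scheduler

    model = torch.nn.Linear(4, 4)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # PyTorch expects 'initial_lr' in the param groups when resuming a scheduler;
    # it is normally restored via optimizer.load_state_dict, set here to keep the
    # sketch self-contained.
    for group in optimizer.param_groups:
        group.setdefault("initial_lr", group["lr"])

    # Continue the LR curve from step 1000 instead of restarting the warmup.
    lr_scheduler = get_scheduler(
        "cosine",
        optimizer=optimizer,
        num_warmup_steps=500,
        num_training_steps=10_000,
        last_epoch=999,
    )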
-
dg845 authored
* Change the docs to use the parent DiffusionPipeline class when loading a checkpoint using from_pretrained() instead of a child class (e.g. StableDiffusionPipeline) where possible. * Run make style to fix style issues. * Change more docs to use DiffusionPipeline rather than a subclass. --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
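For example, the loading pattern the docs now prefer (checkpoint name shown for illustration only):

    from diffusers import DiffusionPipeline

    # The parent class reads the checkpoint's model_index.json and instantiates the
    # matching subclass (here a StableDiffusionPipeline) automatically.
    pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    image = pipe("an astronaut riding a horse").images[0]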
-
John HU authored
Fix link to LoRA training guide
-
cmdr2 authored
Update the legacy inpainting SD pipeline, to allow calling it with only prompt_embeds (instead of always requiring a prompt) (#2842) Fix error 'required positional argument: prompt' when Legacy Inpaint is called only with prompt_embeds
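A rough sketch of what the change enables, with placeholder image paths and prompt embeddings computed from the pipeline's own tokenizer and text encoder; treat it as an assumption-laden illustration rather than the canonical usage:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipelineLegacy

    pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained("runwayml/stable-diffusion-v1-5")

    # Placeholder inputs: any RGB image plus a black/white mask of the same size.
    init_image = Image.open("input.png").convert("RGB").resize((512, 512))
    mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

    # Encode the prompt manually instead of handing the pipeline a raw string.
    text_inputs = pipe.tokenizer(
        "a cat sitting on a park bench",
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        prompt_embeds = pipe.text_encoder(text_inputs.input_ids.to(pipe.device))[0]

    # After the fix, `prompt` can be omitted entirely when `prompt_embeds` is given.
    image = pipe(prompt_embeds=prompt_embeds, image=init_image, mask_image=mask_image).images[0]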
-
Li-Huai (Allan) Lin authored
* Remove duplicate sentence * format
-
Sandeep authored
* Remove suggestion to use cuDNN benchmark in docs * removing the wrong line
-
Aki Sakurai authored
-
junhsss authored
-
Stax124 authored
* Allow user to disable SafetyChecker and enable dtypes if loading models from .ckpt or .safetensors * Fix Import sorting (Ruff error) * Get rid of the dtype convert method as it was implemented all along * Fix the docstring * Fix ruff formatting --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Kashif Rasul authored
-
Patrick von Platen authored
Improve init
-
Pedro Cuenca authored
* Workaround for saving dynamo-wrapped models. * Accept suggestion from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Apply workaround when overriding pipeline components. * Ensure the correct config.json is saved to disk, instead of the dynamo class. * Save correct module (not compiled one) * Add test * style * fix docstrings * Go back to using string comparisons. PyTorch CPU does not have _dynamo. * Simple test for save_pretrained of compiled models. * Helper function to test whether module is compiled. --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
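The gist of the workaround as a sketch: torch.compile wraps a module in an OptimizedModule that keeps the original under _orig_mod, so saving should unwrap it first (the helper name here is illustrative, not necessarily the one added in this commit):

    import torch
    from diffusers import UNet2DConditionModel


    def unwrap_if_compiled(module: torch.nn.Module) -> torch.nn.Module:
        # torch.compile returns a wrapper whose original module lives in `_orig_mod`;
        # saving the wrapper directly would record the wrapper class in config.json.
        return getattr(module, "_orig_mod", module)


    unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
    compiled_unet = torch.compile(unet)

    unwrap_if_compiled(compiled_unet).save_pretrained("./unet-checkpoint")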
-
YiYi Xu authored
* add train_controlnet_flax --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Sayak Paul authored
* add: better warning messages when handling multiple conditioning. * fix: handling of controlnet_conditioning_scale
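For context, multi-ControlNet calls pass one conditioning image and one scale per ControlNet; a sketch with example checkpoints and placeholder conditioning maps:

    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    controlnets = [
        ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny"),
        ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose"),
    ]
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnets
    )

    # Placeholder paths to pre-computed conditioning maps (Canny edges, OpenPose skeleton).
    canny_image = load_image("canny.png")
    pose_image = load_image("pose.png")

    # One scale per ControlNet; passing a single float applies it to all of them.
    image = pipe(
        "a dancer in a forest",
        image=[canny_image, pose_image],
        controlnet_conditioning_scale=[1.0, 0.8],
    ).images[0]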
-
Sayak Paul authored
-
- 27 Mar, 2023 5 commits
-
-
Pedro Cuenca authored
* Helper function to disable custom attention processors. * Restore code deleted by mistake. * Format * Fix modeling_text_unet copy.
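Usage is roughly as follows, assuming the helper is exposed on the UNet as set_default_attn_processor:

    from diffusers import UNet2DConditionModel

    unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

    # After attaching LoRA layers or other custom attention processors, this drops
    # the UNet back to the stock processors in a single call.
    unet.set_default_attn_processor()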
-
Eugene Lyapustin authored
-
Patrick von Platen authored
-
Pedro Cuenca authored
* Apply same ruff settings as in transformers. See https://github.com/huggingface/transformers/blob/main/pyproject.toml Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com> * Apply new style rules * Style Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com> * style * remove list, ruff wouldn't auto fix. --------- Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
-
Sayak Paul authored
-
- 24 Mar, 2023 7 commits
-
-
Bahjat Kawar authored
* comment update * comment update
-
Sayak Paul authored
* update docs to reflect the updated ckpts. * update: point about prompt. * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * remove image resizing. * Apply suggestions from code review * Apply suggestions from code review --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Patrick von Platen authored
* up * fix more 7 * up * finish
-
PeixuanZuo authored
* update import onnxruntime package, enable onnxruntime-rocm and onnxruntime-training * add ort_nightly_gpu
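The import logic amounts to probing a list of candidate distributions and accepting whichever one is installed; a rough sketch of that idea (the candidate list mirrors the commit message, the variable names are illustrative):

    import importlib.util
    from importlib import metadata

    # All of these wheels ship the importable `onnxruntime` module.
    _ORT_CANDIDATES = (
        "onnxruntime",
        "onnxruntime-gpu",
        "ort_nightly_gpu",
        "onnxruntime-rocm",
        "onnxruntime-training",
    )

    _onnx_available = importlib.util.find_spec("onnxruntime") is not None
    _onnxruntime_version = None
    if _onnx_available:
        # Record which distribution actually provided the module; first match wins.
        for candidate in _ORT_CANDIDATES:
            try:
                _onnxruntime_version = metadata.version(candidate)
                break
            except metadata.PackageNotFoundError:
                continue
        _onnx_available = _onnxruntime_version is not None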
-
Kashif Rasul authored
* Relax DiT test * relax 2 more tests * fix style * skip test on mac due to older protobuf
-
Bahjat Kawar authored
* TIME first commit * styling. * styling 2. * fixes; tests * apply styling and doc fix. * remove sups. * fixes * remove temp file * move augmentations to const * added doc entry * code quality * customize augmentations * quality * quality --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Haofan Wang authored
-
- 23 Mar, 2023 7 commits
-
-
Sanchit Gandhi authored
* Add AudioLDM * up * add vocoder * start unet * unconditional unet * clap, vocoder and vae * clean-up: conversion scripts * fix: conversion script token_type_ids * clean-up: pipeline docstring * tests: from SD * clean-up: cpu offload vocoder instead of safety checker * feat: adapt tests to audioldm * feat: add docs * clean-up: amend pipeline docstrings * clean-up: make style * clean-up: make fix-copies * fix: add doc path to toctree * clean-up: args for conversion script * clean-up: paths to checkpoints * fix: use conditional unet * clean-up: make style * fix: type hints for UNet * clean-up: docstring for UNet * clean-up: make style * clean-up: remove duplicate in docstring * clean-up: make style * clean-up: make fix-copies * clean-up: move imports to start in code snippet * fix: pass cross_attention_dim as a list/tuple to unet * clean-up: make fix-copies * fix: update checkpoint path * fix: unet cross_attention_dim in tests * film embeddings -> class embeddings * Apply suggestions from code review Co-authored-by: Will Berman <wlbberman@gmail.com>
* fix: unet film embed to use existing args * fix: unet tests to use existing args * fix: make style * fix: transformers import and version in init * clean-up: make style * Revert "clean-up: make style" This reverts commit 5d6d1f8b324f5583e7805dc01e2c86e493660d66. * clean-up: make style * clean-up: use pipeline tester mixin tests where poss * clean-up: skip attn slicing test * fix: add torch dtype to docs * fix: remove conversion script out of src * fix: remove .detach from 1d waveform * fix: reduce default num inf steps * fix: swap height/width -> audio_length_in_s * clean-up: make style * fix: remove nightly tests * fix: imports in conversion script * clean-up: slim-down to two slow tests * clean-up: slim-down fast tests * fix: batch consistent tests * clean-up: make style * clean-up: remove vae slicing fast test * clean-up: propagate changes to doc * fix: increase test tol to 1e-2 * clean-up: finish docs * clean-up: make style * feat: vocoder / VAE compatibility check * feat: possibly expand / cut audio waveform * fix: pipeline call signature test * fix: slow tests output len * clean-up: make style * make style --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: William Berman <WLBberman@gmail.com>
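A minimal usage sketch for the new pipeline, assuming the cvssp/audioldm checkpoint on the Hub; prompt and parameter values are illustrative:

    import scipy.io.wavfile
    import torch
    from diffusers import AudioLDMPipeline

    pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm", torch_dtype=torch.float16).to("cuda")

    # Text-to-audio: the output length is requested in seconds rather than height/width.
    audio = pipe(
        "techno music with a strong, upbeat tempo and high melodic riffs",
        num_inference_steps=10,
        audio_length_in_s=5.0,
    ).audios[0]

    # The waveform comes back as a 1-D numpy array at the vocoder's 16 kHz rate.
    scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)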
-
Steven Liu authored
* add colab notebook and spaces * fix image link
-
YiYi Xu authored
* add controlnet flax --------- Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
Pedro Cuenca authored
* Skip mps in text-to-video tests. * style * Skip UNet3D mps tests.
-
Haofan Wang authored
* Update train_text_to_image_lora.py * Update train_text_to_image_lora.py * Update train_text_to_image_lora.py * Update train_text_to_image_lora.py * format
-
Sayak Paul authored
* small fixes to the text to video doc. * add: Spaces link. * add: warning on research-only model.
-
Nipun Jindal authored
[2737]: Add DPMSolverMultistepScheduler to CLIP guided community pipelines Co-authored-by: njindal <njindal@adobe.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
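The scheduler swap itself follows the usual diffusers pattern; a sketch assuming the clip_guided_stable_diffusion community pipeline and example checkpoints:

    from transformers import CLIPImageProcessor, CLIPModel
    from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

    clip_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
    feature_extractor = CLIPImageProcessor.from_pretrained(clip_id)
    clip_model = CLIPModel.from_pretrained(clip_id)

    pipe = DiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        custom_pipeline="clip_guided_stable_diffusion",
        clip_model=clip_model,
        feature_extractor=feature_extractor,
    )

    # Rebuild the scheduler from the existing config so the beta schedule carries over.
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
    image = pipe("fantasy landscape, concept art", num_inference_steps=25).images[0]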
-