- 12 Apr, 2023 4 commits
-
-
Will Berman authored
* fix pipeline __setattr__
* add test
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Pedro Cuenca authored
* add use_memory_efficient params placeholder
* test
* add memory efficient attention jax
* add memory efficient attention jax
* newline
* forgot dot
* Rename use_memory_efficient
* Keep dtype last.
* Actually use key_chunk_size
* Rename symbol
* Apply style
* Rename use_memory_efficient
* Keep dtype last
* Pass `use_memory_efficient_attention` in `from_pretrained`
* Move JAX memory efficient attention to attention_flax.
* Simple test.
* style
Co-authored-by: muhammad_hanif <muhammad_hanif@sofcograha.co.id>
Co-authored-by: MuhHanif <48muhhanif@gmail.com>
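A minimal usage sketch for the flag added here, assuming it is forwarded through `from_pretrained` as the bullets describe; the model id, revision, and dtype below are illustrative:

```python
# Sketch: enabling memory-efficient (chunked) attention when loading the Flax pipeline.
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="bf16",
    dtype=jnp.bfloat16,
    use_memory_efficient_attention=True,  # chunked attention implemented in attention_flax
)
```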
-
Susung Hong authored
* Update index.mdx
* Edit docs & add HF space link
* Only change equation numbers in comments
-
Sayak Paul authored
* add: first draft for a better LoRA enabler.
* make fix-copies.
* feat: backward compatibility.
* add: entry to the docs.
* add: tests.
* fix: docs.
* fix: norm group test for UNet3D.
* feat: add support for flat dicts.
* add deprecation message instead of warning.
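A hypothetical usage sketch, assuming the new entry point is `load_lora_weights` on the pipeline (the repo id, weight scale, and prompt are placeholders, not part of this commit):

```python
# Sketch: loading LoRA weights through the pipeline-level loader and scaling them at call time.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Accepts a Hub repo id, a local path, or an (optionally flat) state dict.
pipe.load_lora_weights("some-user/some-lora-repo")  # placeholder repo id

image = pipe(
    "a photo of a corgi",
    cross_attention_kwargs={"scale": 0.7},  # assumed knob for the LoRA contribution
).images[0]
```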
-
- 11 Apr, 2023 7 commits
-
-
Will Berman authored
add AttnAddedKVProcessor2_0 block
-
Will Berman authored
add group norm type to attention processor cross attention norm

This lets the cross attention norm use either a group norm block or a layer norm block.

The group norm operates along the channels dimension and requires input of shape (batch size, channels, *), whereas a layer norm with a single `normalized_shape` dimension only operates over the least significant dimension, i.e. (*, channels). The channels we want to normalize are the hidden dimension of the encoder hidden states. By convention, the encoder hidden states are always passed as (batch size, sequence length, hidden states), so the layer norm can operate on the tensor without modification, but the group norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length). All existing attention processors share this logic, so it is consolidated in a helper function `prepare_encoder_hidden_states` (see the sketch below).

* prepare_encoder_hidden_states -> norm_encoder_hidden_states (re: @patrickvonplaten)
* move the norm_cross-defined check outside of norm_encoder_hidden_states
* add missing attn.norm_cross check
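A minimal sketch of the helper described above, following the commit's own naming; the real logic lives on the attention module in diffusers and may differ in detail:

```python
# Sketch: normalize encoder hidden states with either a LayerNorm or a GroupNorm.
import torch

def norm_encoder_hidden_states(attn, encoder_hidden_states: torch.Tensor) -> torch.Tensor:
    # encoder_hidden_states: (batch_size, sequence_length, hidden_states)
    if isinstance(attn.norm_cross, torch.nn.LayerNorm):
        # LayerNorm normalizes the last dimension, so the tensor can be used as-is.
        return attn.norm_cross(encoder_hidden_states)

    # GroupNorm expects (batch_size, channels, *): swap the last two dimensions,
    # normalize over the hidden dimension, then swap back.
    hidden = encoder_hidden_states.transpose(1, 2)
    hidden = attn.norm_cross(hidden)
    return hidden.transpose(1, 2)
```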
-
Will Berman authored
* unet time embedding activation function
* typo: act_fn -> time_embedding_act_fn
* flatten conditional
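An illustrative configuration only, showing the new argument named in the bullets above; the other constructor values and the activation choice are placeholders:

```python
# Sketch: instantiate a UNet with an activation applied to the time embedding.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=32,
    cross_attention_dim=64,
    time_embedding_act_fn="silu",  # assumed to accept the standard act_fn names
)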
-
Will Berman authored
* add only cross attention to simple attention blocks
* add test for only_cross_attention (re: @patrickvonplaten)
* better default for mid_block_only_cross_attention: allow it to default to `only_cross_attention` when `only_cross_attention` is given as a single boolean
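A simplified sketch of the defaulting behavior described in the last bullet; this mirrors the idea, not the exact UNet constructor code:

```python
# Sketch: resolve the mid-block flag from the top-level only_cross_attention setting.
def resolve_mid_block_only_cross_attention(mid_block_only_cross_attention, only_cross_attention):
    if mid_block_only_cross_attention is None and isinstance(only_cross_attention, bool):
        # A single boolean applies to all blocks, including the mid block.
        return only_cross_attention
    if mid_block_only_cross_attention is None:
        # A per-block list says nothing about the mid block, so fall back to False.
        return False
    return mid_block_only_cross_attention
```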
-
Pedro Cuenca authored
Applies when doing generation manually and using `guidance_scale` as a static argument.
-
Will Berman authored
-
Patrick von Platen authored
* [Config] Fix config prints and save, load
* Only use potential nn.Modules for dtype and device
* Correct vae image processor
* make sure in_channels is not accessed directly
* make sure in_channels is only accessed via config
* Make sure schedulers only access config attributes
* Make sure to access config in SAG
* Fix vae processor and make style
* add tests
* up
* make style
* Fix more naming issues
* Final fix with vae config
* change more
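An illustration of the access pattern these changes enforce: model hyperparameters are read from `.config` rather than as direct attributes on the module. The model id below is illustrative.

```python
# Sketch: prefer config access for hyperparameters such as in_channels.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

in_channels = unet.config.in_channels  # preferred: read from the config
# unet.in_channels                     # direct attribute access is what this PR moves away from
```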
-
- 10 Apr, 2023 5 commits
-
-
Andranik Movsisyan authored
* add TextToVideoZeroPipeline and CrossFrameAttnProcessor
* add docs for text-to-video zero
* add teaser image for text-to-video zero docs
* Fix review changes. Add documentation. Add test
* clean up the code in pipeline_text_to_video.py. Add descriptive comments and docstrings
* make style && make quality
* make fix-copies
* make requested changes to docs. use huggingface server links for resources, delete res folder
* make style && make quality && make fix-copies
* make style && make quality
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
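A hedged usage sketch of the new pipeline; the base model id, prompt, and generation settings are illustrative:

```python
# Sketch: zero-shot text-to-video on top of a Stable Diffusion checkpoint.
import torch
from diffusers import TextToVideoZeroPipeline

pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

result = pipe(prompt="a panda is surfing, high quality")
frames = result.images  # sequence of generated video frames
```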
-
William Berman authored
`encoder_hid_dim` provides an additional projection for the input `encoder_hidden_states` from `encoder_hidden_dim` to `cross_attention_dim`
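A minimal sketch of that projection; the module name below is illustrative, not the actual diffusers class:

```python
# Sketch: project encoder hidden states to the UNet's cross-attention dimension.
import torch.nn as nn

class EncoderHidProjection(nn.Module):
    def __init__(self, encoder_hid_dim: int, cross_attention_dim: int):
        super().__init__()
        self.proj = nn.Linear(encoder_hid_dim, cross_attention_dim)

    def forward(self, encoder_hidden_states):
        # (batch, seq_len, encoder_hid_dim) -> (batch, seq_len, cross_attention_dim)
        return self.proj(encoder_hidden_states)
```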
-
William Berman authored
-
William Berman authored
-
William Berman authored
-
- 06 Apr, 2023 1 commit
-
-
cmdr2 authored
Update the K-Diffusion SD pipeline to allow calling it with only `prompt_embeds` (instead of always requiring a prompt) (#2962)
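A sketch of calling the pipeline with precomputed embeddings instead of a prompt. The embedding computation below is the standard CLIP text-encoder path and is illustrative, not the pipeline's internal helper; the model id and sampler are placeholders.

```python
# Sketch: pass prompt_embeds directly to the K-Diffusion pipeline.
import torch
from diffusers import StableDiffusionKDiffusionPipeline

pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.set_scheduler("sample_euler")

text_inputs = pipe.tokenizer(
    "an astronaut riding a horse",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
prompt_embeds = pipe.text_encoder(text_inputs.input_ids.to("cuda"))[0]

image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=30).images[0]
```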
-
- 05 Apr, 2023 1 commit
-
-
Patrick von Platen authored
* [Pipeline download] Improve pipeline download for index and passed components
* correct
* add more tests
* up
-
- 04 Apr, 2023 1 commit
-
-
YiYi Xu authored
Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
- 31 Mar, 2023 8 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Nipun Jindal authored
* [2884]: Fix cross_attention_kwargs in StableDiffusionImg2ImgPipeline
* [Build Fix]
* [Build Fix]
Co-authored-by: njindal <njindal@adobe.com>
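An illustrative call showing `cross_attention_kwargs` being forwarded through the img2img pipeline, which is what this fix enables; the image URL, scale value, and prompt are placeholders:

```python
# Sketch: img2img with cross_attention_kwargs (e.g. a LoRA scale) passed through.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://example.com/sketch.png")  # placeholder URL
image = pipe(
    prompt="a fantasy landscape",
    image=init_image,
    strength=0.75,
    cross_attention_kwargs={"scale": 0.5},  # forwarded to the attention processors
).images[0]
```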
-
Sandeep authored
* Remove suggestion to use cuDNN benchmark in docs
* removing the wrong line
* add support for embeds
* fix line length
-
Takuma Mori authored
* add use_karras_sigmas option thanks @Stax124
* fix sigma_min/max from scheduler.sigmas
* add docstring
* revert to use k_diffusion_model.sigma, to(device)
* add integration test
* make style
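A sketch assuming the option is exposed as a call argument on the K-Diffusion pipeline, as the bullets above suggest; the model id and sampler name are illustrative:

```python
# Sketch: sample with a Karras sigma schedule between the scheduler's sigma_min and sigma_max.
import torch
from diffusers import StableDiffusionKDiffusionPipeline

pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
pipe.set_scheduler("sample_dpmpp_2m")

image = pipe(
    "a highly detailed portrait photo",
    num_inference_steps=25,
    use_karras_sigmas=True,  # assumed call argument added by this PR
).images[0]
```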
-
- 30 Mar, 2023 1 commit
-
-
Pi Esposito authored
* add load textual inversion embeddings draft
* fix quality
* fix typo
* make fix copies
* move to textual inversion mixin
* make it accept from sd-concept library
* accept list of paths to embeddings
* fix styling of stable diffusion pipeline
* add dummy TextualInversionMixin
* add docstring to textualinversionmixin
* add case for parsing embedding from auto1111 UI format
* fix style after rebase
* move textual inversion mixin to loaders
* move mixin inheritance to DiffusionPipeline (from StableDiffusionPipeline)
* update dummy class name
* addressed all comments
* fix old dangling import
* fix style
* proposal
* remove bogus
* Apply suggestions from code review
* finish
* make style
* up
* fix code quality
* fix code quality - again
* fix code quality - 3
* fix alt diffusion code quality
* fix model editing pipeline
* Apply suggestions from code review
* Finish
Co-authored-by: Evan Jones <evan.a.jones3@gmail.com>
Co-authored-by: Ana Tamais <aninhamoraestamais@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
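A usage sketch for the new mixin: load a learned concept from the sd-concepts library (or a local embedding file) and reference its placeholder token in the prompt. The model id, concept repo, and token below are illustrative.

```python
# Sketch: load a textual inversion embedding and use its placeholder token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("a <cat-toy> sitting on a bench").images[0]
```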
-
- 28 Mar, 2023 7 commits
-
-
dg845 authored
Add warning in __init__ if user loads a checkpoint with pipeline.unet.config.in_channels other than 9.
-
cmdr2 authored
Update the legacy inpainting SD pipeline to allow calling it with only prompt_embeds (instead of always requiring a prompt) (#2842)
Fix error 'required positional argument: prompt' when Legacy Inpaint is called only with prompt_embeds
-
Li-Huai (Allan) Lin authored
* Remove duplicate sentence * format
-
junhsss authored
-
Stax124 authored
* Allow user to disable SafetyChecker and enable dtypes if loading models from .ckpt or .safetensors
* Fix Import sorting (Ruff error)
* Get rid of the dtype convert method as it was implemented all along
* Fix the docstring
* Fix ruff formatting
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Pedro Cuenca authored
* Workaround for saving dynamo-wrapped models.
* Accept suggestion from code review
* Apply workaround when overriding pipeline components.
* Ensure the correct config.json is saved to disk, instead of the dynamo class.
* Save correct module (not compiled one)
* Add test
* style
* fix docstrings
* Go back to using string comparisons. PyTorch CPU does not have _dynamo.
* Simple test for save_pretrained of compiled models.
* Helper function to test whether module is compiled.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
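A sketch of the helper described above: detect a torch.compile / dynamo-wrapped module by class-name string comparison (since CPU builds may lack `torch._dynamo`) and unwrap it before saving. The function names are illustrative, not the exact diffusers source.

```python
# Sketch: unwrap a compiled module so save_pretrained writes the original class/config.
import torch

def is_compiled_module(module: torch.nn.Module) -> bool:
    return module.__class__.__name__ == "OptimizedModule"

def unwrap_compiled(module: torch.nn.Module) -> torch.nn.Module:
    # torch.compile keeps the original module on `_orig_mod`
    return module._orig_mod if is_compiled_module(module) else module
```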
-
Sayak Paul authored
* add: better warning messages when handling multiple conditioning. * fix: handling of controlnet_conditioning_scale
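An illustrative multi-ControlNet call: when several ControlNets are passed, the conditioning images and `controlnet_conditioning_scale` are given as lists of matching length. Model ids, images, and scales below are placeholders.

```python
# Sketch: two ControlNets with per-net conditioning scales.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

canny_image = load_image("https://example.com/canny.png")  # placeholder
pose_image = load_image("https://example.com/pose.png")    # placeholder

image = pipe(
    "a dancer in a forest",
    image=[canny_image, pose_image],
    controlnet_conditioning_scale=[1.0, 0.8],  # one scale per ControlNet
).images[0]
```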
-
- 27 Mar, 2023 3 commits
-
-
Pedro Cuenca authored
* Helper function to disable custom attention processors.
* Restore code deleted by mistake.
* Format
* Fix modeling_text_unet copy.
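A sketch, assuming the helper added here is `set_default_attn_processor`: it restores the built-in attention processors on a model after custom ones have been set. The model id is illustrative.

```python
# Sketch: drop any custom AttnProcessor instances and go back to the defaults.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
unet.set_default_attn_processor()  # assumed helper name for disabling custom processors
```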
-
Eugene Lyapustin authored
-
Pedro Cuenca authored
* Apply same ruff settings as in transformers (see https://github.com/huggingface/transformers/blob/main/pyproject.toml)
* Apply new style rules
* Style
* style
* remove list, ruff wouldn't auto fix.
Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
-
- 24 Mar, 2023 2 commits
-
-
Bahjat Kawar authored
* comment update * comment update
-
Patrick von Platen authored
* up * fix more 7 * up * finish
-