<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Stable Diffusion pipelines

Stable Diffusion is a text-to-image _latent diffusion_ model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/). It is trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) dataset and uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs.

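As a quick way to see what this architecture means in code, the pipeline exposes those components (UNet, text encoder, VAE, scheduler) as attributes after loading. A minimal sketch, assuming the standard v1-4 checkpoint:

```python
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

>>> # the UNet and the frozen CLIP text encoder mentioned above are ordinary modules
>>> sum(p.numel() for p in pipe.unet.parameters())  # roughly 860M parameters
>>> sum(p.numel() for p in pipe.text_encoder.parameters())  # roughly 123M parameters
```
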
Stable Diffusion builds on latent diffusion, which was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. You can learn more about it in the [dedicated latent diffusion pipeline](pipelines/latent_diffusion) that is part of 🤗 Diffusers.

For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, please refer to the official [launch announcement post](https://stability.ai/blog/stable-diffusion-announcement) and [this section of our own blog post](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work).

*Tips*:
- To tweak your prompts on a specific result you liked, you can generate your own latents, as demonstrated in the following notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb)

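The core idea of the notebook linked above is to create the initial noise yourself from a fixed seed and pass it to the pipeline through the `latents` argument, so the starting noise stays constant while you iterate on the prompt. A minimal sketch, assuming the standard v1-4 checkpoint (the prompt is hypothetical):

```python
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

>>> # a fixed seed yields identical starting noise on every run
>>> generator = torch.Generator(device="cpu").manual_seed(0)
>>> latents = torch.randn(
...     (1, pipe.unet.config.in_channels, 64, 64),  # 64 = 512 // 8, the VAE downsampling factor
...     generator=generator,
... )
>>> image = pipe("a photo of an astronaut riding a horse", latents=latents).images[0]
```
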
*Overview*:

| Pipeline | Tasks | Colab | Demo
|---|---|:---:|:---:|
| [StableDiffusionPipeline](./text2img) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) | [🤗 Stable Diffusion](https://huggingface.co/spaces/stabilityai/stable-diffusion)
| [StableDiffusionImg2ImgPipeline](./img2img) | *Image-to-Image Text-Guided Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) | [🤗 Diffuse the Rest](https://huggingface.co/spaces/huggingface/diffuse-the-rest)
| [StableDiffusionInpaintPipeline](./inpaint) | **Experimental** – *Text-Guided Image Inpainting* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb) | Coming soon
| [StableDiffusionDepth2ImgPipeline](./depth2img) | **Experimental** – *Depth-to-Image Text-Guided Generation* | | Coming soon
| [StableDiffusionImageVariationPipeline](./image_variation) | **Experimental** – *Image Variation Generation* | | [🤗 Stable Diffusion Image Variations](https://huggingface.co/spaces/lambdalabs/stable-diffusion-image-variations)
| [StableDiffusionUpscalePipeline](./upscale) | **Experimental** – *Text-Guided Image Super-Resolution* | | Coming soon


## Tips

### How to load and use different schedulers

The Stable Diffusion pipeline uses the [`PNDMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], and [`EulerAncestralDiscreteScheduler`].
To use a different scheduler, you can either swap it in via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the pipeline's `from_pretrained` method. For example, to use the [`EulerDiscreteScheduler`], you can do the following:

```python
>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

>>> # or
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler)
```
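Once the scheduler is swapped, the pipeline is called exactly as before. A minimal sketch (the prompt is hypothetical; Euler-type schedulers often produce good results in fewer steps than the 50-step default):

```python
>>> # fewer inference steps than the default often suffice with the Euler scheduler
>>> image = pipeline("a photo of an astronaut riding a horse on mars", num_inference_steps=30).images[0]
>>> image.save("astronaut.png")
```
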


### How to reuse the same pipeline components for multiple use cases

If you want to cover all possible use cases with a single set of pipeline weights, you can either:
- use the [Stable Diffusion Mega Pipeline](https://github.com/huggingface/diffusers/tree/main/examples/community#stable-diffusion-mega), or
- use the `components` functionality to instantiate all pipeline classes in the most memory-efficient way:

```python
>>> from diffusers import (
...     StableDiffusionPipeline,
...     StableDiffusionImg2ImgPipeline,
...     StableDiffusionInpaintPipeline,
... )

>>> text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components)

>>> # now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline
```
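With the shared components in place, the pipelines can also be chained; for example, an img2img pass can refine a text2img result. A minimal sketch (the prompt and `strength` value are illustrative, and the image argument has been named `init_image` in older `diffusers` releases):

```python
>>> prompt = "a fantasy landscape, trending on artstation"  # hypothetical prompt
>>> base = text2img(prompt).images[0]
>>> # reuse the same weights to refine the generated image
>>> refined = img2img(prompt=prompt, image=base, strength=0.75).images[0]
```
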

## StableDiffusionPipelineOutput
[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput

## StableDiffusionPipeline
[[autodoc]] StableDiffusionPipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention

## StableDiffusionImg2ImgPipeline
[[autodoc]] StableDiffusionImg2ImgPipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention

## StableDiffusionInpaintPipeline
[[autodoc]] StableDiffusionInpaintPipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention

## StableDiffusionDepth2ImgPipeline
[[autodoc]] StableDiffusionDepth2ImgPipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention

## StableDiffusionImageVariationPipeline
[[autodoc]] StableDiffusionImageVariationPipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention

## StableDiffusionUpscalePipeline
[[autodoc]] StableDiffusionUpscalePipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention