<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Image-to-image

[[open-in-colab]]

Image-to-image is similar to [text-to-image](conditional_image_generation), but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image.
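
If you're curious what this process looks like in code, here's a rough sketch of the encode, noise, and decode steps using standalone Diffusers components (the VAE, a scheduler, and the VAE image processor). It's only meant to illustrate the idea above; the actual image-to-image pipelines handle all of this for you, and the timestep used here is arbitrary.

```py
import torch
from diffusers import AutoencoderKL, DDPMScheduler
from diffusers.image_processor import VaeImageProcessor
from diffusers.utils import load_image

# load just the VAE and a scheduler from the Stable Diffusion v1.5 checkpoint
vae = AutoencoderKL.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="vae")
scheduler = DDPMScheduler.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="scheduler")
image_processor = VaeImageProcessor()

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
pixels = image_processor.preprocess(init_image)

# 1. encode the initial image into latent space
with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor

# 2. add noise; in the pipeline, how much noise is added is controlled by `strength`
noise = torch.randn_like(latents)
noisy_latents = scheduler.add_noise(latents, noise, torch.tensor([500]))

# 3. the UNet would now take the prompt and `noisy_latents`, predict the added noise,
#    and remove it over several denoising steps (skipped in this sketch)

# 4. decode the latents back into an image
with torch.no_grad():
    decoded = vae.decode(latents / vae.config.scaling_factor).sample
image = image_processor.postprocess(decoded)[0]
```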

With 🤗 Diffusers, this is as easy as 1-2-3:

1. Load a checkpoint into the [`AutoPipelineForImage2Image`] class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()
```

<Tip>

You'll notice throughout the guide that we use [`~DiffusionPipeline.enable_model_cpu_offload`] and [`~DiffusionPipeline.enable_xformers_memory_efficient_attention`] to save memory and increase inference speed. If you're using PyTorch 2.0, you don't need to call [`~DiffusionPipeline.enable_xformers_memory_efficient_attention`] on your pipeline because it already uses PyTorch 2.0's native [scaled-dot product attention](../optimization/torch2.0#scaled-dot-product-attention).

</Tip>

2. Load an image to pass to the pipeline:

```py
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
```

3. Pass a prompt and image to the pipeline to generate an image:

```py
prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

## Popular models

The most popular image-to-image models are [Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5), [Stable Diffusion XL (SDXL)](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), and [Kandinsky 2.2](https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder). The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let's take a quick look at how to use each of these models and compare their results.

### Stable Diffusion v1.5

Stable Diffusion v1.5 is a latent diffusion model initialized from an earlier checkpoint, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you'll need to prepare an initial image to pass to the pipeline. Then you can pass a prompt and the image to the pipeline to generate a new image:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdv1.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

### Stable Diffusion XL (SDXL)

SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model's output. Read the [SDXL](sdxl) guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images.

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, strength=0.5).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

### Kandinsky 2.2

The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images.
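
To make the role of the prior concrete, here's a sketch of running the prior and decoder pipelines separately instead of letting [`AutoPipelineForImage2Image`] wrap them for you (the `strength` value and prompts are illustrative):

```py
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Img2ImgPipeline
from diffusers.utils import load_image, make_image_grid

prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
prior.enable_model_cpu_offload()
decoder = KandinskyV22Img2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
decoder.enable_model_cpu_offload()

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# the prior turns the text prompt into image embeddings
image_embeds, negative_image_embeds = prior(prompt).to_tuple()

# the decoder generates the new image from the embeddings and the initial image
image = decoder(
    image=init_image,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    strength=0.3,
).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```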

The simplest way to use Kandinsky 2.2 is:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-kandinsky.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

## Configure pipeline parameters

There are several important parameters you can configure in the pipeline that'll affect the image generation process and image quality. Let's take a closer look at what these parameters do and how changing them affects the output.

### Strength

`strength` is one of the most important parameters to consider and it'll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words:

- 📈 a higher `strength` value gives the model more "creativity" to generate an image that's different from the initial image; a `strength` value of 1.0 means the initial image is more or less ignored
- 📉 a lower `strength` value means the generated image is more similar to the initial image

The `strength` and `num_inference_steps` parameters are related because `strength` determines the number of noise steps to add. For example, if the `num_inference_steps` is 50 and `strength` is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image.
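
As a quick illustration of that arithmetic (plain Python, not a Diffusers API):

```py
num_inference_steps = 50
strength = 0.8

# the initial image is noised this far into the schedule...
noising_steps = int(num_inference_steps * strength)
# ...and the pipeline then denoises for the same number of steps
print(noising_steps)  # 40
```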

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, strength=0.8).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-strength-0.4.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">strength = 0.4</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-strength-0.6.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">strength = 0.6</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-strength-1.0.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">strength = 1.0</figcaption>
  </div>
</div>

### Guidance scale

The `guidance_scale` parameter is used to control how closely aligned the generated image and text prompt are. A higher `guidance_scale` value means your generated image is more aligned with the prompt, while a lower `guidance_scale` value means your generated image has more space to deviate from the prompt.

You can combine `guidance_scale` with `strength` for even more precise control over how expressive the model is. For example, combine a high `strength + guidance_scale` for maximum creativity or use a combination of low `strength` and low `guidance_scale` to generate an image that resembles the initial image but is not as strictly bound to the prompt.

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-guidance-0.1.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 0.1</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-guidance-3.0.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 5.0</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-guidance-7.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 10.0</figcaption>
  </div>
</div>

### Negative prompt

A negative prompt conditions the model to *not* include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like "poor details" or "blurry" to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image.

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy"

# pass prompt and image to pipeline
image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-negative-1.png"/>
313
    <figcaption class="mt-2 text-center text-sm text-gray-500">negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy"</figcaption>
314
315
316
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-negative-2.png"/>
317
    <figcaption class="mt-2 text-center text-sm text-gray-500">negative_prompt = "jungle"</figcaption>
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
  </div>
</div>

## Chained image-to-image pipelines

There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines.

### Text-to-image-to-image

Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let's chain a Stable Diffusion and a Kandinsky model.

Start by generating an image with the text-to-image pipeline:

```py
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
import torch
from diffusers.utils import make_image_grid

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0]
text2image
```

Now you can pass this generated image to the image-to-image pipeline:

```py
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0]
make_image_grid([text2image, image2image], rows=1, cols=2)
```

### Image-to-image-to-image

You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image.

Start by generating an image:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, output_type="latent").images[0]
```

<Tip>

It is important to specify `output_type="latent"` in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE.

</Tip>

Pass the latent output from this pipeline to the next pipeline to generate an image in a [comic book art style](https://huggingface.co/ogkalu/Comic-Diffusion):

```py
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "ogkalu/Comic-Diffusion", torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# need to include the token "charliebo artstyle" in the prompt to use this checkpoint
image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0]
```

Repeat one more time to generate the final image in a [pixel art style](https://huggingface.co/kohbanye/pixel-art-style):

```py
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kohbanye/pixel-art-style", torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# need to include the token "pixelartstyle" in the prompt to use this checkpoint
image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

### Image-to-upscaler-to-super-resolution

Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of detail in an image.

Start with an image-to-image pipeline:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0]
```

<Tip>

It is important to specify `output_type="latent"` in the pipeline to keep all the outputs in *latent* space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE.

</Tip>

Chain it to an upscaler pipeline to increase the image resolution:

```py
from diffusers import StableDiffusionLatentUpscalePipeline

upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
upscaler.enable_model_cpu_offload()
upscaler.enable_xformers_memory_efficient_attention()

image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0]
```

Finally, chain it to a super-resolution pipeline to further enhance the resolution:

```py
from diffusers import StableDiffusionUpscalePipeline

super_res = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
super_res.enable_model_cpu_offload()
super_res.enable_xformers_memory_efficient_attention()

image_3 = super_res(prompt, image=image_2).images[0]
make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2)
```

## Control image generation

Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the `negative_prompt` to partially control image generation, there are more robust methods like prompt weighting and ControlNets.

### Prompt weighting

Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", you can choose to increase or decrease the embeddings of "astronaut" and "jungle". The [Compel](https://github.com/damian0815/compel) library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the [Prompt weighting](weighted_prompts) guide.

[`AutoPipelineForImage2Image`] has a `prompt_embeds` parameter (and a `negative_prompt_embeds` parameter if you're using a negative prompt) where you can pass the embeddings, which replaces the `prompt` parameter.

```py
from diffusers import AutoPipelineForImage2Image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel
    negative_prompt_embeds=negative_prompt_embeds, # generated from Compel
    image=init_image,
).images[0]
```
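
The embeddings themselves come from Compel. As a rough sketch, assuming the `compel` package is installed and reusing the `pipeline` loaded above (the `++` weighting syntax is just an example; see the [Prompt weighting](weighted_prompts) guide for details):

```py
from compel import Compel

compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

# "++" increases the weight of "jungle" in the prompt
prompt_embeds = compel("Astronaut in a jungle++, cold color palette, muted colors, detailed, 8k")
negative_prompt_embeds = compel("ugly, deformed, disfigured")
```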

### ControlNet

ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it.

For example, let's condition an image with a depth map to keep the spatial information in the image.

```py
from diffusers.utils import load_image, make_image_grid

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)
init_image = init_image.resize((958, 960)) # resize to depth image dimensions
depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png")
make_image_grid([init_image, depth_image], rows=1, cols=2)
```
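
The depth map here is a pre-made one. If you'd rather estimate a depth map from your own initial image, one option is the 🤗 Transformers depth-estimation pipeline; this is only a sketch and assumes the `transformers` package is installed (it downloads a default depth model):

```py
from transformers import pipeline as transformers_pipeline

depth_estimator = transformers_pipeline("depth-estimation")
# the result includes a PIL "depth" image you could use as the conditioning image instead
estimated_depth = depth_estimator(init_image)["depth"]
```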

Load a ControlNet model conditioned on depth maps and the [`AutoPipelineForImage2Image`]:

```py
from diffusers import ControlNetModel, AutoPipelineForImage2Image
import torch

controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()
```

Now generate a new image conditioned on the depth map, initial image, and prompt:

```py
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0]
make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">depth image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-controlnet.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">ControlNet image</figcaption>
  </div>
</div>

Let's apply a new [style](https://huggingface.co/nitrosocke/elden-ring-diffusion) to the image generated from the ControlNet by chaining it with an image-to-image pipeline:

```py
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16,
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt
negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy"

image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0]
make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2)
```

<div class="flex justify-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-elden-ring.png">
</div>

## Optimize

Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0's [scaled-dot product attention](../optimization/torch2.0#scaled-dot-product-attention) or [xFormers](../optimization/xformers) (you can use one or the other, but there's no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU.

```diff
+ pipeline.enable_model_cpu_offload()
+ pipeline.enable_xformers_memory_efficient_attention()
```

With [`torch.compile`](../optimization/torch2.0#torchcompile), you can boost your inference speed even more by wrapping your UNet with it:

```py
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)
```

To learn more, take a look at the [Reduce memory usage](../optimization/memory) and [Torch 2.0](../optimization/torch2.0) guides.