<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# DeepFloyd IF

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

## Overview

DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding.
The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules:
- Stage 1: a base model that generates a 64x64 px image based on a text prompt,
- Stage 2: a 64x64 px => 256x256 px super-resolution model, and
- Stage 3: a 256x256 px => 1024x1024 px super-resolution model
Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling.
Stage 3 is [Stability AI's x4 Upscaling model](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler).
The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset.
Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis.

## Usage

Before you can use IF, you need to accept its usage conditions. To do so:
1. Make sure to have a [Hugging Face account](https://huggingface.co/join) and be logged in.
2. Accept the license on the model card of [DeepFloyd/IF-I-XL-v1.0](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0). Accepting the license on the stage I model card will automatically accept it for the other IF models.
3. Make sure to log in locally. Install `huggingface_hub`:
```sh
pip install huggingface_hub --upgrade
```

and run the login function in a Python shell:

```py
from huggingface_hub import login

login()
```

and enter your [Hugging Face Hub access token](https://huggingface.co/docs/hub/security-tokens#what-are-user-access-tokens).
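
If you prefer working from a terminal, you can also log in with the CLI that ships with `huggingface_hub`:

```sh
huggingface-cli login
```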

Next we install `diffusers` and dependencies:

```sh
pip install -q diffusers accelerate transformers
```

The following sections give more detailed examples of how to use IF. Specifically:

- [Text-to-Image Generation](#text-to-image-generation)
- [Image-to-Image Generation](#text-guided-image-to-image-generation)
- [Inpainting](#text-guided-inpainting-generation)
- [Reusing model weights](#converting-between-different-pipelines)
- [Speed optimization](#optimizing-for-speed)
- [Memory optimization](#optimizing-for-memory)

**Available checkpoints**
- *Stage-1*
  - [DeepFloyd/IF-I-XL-v1.0](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0)
  - [DeepFloyd/IF-I-L-v1.0](https://huggingface.co/DeepFloyd/IF-I-L-v1.0)
  - [DeepFloyd/IF-I-M-v1.0](https://huggingface.co/DeepFloyd/IF-I-M-v1.0)

- *Stage-2*
  - [DeepFloyd/IF-II-L-v1.0](https://huggingface.co/DeepFloyd/IF-II-L-v1.0)
  - [DeepFloyd/IF-II-M-v1.0](https://huggingface.co/DeepFloyd/IF-II-M-v1.0)

- *Stage-3*
  - [stabilityai/stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler)
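
The checkpoints within each stage share the same pipeline API; if you are tight on VRAM, the smaller M variants can be dropped in wherever the XL or L checkpoints appear in the examples below. A minimal sketch:

```py
import torch
from diffusers import DiffusionPipeline

# same loading call as the XL checkpoint, just a smaller stage 1 model
stage_1 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-M-v1.0", variant="fp16", torch_dtype=torch.float16
)
```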


**Google Colab**
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/deepfloyd_if_free_tier_google_colab.ipynb)

### Text-to-Image Generation

By default, diffusers makes use of [model cpu offloading](../../optimization/memory#model-offloading) to run the whole IF pipeline with as little as 14 GB of VRAM.

```python
from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil, make_image_grid
import torch

# stage 1
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()

# stage 2
stage_2 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()

# stage 3
safety_modules = {
    "feature_extractor": stage_1.feature_extractor,
    "safety_checker": stage_1.safety_checker,
    "watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
)
stage_3.enable_model_cpu_offload()

prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
generator = torch.manual_seed(1)

# text embeds
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

# stage 1
stage_1_output = stage_1(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt"
).images
#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")

# stage 2
stage_2_output = stage_2(
    image=stage_1_output,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")

# stage 3
stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images
#stage_3_output[0].save("./if_stage_III.png")
make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3)
```

### Text Guided Image-to-Image Generation

The same IF model weights can be used for text-guided image-to-image translation or image variation.
In this case just make sure to load the weights using the [`IFImg2ImgPipeline`] and [`IFImg2ImgSuperResolutionPipeline`] pipelines.

**Note**: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines
without loading them twice by making use of the [`~DiffusionPipeline.components`] attribute as explained [here](#converting-between-different-pipelines).

```python
from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline
from diffusers.utils import pt_to_pil, load_image, make_image_grid
import torch

# download image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
original_image = load_image(url)
original_image = original_image.resize((768, 512))

# stage 1
stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()

# stage 2
stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()

# stage 3
safety_modules = {
    "feature_extractor": stage_1.feature_extractor,
    "safety_checker": stage_1.safety_checker,
    "watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
)
stage_3.enable_model_cpu_offload()

prompt = "A fantasy landscape in style minecraft"
generator = torch.manual_seed(1)

# text embeds
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

# stage 1
stage_1_output = stage_1(
    image=original_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")

# stage 2
stage_2_output = stage_2(
    image=stage_1_output,
    original_image=original_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")

# stage 3
stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images
#stage_3_output[0].save("./if_stage_III.png")
make_image_grid([original_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=4)
```

### Text Guided Inpainting Generation

The same IF model weights can also be used for text-guided inpainting.
In this case just make sure to load the weights using the [`IFInpaintingPipeline`] and [`IFInpaintingSuperResolutionPipeline`] pipelines.

**Note**: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines
without loading them twice by making use of the [`~DiffusionPipeline.components`] attribute as explained [here](#converting-between-different-pipelines).

```python
from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline
from diffusers.utils import pt_to_pil, load_image, make_image_grid
import torch

# download image
url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png"
original_image = load_image(url)

# download mask
url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png"
mask_image = load_image(url)

# stage 1
stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()

# stage 2
stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()

# stage 3
safety_modules = {
    "feature_extractor": stage_1.feature_extractor,
    "safety_checker": stage_1.safety_checker,
    "watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
)
stage_3.enable_model_cpu_offload()

prompt = "blue sunglasses"
generator = torch.manual_seed(1)

# text embeds
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

# stage 1
stage_1_output = stage_1(
    image=original_image,
    mask_image=mask_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")

# stage 2
stage_2_output = stage_2(
    image=stage_1_output,
    original_image=original_image,
    mask_image=mask_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")

# stage 3
stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images
#stage_3_output[0].save("./if_stage_III.png")
make_image_grid([original_image, mask_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=5)
```

### Converting between different pipelines

In addition to being loaded with `from_pretrained`, pipelines can also be loaded directly from each other.

```python
from diffusers import IFPipeline, IFSuperResolutionPipeline

pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0")
pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0")


from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline

pipe_1 = IFImg2ImgPipeline(**pipe_1.components)
pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components)


from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline

pipe_1 = IFInpaintingPipeline(**pipe_1.components)
pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components)
```
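
The [`~DiffusionPipeline.components`] attribute is a dictionary of all the modules registered on the pipeline (UNet, scheduler, text encoder, safety modules, and so on), so the converted pipelines reuse the same module objects instead of loading new copies. You can inspect it to see what gets passed along; the exact keys depend on the pipeline class:

```py
# the converted pipelines share these modules with the original ones
print(pipe_1.components.keys())
```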

### Optimizing for speed

The simplest optimization to run IF faster is to move all model components to the GPU.

```py
pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")
```

You can also run the diffusion process for fewer timesteps.

This can either be done with the `num_inference_steps` argument:

```py
pipe("<prompt>", num_inference_steps=30)
```

Or with the `timesteps` argument:

```py
from diffusers.pipelines.deepfloyd_if import fast27_timesteps

pipe("<prompt>", timesteps=fast27_timesteps)
```

When doing image variation or inpainting, you can also decrease the number of timesteps
with the `strength` argument. The `strength` argument controls the amount of noise added to the input image, which also determines how many steps to run in the denoising process.
A smaller number will vary the image less but run faster.

```py
pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(image=image, prompt="<prompt>", strength=0.3).images
```

You can also use [`torch.compile`](../../optimization/torch2.0). Note that we have not exhaustively tested `torch.compile`
with IF and it might not give expected results.

```py
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")

pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True)
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

### Optimizing for memory

When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs.

Either the model-based CPU offloading,

```py
pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
```

or the more aggressive layer-based CPU offloading.

```py
pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
```

Additionally, T5 can be loaded in 8-bit precision:

```py
from transformers import T5EncoderModel

text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit"
)

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,  # pass the previously instantiated 8bit text encoder
    unet=None,
    device_map="auto",
)

prompt_embeds, negative_embeds = pipe.encode_prompt("<prompt>")
```
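
Note that 8-bit loading in `transformers` relies on the `bitsandbytes` library, so install it first if it isn't already available:

```sh
pip install bitsandbytes
```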

For CPU RAM-constrained machines like the Google Colab free tier, where we can't load all model components to the CPU at once, we can manually load the pipeline with
only the text encoder or only the UNet when the respective model component is needed.

```py
from diffusers import IFPipeline, IFSuperResolutionPipeline
import torch
import gc
from transformers import T5EncoderModel
from diffusers.utils import pt_to_pil, make_image_grid

text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit"
)

# text to image
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,  # pass the previously instantiated 8bit text encoder
    unet=None,
    device_map="auto",
)

prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)

# Remove the pipeline so we can re-load the pipeline with the unet
del text_encoder
del pipe
gc.collect()
torch.cuda.empty_cache()

pipe = IFPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto"
)

generator = torch.Generator().manual_seed(0)
stage_1_output = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",
    generator=generator,
).images

#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")

# Remove the pipeline so we can load the super-resolution pipeline
del pipe
gc.collect()
torch.cuda.empty_cache()

# First super resolution

pipe = IFSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto"
)

generator = torch.Generator().manual_seed(0)
stage_2_output = pipe(
    image=stage_1_output,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",
    generator=generator,
).images

#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")
make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, cols=2)
```

## Available Pipelines:

| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_if.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py) | *Text-to-Image Generation* | - |
| [pipeline_if_superresolution.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py) | *Text-to-Image Generation* | - |
| [pipeline_if_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img.py) | *Image-to-Image Generation* | - |
| [pipeline_if_img2img_superresolution.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img_superresolution.py) | *Image-to-Image Generation* | - |
| [pipeline_if_inpainting.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py) | *Image-to-Image Generation* | - |
| [pipeline_if_inpainting_superresolution.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py) | *Image-to-Image Generation* | - |

## IFPipeline
[[autodoc]] IFPipeline
	- all
	- __call__

## IFSuperResolutionPipeline
[[autodoc]] IFSuperResolutionPipeline
	- all
	- __call__

## IFImg2ImgPipeline
[[autodoc]] IFImg2ImgPipeline
	- all
	- __call__

## IFImg2ImgSuperResolutionPipeline
[[autodoc]] IFImg2ImgSuperResolutionPipeline
	- all
	- __call__

## IFInpaintingPipeline
[[autodoc]] IFInpaintingPipeline
	- all
	- __call__

## IFInpaintingSuperResolutionPipeline
[[autodoc]] IFInpaintingSuperResolutionPipeline
	- all
	- __call__