Unverified Commit b3e5cd6b authored by Patrick von Platen, committed by GitHub

[Kandinsky] Add combined pipelines / Fix cpu model offload / Fix inpainting (#4207)



* Add combined pipeline

* Download readme

* Upload

* up

* up

* fix final

* Add enable model cpu offload kandinsky

* finish

* finish

* Fix

* fix more

* make style

* fix kandinsky mask

* fix inpainting test

* add callbacks

* add tests

* fix tests

* Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com>

* docs

* docs

* correct docs

* fix tests

* add warning

* correct docs

---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
parent b37dc3b3
@@ -206,6 +206,8 @@
title: InstructPix2Pix
- local: api/pipelines/kandinsky
title: Kandinsky
- local: api/pipelines/kandinsky_v22
title: Kandinsky 2.2
- local: api/pipelines/latent_diffusion
title: Latent Diffusion
- local: api/pipelines/panorama
...
@@ -212,9 +212,9 @@ init_image = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
mask = np.ones((768, 768), dtype=np.float32)
mask = np.zeros((768, 768), dtype=np.float32)
# Let's mask out an area above the cat's head
mask[:250, 250:-250] = 0
mask[:250, 250:-250] = 1
out = pipe(
prompt,
@@ -276,208 +276,6 @@ image.save("starry_cat.png")
```
![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/starry_cat.png)
### Text-to-Image Generation with ControlNet Conditioning
In the following, we give a simple example of how to use [`KandinskyV22ControlnetPipeline`] to add control to text-to-image generation with a depth image.
First, let's take an image and extract its depth map.
```python
from diffusers.utils import load_image
img = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png"
).resize((768, 768))
```
![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png)
We can use the `depth-estimation` pipeline from transformers to process the image and retrieve its depth map.
```python
import torch
import numpy as np
from transformers import pipeline
from diffusers.utils import load_image
def make_hint(image, depth_estimator):
image = depth_estimator(image)["depth"]
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
detected_map = torch.from_numpy(image).float() / 255.0
hint = detected_map.permute(2, 0, 1)
return hint
depth_estimator = pipeline("depth-estimation")
hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
```
Now, we load the prior pipeline and the text-to-image controlnet pipeline.
```python
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior = pipe_prior.to("cuda")
pipe = KandinskyV22ControlnetPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
```
We pass the prompt and negative prompt through the prior to generate image embeddings:
```python
prompt = "A robot, 4k photo"
negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"
generator = torch.Generator(device="cuda").manual_seed(43)
image_emb, zero_image_emb = pipe_prior(
prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator
).to_tuple()
```
Now we can pass the image embeddings and the depth image we extracted to the controlnet pipeline. With Kandinsky 2.2, only prior pipelines accept `prompt` input. You do not need to pass the prompt to the controlnet pipeline.
```python
images = pipe(
image_embeds=image_emb,
negative_image_embeds=zero_image_emb,
hint=hint,
num_inference_steps=50,
generator=generator,
height=768,
width=768,
).images
images[0].save("robot_cat.png")
```
The output image looks as follows:
![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/robot_cat_text2img.png)
### Image-to-Image Generation with ControlNet Conditioning
Kandinsky 2.2 also includes a [`KandinskyV22ControlnetImg2ImgPipeline`] that will allow you to add control to the image generation process with both the image and its depth map. This pipeline works really well with [`KandinskyV22PriorEmb2EmbPipeline`], which generates image embeddings based on both a text prompt and an image.
For our robot cat example, we will pass the prompt and cat image together to the prior pipeline to generate an image embedding. We will then use that image embedding and the depth map of the cat to further control the image generation process.
We can use the same cat image and its depth map from the last example.
```python
import torch
import numpy as np
from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image
from transformers import pipeline
img = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinskyv22/cat.png"
).resize((768, 768))
def make_hint(image, depth_estimator):
image = depth_estimator(image)["depth"]
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
detected_map = torch.from_numpy(image).float() / 255.0
hint = detected_map.permute(2, 0, 1)
return hint
depth_estimator = pipeline("depth-estimation")
hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior = pipe_prior.to("cuda")
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
prompt = "A robot, 4k photo"
negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"
generator = torch.Generator(device="cuda").manual_seed(43)
# run prior pipeline
img_emb = pipe_prior(prompt=prompt, image=img, strength=0.85, generator=generator)
negative_emb = pipe_prior(prompt=negative_prior_prompt, image=img, strength=1, generator=generator)
# run controlnet img2img pipeline
images = pipe(
image=img,
strength=0.5,
image_embeds=img_emb.image_embeds,
negative_image_embeds=negative_emb.image_embeds,
hint=hint,
num_inference_steps=50,
generator=generator,
height=768,
width=768,
).images
images[0].save("robot_cat.png")
```
Here is the output. Compared with the output from our text-to-image controlnet example, it keeps far more of the cat's facial details from the original image and works them into the robot style we asked for.
![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/robot_cat.png)
## Kandinsky 2.2
The Kandinsky 2.2 release includes robust new text-to-image models that support text-to-image generation, image-to-image generation, image interpolation, and text-guided image inpainting. The general workflow for these tasks is the same as in Kandinsky 2.1: first use a prior pipeline to generate image embeddings from your text prompt, then use one of the image decoding pipelines to generate the output image. The only difference is that in Kandinsky 2.2 the decoding pipelines no longer accept a `prompt` input; image generation is conditioned only on `image_embeds` and `negative_image_embeds`.
Let's look at an example of how to perform text-to-image generation using Kandinsky 2.2.
First, let's create the prior pipeline and text-to-image pipeline with Kandinsky 2.2 checkpoints.
```python
from diffusers import DiffusionPipeline
import torch
pipe_prior = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16)
pipe_prior.to("cuda")
t2i_pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
t2i_pipe.to("cuda")
```
You can then use `pipe_prior` to generate image embeddings.
```python
prompt = "portrait of a women, blue eyes, cinematic"
negative_prompt = "low quality, bad quality"
image_embeds, negative_image_embeds = pipe_prior(prompt, guidance_scale=1.0).to_tuple()
```
Now you can pass these embeddings to the text-to-image pipeline. When using Kandinsky 2.2, you don't need to pass the `prompt` (but you do with the previous version, Kandinsky 2.1).
```python
image = t2i_pipe(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[
0
]
image.save("portrait.png")
```
![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/%20blue%20eyes.png)
We used the text-to-image pipeline as an example, but the same process applies to all decoding pipelines in Kandinsky 2.2. For more information, please refer to our API section for each pipeline.
## Optimization
Running Kandinsky in inference requires running both a first prior pipeline: [`KandinskyPriorPipeline`]
@@ -530,85 +328,63 @@ t2i_pipe.unet = torch.compile(t2i_pipe.unet, mode="reduce-overhead", fullgraph=True)
After compilation you should see a very fast inference time. For more information,
feel free to have a look at [Our PyTorch 2.0 benchmark](https://huggingface.co/docs/diffusers/main/en/optimization/torch2.0).
<Tip>
To generate images directly from a single pipeline, you can use [`KandinskyCombinedPipeline`], [`KandinskyImg2ImgCombinedPipeline`], [`KandinskyInpaintCombinedPipeline`].
These combined pipelines wrap the [`KandinskyPriorPipeline`] and [`KandinskyPipeline`], [`KandinskyImg2ImgPipeline`], [`KandinskyInpaintPipeline`] respectively into a single
pipeline for a simpler user experience.
</Tip>
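For example, here is a minimal text-to-image sketch using the combined pipeline. We load it through [`AutoPipelineForText2Image`], which this PR maps to [`KandinskyCombinedPipeline`] for `"kandinsky"` checkpoints; the checkpoint name follows the docs above, and the exact call pattern should be treated as an assumption rather than canonical usage.
```python
from diffusers import AutoPipelineForText2Image
import torch

# Resolves to KandinskyCombinedPipeline, which wraps the prior and decoder internally.
pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# The combined pipeline accepts the `prompt` directly; no separate prior call is needed.
image = pipe(prompt="red cat, 4k photo", num_inference_steps=25).images[0]
image.save("red_cat.png")
```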
## Available Pipelines:
| Pipeline | Tasks |
|---|---|
| [pipeline_kandinsky2_2.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2.py) | *Text-to-Image Generation* |
| [pipeline_kandinsky.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky.py) | *Text-to-Image Generation* |
| [pipeline_kandinsky2_2_inpaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpaint.py) | *Image-Guided Image Generation* |
| [pipeline_kandinsky_combined.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py) | *End-to-end Text-to-Image, Image-to-Image, Inpainting Generation* |
| [pipeline_kandinsky_inpaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_inpaint.py) | *Image-Guided Image Generation* |
| [pipeline_kandinsky2_2_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py) | *Image-Guided Image Generation* |
| [pipeline_kandinsky_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py) | *Image-Guided Image Generation* |
| [pipeline_kandinsky2_2_controlnet.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py) | *Image-Guided Image Generation* |
| [pipeline_kandinsky2_2_controlnet_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py) | *Image-Guided Image Generation* |
### KandinskyV22Pipeline
[[autodoc]] KandinskyV22Pipeline
- all
- __call__
### KandinskyV22ControlnetPipeline
[[autodoc]] KandinskyV22ControlnetPipeline
- all
- __call__
### KandinskyV22ControlnetImg2ImgPipeline
[[autodoc]] KandinskyV22ControlnetImg2ImgPipeline
- all
- __call__
### KandinskyV22Img2ImgPipeline
[[autodoc]] KandinskyV22Img2ImgPipeline
- all
- __call__
### KandinskyV22InpaintPipeline
[[autodoc]] KandinskyV22InpaintPipeline
- all
- __call__
### KandinskyV22PriorPipeline
[[autodoc]] KandinskyV22PriorPipeline
- all
- __call__
- interpolate
### KandinskyV22PriorEmb2EmbPipeline
[[autodoc]] KandinskyV22PriorEmb2EmbPipeline
- all
- __call__
- interpolate
### KandinskyPriorPipeline
[[autodoc]] KandinskyPriorPipeline
- all
- __call__
- interpolate
### KandinskyPipeline
[[autodoc]] KandinskyPipeline
- all
- __call__
### KandinskyImg2ImgPipeline
[[autodoc]] KandinskyImg2ImgPipeline
- all
- __call__
### KandinskyInpaintPipeline
[[autodoc]] KandinskyInpaintPipeline
- all
- __call__
### KandinskyCombinedPipeline
[[autodoc]] KandinskyCombinedPipeline
- all
- __call__
### KandinskyImg2ImgCombinedPipeline
[[autodoc]] KandinskyImg2ImgCombinedPipeline
- all
- __call__
### KandinskyInpaintCombinedPipeline
[[autodoc]] KandinskyInpaintCombinedPipeline
- all
- __call__
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Kandinsky 2.2
The Kandinsky 2.2 release includes robust new text-to-image models that support text-to-image generation, image-to-image generation, image interpolation, and text-guided image inpainting. The general workflow for these tasks is the same as in Kandinsky 2.1: first use a prior pipeline to generate image embeddings from your text prompt, then use one of the image decoding pipelines to generate the output image. The only difference is that in Kandinsky 2.2 the decoding pipelines no longer accept a `prompt` input; image generation is conditioned only on `image_embeds` and `negative_image_embeds`.
Let's look at an example of how to perform text-to-image generation using Kandinsky 2.2.
First, let's create the prior pipeline and text-to-image pipeline with Kandinsky 2.2 checkpoints.
```python
from diffusers import DiffusionPipeline
import torch
pipe_prior = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16)
pipe_prior.to("cuda")
t2i_pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
t2i_pipe.to("cuda")
```
You can then use `pipe_prior` to generate image embeddings.
```python
prompt = "portrait of a women, blue eyes, cinematic"
negative_prompt = "low quality, bad quality"
image_embeds, negative_image_embeds = pipe_prior(prompt, guidance_scale=1.0).to_tuple()
```
Now you can pass these embeddings to the text-to-image pipeline. When using Kandinsky 2.2, you don't need to pass the `prompt` (but you do with the previous version, Kandinsky 2.1).
```python
image = t2i_pipe(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[
0
]
image.save("portrait.png")
```
![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/%20blue%20eyes.png)
We used the text-to-image pipeline as an example, but the same process applies to all decoding pipelines in Kandinsky 2.2. For more information, please refer to our API section for each pipeline.
### Text-to-Image Generation with ControlNet Conditioning
In the following, we give a simple example of how to use [`KandinskyV22ControlnetPipeline`] to add control to text-to-image generation with a depth image.
First, let's take an image and extract its depth map.
```python
from diffusers.utils import load_image
img = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png"
).resize((768, 768))
```
![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png)
We can use the `depth-estimation` pipeline from transformers to process the image and retrieve its depth map.
```python
import torch
import numpy as np
from transformers import pipeline
from diffusers.utils import load_image
def make_hint(image, depth_estimator):
image = depth_estimator(image)["depth"]
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
detected_map = torch.from_numpy(image).float() / 255.0
hint = detected_map.permute(2, 0, 1)
return hint
depth_estimator = pipeline("depth-estimation")
hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
```
Now, we load the prior pipeline and the text-to-image controlnet pipeline.
```python
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior = pipe_prior.to("cuda")
pipe = KandinskyV22ControlnetPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
```
We pass the prompt and negative prompt through the prior to generate image embeddings:
```python
prompt = "A robot, 4k photo"
negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"
generator = torch.Generator(device="cuda").manual_seed(43)
image_emb, zero_image_emb = pipe_prior(
prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator
).to_tuple()
```
Now we can pass the image embeddings and the depth image we extracted to the controlnet pipeline. With Kandinsky 2.2, only prior pipelines accept `prompt` input. You do not need to pass the prompt to the controlnet pipeline.
```python
images = pipe(
image_embeds=image_emb,
negative_image_embeds=zero_image_emb,
hint=hint,
num_inference_steps=50,
generator=generator,
height=768,
width=768,
).images
images[0].save("robot_cat.png")
```
The output image looks as follows:
![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/robot_cat_text2img.png)
### Image-to-Image Generation with ControlNet Conditioning
Kandinsky 2.2 also includes a [`KandinskyV22ControlnetImg2ImgPipeline`] that will allow you to add control to the image generation process with both the image and its depth map. This pipeline works really well with [`KandinskyV22PriorEmb2EmbPipeline`], which generates image embeddings based on both a text prompt and an image.
For our robot cat example, we will pass the prompt and cat image together to the prior pipeline to generate an image embedding. We will then use that image embedding and the depth map of the cat to further control the image generation process.
We can use the same cat image and its depth map from the last example.
```python
import torch
import numpy as np
from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image
from transformers import pipeline
img = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinskyv22/cat.png"
).resize((768, 768))
def make_hint(image, depth_estimator):
image = depth_estimator(image)["depth"]
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
detected_map = torch.from_numpy(image).float() / 255.0
hint = detected_map.permute(2, 0, 1)
return hint
depth_estimator = pipeline("depth-estimation")
hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior = pipe_prior.to("cuda")
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
prompt = "A robot, 4k photo"
negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"
generator = torch.Generator(device="cuda").manual_seed(43)
# run prior pipeline
img_emb = pipe_prior(prompt=prompt, image=img, strength=0.85, generator=generator)
negative_emb = pipe_prior(prompt=negative_prior_prompt, image=img, strength=1, generator=generator)
# run controlnet img2img pipeline
images = pipe(
image=img,
strength=0.5,
image_embeds=img_emb.image_embeds,
negative_image_embeds=negative_emb.image_embeds,
hint=hint,
num_inference_steps=50,
generator=generator,
height=768,
width=768,
).images
images[0].save("robot_cat.png")
```
Here is the output. Compared with the output from our text-to-image controlnet example, it keeps far more of the cat's facial details from the original image and works them into the robot style we asked for.
![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/robot_cat.png)
## Optimization
Running Kandinsky 2.2 in inference requires running both a first prior pipeline: [`KandinskyV22PriorPipeline`]
and a second image decoding pipeline which is one of [`KandinskyV22Pipeline`], [`KandinskyV22Img2ImgPipeline`], or [`KandinskyV22InpaintPipeline`].
The bulk of the computation time is always spent in the second image decoding pipeline, so that is where
optimization efforts should focus.
When running with PyTorch < 2.0, we strongly recommend making use of [`xformers`](https://github.com/facebookresearch/xformers)
to speed up inference. This can be done by simply running:
```py
from diffusers import DiffusionPipeline
import torch
t2i_pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
t2i_pipe.enable_xformers_memory_efficient_attention()
```
When running on PyTorch >= 2.0, PyTorch's SDPA attention will automatically be used. For more information on
PyTorch's SDPA, feel free to have a look at [this blog post](https://pytorch.org/blog/accelerated-diffusers-pt-20/).
To have explicit control, you can also manually set the pipeline to use PyTorch's 2.0 efficient attention:
```py
from diffusers.models.attention_processor import AttnAddedKVProcessor2_0
t2i_pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0())
```
The slowest and most memory-intensive attention processor is the default `AttnAddedKVProcessor` processor.
We do **not** recommend using it except for testing purposes or cases where highly deterministic behaviour is desired.
You can set it with:
```py
from diffusers.models.attention_processor import AttnAddedKVProcessor
t2i_pipe.unet.set_attn_processor(AttnAddedKVProcessor())
```
With PyTorch >= 2.0, you can also use Kandinsky with `torch.compile`, which, depending
on your hardware, can significantly speed up your inference time once the model is compiled.
To use Kandinsky with `torch.compile`, you can do:
```py
t2i_pipe.unet.to(memory_format=torch.channels_last)
t2i_pipe.unet = torch.compile(t2i_pipe.unet, mode="reduce-overhead", fullgraph=True)
```
After compilation you should see a very fast inference time. For more information,
feel free to have a look at [Our PyTorch 2.0 benchmark](https://huggingface.co/docs/diffusers/main/en/optimization/torch2.0).
<Tip>
To generate images directly from a single pipeline, you can use [`KandinskyV22CombinedPipeline`], [`KandinskyV22Img2ImgCombinedPipeline`], [`KandinskyV22InpaintCombinedPipeline`].
These combined pipelines wrap the [`KandinskyV22PriorPipeline`] and [`KandinskyV22Pipeline`], [`KandinskyV22Img2ImgPipeline`], [`KandinskyV22InpaintPipeline`] respectively into a single
pipeline for a simpler user experience.
</Tip>
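Similarly, here is a minimal sketch for Kandinsky 2.2. We assume the combined pipeline is loaded from the decoder checkpoint, as the auto-pipeline changes in this PR suggest (`_get_connected_pipeline` notes that connected pipelines are loaded from decoder repos such as `kandinsky-community/kandinsky-2-2-decoder`):
```python
from diffusers import AutoPipelineForText2Image
import torch

# Resolves to KandinskyV22CombinedPipeline, wrapping the prior and decoder in one object.
pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

image = pipe(prompt="portrait of a woman, blue eyes, cinematic", num_inference_steps=25).images[0]
image.save("portrait.png")
```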
## Available Pipelines:
| Pipeline | Tasks |
|---|---|
| [pipeline_kandinsky2_2.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2.py) | *Text-to-Image Generation* |
| [pipeline_kandinsky2_2_combined.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py) | *End-to-end Text-to-Image, Image-to-Image, Inpainting Generation* |
| [pipeline_kandinsky2_2_inpaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpaint.py) | *Image-Guided Image Generation* |
| [pipeline_kandinsky2_2_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py) | *Image-Guided Image Generation* |
| [pipeline_kandinsky2_2_controlnet.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py) | *Image-Guided Image Generation* |
| [pipeline_kandinsky2_2_controlnet_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py) | *Image-Guided Image Generation* |
### KandinskyV22Pipeline
[[autodoc]] KandinskyV22Pipeline
- all
- __call__
### KandinskyV22ControlnetPipeline
[[autodoc]] KandinskyV22ControlnetPipeline
- all
- __call__
### KandinskyV22ControlnetImg2ImgPipeline
[[autodoc]] KandinskyV22ControlnetImg2ImgPipeline
- all
- __call__
### KandinskyV22Img2ImgPipeline
[[autodoc]] KandinskyV22Img2ImgPipeline
- all
- __call__
### KandinskyV22InpaintPipeline
[[autodoc]] KandinskyV22InpaintPipeline
- all
- __call__
### KandinskyV22PriorPipeline
[[autodoc]] KandinskyV22PriorPipeline
- all
- __call__
- interpolate
### KandinskyV22PriorEmb2EmbPipeline
[[autodoc]] KandinskyV22PriorEmb2EmbPipeline
- all
- __call__
- interpolate
### KandinskyV22CombinedPipeline
[[autodoc]] KandinskyV22CombinedPipeline
- all
- __call__
### KandinskyV22Img2ImgCombinedPipeline
[[autodoc]] KandinskyV22Img2ImgCombinedPipeline
- all
- __call__
### KandinskyV22InpaintCombinedPipeline
[[autodoc]] KandinskyV22InpaintCombinedPipeline
- all
- __call__
@@ -141,13 +141,19 @@ else:
IFPipeline,
IFSuperResolutionPipeline,
ImageTextPipelineOutput,
KandinskyCombinedPipeline,
KandinskyImg2ImgCombinedPipeline,
KandinskyImg2ImgPipeline,
KandinskyInpaintCombinedPipeline,
KandinskyInpaintPipeline,
KandinskyPipeline,
KandinskyPriorPipeline,
KandinskyV22CombinedPipeline,
KandinskyV22ControlnetImg2ImgPipeline,
KandinskyV22ControlnetPipeline,
KandinskyV22Img2ImgCombinedPipeline,
KandinskyV22Img2ImgPipeline,
KandinskyV22InpaintCombinedPipeline,
KandinskyV22InpaintPipeline,
KandinskyV22Pipeline,
KandinskyV22PriorEmb2EmbPipeline,
...
@@ -61,15 +61,21 @@ else:
IFSuperResolutionPipeline,
)
from .kandinsky import (
KandinskyCombinedPipeline,
KandinskyImg2ImgCombinedPipeline,
KandinskyImg2ImgPipeline,
KandinskyInpaintCombinedPipeline,
KandinskyInpaintPipeline,
KandinskyPipeline,
KandinskyPriorPipeline,
)
from .kandinsky2_2 import (
KandinskyV22CombinedPipeline,
KandinskyV22ControlnetImg2ImgPipeline,
KandinskyV22ControlnetPipeline,
KandinskyV22Img2ImgCombinedPipeline,
KandinskyV22Img2ImgPipeline,
KandinskyV22InpaintCombinedPipeline,
KandinskyV22InpaintPipeline,
KandinskyV22Pipeline,
KandinskyV22PriorEmb2EmbPipeline,
...
@@ -24,8 +24,22 @@ from .controlnet import (
StableDiffusionXLControlNetPipeline,
)
from .deepfloyd_if import IFImg2ImgPipeline, IFInpaintingPipeline, IFPipeline
from .kandinsky import KandinskyImg2ImgPipeline, KandinskyInpaintPipeline, KandinskyPipeline
from .kandinsky2_2 import KandinskyV22Img2ImgPipeline, KandinskyV22InpaintPipeline, KandinskyV22Pipeline
from .kandinsky import (
KandinskyCombinedPipeline,
KandinskyImg2ImgCombinedPipeline,
KandinskyImg2ImgPipeline,
KandinskyInpaintCombinedPipeline,
KandinskyInpaintPipeline,
KandinskyPipeline,
)
from .kandinsky2_2 import (
KandinskyV22CombinedPipeline,
KandinskyV22Img2ImgCombinedPipeline,
KandinskyV22Img2ImgPipeline,
KandinskyV22InpaintCombinedPipeline,
KandinskyV22InpaintPipeline,
KandinskyV22Pipeline,
)
from .stable_diffusion import (
StableDiffusionImg2ImgPipeline,
StableDiffusionInpaintPipeline,
@@ -43,8 +57,8 @@ AUTO_TEXT2IMAGE_PIPELINES_MAPPING = OrderedDict(
("stable-diffusion", StableDiffusionPipeline),
("stable-diffusion-xl", StableDiffusionXLPipeline),
("if", IFPipeline),
("kandinsky", KandinskyPipeline),
("kandinsky", KandinskyCombinedPipeline),
("kandinsky22", KandinskyV22Pipeline),
("kandinsky22", KandinskyV22CombinedPipeline),
("stable-diffusion-controlnet", StableDiffusionControlNetPipeline),
("stable-diffusion-xl-controlnet", StableDiffusionXLControlNetPipeline),
]
@@ -55,8 +69,8 @@ AUTO_IMAGE2IMAGE_PIPELINES_MAPPING = OrderedDict(
("stable-diffusion", StableDiffusionImg2ImgPipeline),
("stable-diffusion-xl", StableDiffusionXLImg2ImgPipeline),
("if", IFImg2ImgPipeline),
("kandinsky", KandinskyImg2ImgPipeline),
("kandinsky", KandinskyImg2ImgCombinedPipeline),
("kandinsky22", KandinskyV22Img2ImgPipeline),
("kandinsky22", KandinskyV22Img2ImgCombinedPipeline),
("stable-diffusion-controlnet", StableDiffusionControlNetImg2ImgPipeline),
]
)
@@ -66,9 +80,28 @@ AUTO_INPAINT_PIPELINES_MAPPING = OrderedDict(
("stable-diffusion", StableDiffusionInpaintPipeline),
("stable-diffusion-xl", StableDiffusionXLInpaintPipeline),
("if", IFInpaintingPipeline),
("kandinsky", KandinskyInpaintCombinedPipeline),
("kandinsky22", KandinskyV22InpaintCombinedPipeline),
("stable-diffusion-controlnet", StableDiffusionControlNetInpaintPipeline),
]
)
_AUTO_TEXT2IMAGE_DECODER_PIPELINES_MAPPING = OrderedDict(
[
("kandinsky", KandinskyPipeline),
("kandinsky22", KandinskyV22Pipeline),
]
)
_AUTO_IMAGE2IMAGE_DECODER_PIPELINES_MAPPING = OrderedDict(
[
("kandinsky", KandinskyImg2ImgPipeline),
("kandinsky22", KandinskyV22Img2ImgPipeline),
]
)
_AUTO_INPAINT_DECODER_PIPELINES_MAPPING = OrderedDict(
[
("kandinsky", KandinskyInpaintPipeline), ("kandinsky", KandinskyInpaintPipeline),
("kandinsky22", KandinskyV22InpaintPipeline), ("kandinsky22", KandinskyV22InpaintPipeline),
("stable-diffusion-controlnet", StableDiffusionControlNetInpaintPipeline),
] ]
) )
@@ -76,10 +109,27 @@ SUPPORTED_TASKS_MAPPINGS = [
AUTO_TEXT2IMAGE_PIPELINES_MAPPING,
AUTO_IMAGE2IMAGE_PIPELINES_MAPPING,
AUTO_INPAINT_PIPELINES_MAPPING,
_AUTO_TEXT2IMAGE_DECODER_PIPELINES_MAPPING,
_AUTO_IMAGE2IMAGE_DECODER_PIPELINES_MAPPING,
_AUTO_INPAINT_DECODER_PIPELINES_MAPPING,
]
def _get_task_class(mapping, pipeline_class_name):
def _get_connected_pipeline(pipeline_cls):
# for now connected pipelines can only be loaded from decoder pipelines, such as kandinsky-community/kandinsky-2-2-decoder
if pipeline_cls in _AUTO_TEXT2IMAGE_DECODER_PIPELINES_MAPPING.values():
return _get_task_class(
AUTO_TEXT2IMAGE_PIPELINES_MAPPING, pipeline_cls.__name__, throw_error_if_not_exist=False
)
if pipeline_cls in _AUTO_IMAGE2IMAGE_DECODER_PIPELINES_MAPPING.values():
return _get_task_class(
AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, pipeline_cls.__name__, throw_error_if_not_exist=False
)
if pipeline_cls in _AUTO_INPAINT_DECODER_PIPELINES_MAPPING.values():
return _get_task_class(AUTO_INPAINT_PIPELINES_MAPPING, pipeline_cls.__name__, throw_error_if_not_exist=False)
def _get_task_class(mapping, pipeline_class_name, throw_error_if_not_exist: bool = True):
def get_model(pipeline_class_name):
for task_mapping in SUPPORTED_TASKS_MAPPINGS:
for model_name, pipeline in task_mapping.items():
@@ -92,6 +142,8 @@ def _get_task_class(mapping, pipeline_class_name):
task_class = mapping.get(model_name, None)
if task_class is not None:
return task_class
if throw_error_if_not_exist:
raise ValueError(f"AutoPipeline can't find a pipeline linked to {pipeline_class_name} for {model_name}") raise ValueError(f"AutoPipeline can't find a pipeline linked to {pipeline_class_name} for {model_name}")
@@ -336,7 +388,7 @@ class AutoPipelineForText2Image(ConfigMixin):
if len(missing_modules) > 0:
raise ValueError(
f"Pipeline {text_2_image_cls} expected {expected_modules}, but only {set(passed_class_obj.keys()) + set(original_class_obj.keys())} were passed"
f"Pipeline {text_2_image_cls} expected {expected_modules}, but only {set(list(passed_class_obj.keys()) + list(original_class_obj.keys()))} were passed"
)
model = text_2_image_cls(**text_2_image_kwargs)
@@ -581,7 +633,7 @@ class AutoPipelineForImage2Image(ConfigMixin):
if len(missing_modules) > 0:
raise ValueError(
f"Pipeline {image_2_image_cls} expected {expected_modules}, but only {set(passed_class_obj.keys()) + set(original_class_obj.keys())} were passed"
f"Pipeline {image_2_image_cls} expected {expected_modules}, but only {set(list(passed_class_obj.keys()) + list(original_class_obj.keys()))} were passed"
)
model = image_2_image_cls(**image_2_image_kwargs)
@@ -824,7 +876,7 @@ class AutoPipelineForInpainting(ConfigMixin):
if len(missing_modules) > 0:
raise ValueError(
f"Pipeline {inpainting_cls} expected {expected_modules}, but only {set(passed_class_obj.keys()) + set(original_class_obj.keys())} were passed"
f"Pipeline {inpainting_cls} expected {expected_modules}, but only {set(list(passed_class_obj.keys()) + list(original_class_obj.keys()))} were passed"
)
model = inpainting_cls(**inpainting_kwargs)
...
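The error-message change above is a real bugfix: `set + set` is not a valid Python operation, so the old f-string itself raised a `TypeError` whenever this error path was hit. A quick illustration of the before/after behavior (the variable contents here are made up for the example):
```python
passed_class_obj = {"unet": None}
original_class_obj = {"vae": None}

# Old code: adding two sets raises
# TypeError: unsupported operand type(s) for +: 'set' and 'set'
# set(passed_class_obj.keys()) + set(original_class_obj.keys())

# New code: concatenate the key lists first, then build the set (a plain union).
merged = set(list(passed_class_obj.keys()) + list(original_class_obj.keys()))
assert merged == set(passed_class_obj) | set(original_class_obj)
```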
@@ -12,6 +12,11 @@ except OptionalDependencyNotAvailable:
from ...utils.dummy_torch_and_transformers_objects import *
else:
from .pipeline_kandinsky import KandinskyPipeline
from .pipeline_kandinsky_combined import (
KandinskyCombinedPipeline,
KandinskyImg2ImgCombinedPipeline,
KandinskyInpaintCombinedPipeline,
)
from .pipeline_kandinsky_img2img import KandinskyImg2ImgPipeline
from .pipeline_kandinsky_inpaint import KandinskyInpaintPipeline
from .pipeline_kandinsky_prior import KandinskyPriorPipeline, KandinskyPriorPipelineOutput
...
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List, Optional, Union
from typing import Callable, List, Optional, Union
import torch
from transformers import (
@@ -269,6 +269,8 @@ class KandinskyPipeline(DiffusionPipeline):
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
latents: Optional[torch.FloatTensor] = None,
output_type: Optional[str] = "pil",
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: int = 1,
return_dict: bool = True,
):
"""
@@ -309,6 +311,12 @@ class KandinskyPipeline(DiffusionPipeline):
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
(`np.array`) or `"pt"` (`torch.Tensor`).
callback (`Callable`, *optional*):
A function that will be called every `callback_steps` steps during inference. The function is called with the
following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
callback_steps (`int`, *optional*, defaults to 1):
The frequency at which the `callback` function is called. If not specified, the callback is called at
every step.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
@@ -397,6 +405,10 @@ class KandinskyPipeline(DiffusionPipeline):
latents,
generator=generator,
).prev_sample
if callback is not None and i % callback_steps == 0:
callback(i, t, latents)
# post-processing
image = self.movq.decode(latents, force_not_quantize=True)["sample"]
...
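A short sketch of the new `callback` / `callback_steps` arguments. The callback signature is the one documented in this diff; the checkpoint names follow the Kandinsky docs and are an assumption here:
```python
import torch
from diffusers import KandinskyPipeline, KandinskyPriorPipeline

pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
).to("cuda")

def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
    # Invoked every `callback_steps` denoising steps with the current latents.
    print(f"step={step} timestep={timestep} latents std={latents.std().item():.4f}")

image_embeds, negative_image_embeds = pipe_prior("red cat, 4k photo").to_tuple()
image = pipe(
    "red cat, 4k photo",
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    callback=log_progress,
    callback_steps=10,
).images[0]
```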
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List, Optional, Union
from typing import Callable, List, Optional, Union
import numpy as np
import PIL
@@ -332,6 +332,8 @@ class KandinskyImg2ImgPipeline(DiffusionPipeline):
num_images_per_prompt: int = 1,
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
output_type: Optional[str] = "pil",
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: int = 1,
return_dict: bool = True,
):
"""
@@ -377,6 +379,12 @@ class KandinskyImg2ImgPipeline(DiffusionPipeline):
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
(`np.array`) or `"pt"` (`torch.Tensor`).
callback (`Callable`, *optional*):
A function that will be called every `callback_steps` steps during inference. The function is called with the
following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
callback_steps (`int`, *optional*, defaults to 1):
The frequency at which the `callback` function is called. If not specified, the callback is called at
every step.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
@@ -491,6 +499,9 @@ class KandinskyImg2ImgPipeline(DiffusionPipeline):
generator=generator,
).prev_sample
if callback is not None and i % callback_steps == 0:
callback(i, t, latents)
# 7. post-processing
image = self.movq.decode(latents, force_not_quantize=True)["sample"]
...
@@ -13,17 +13,19 @@
# limitations under the License.
from copy import deepcopy
from typing import List, Optional, Union
from typing import Callable, List, Optional, Union
import numpy as np
import PIL
import torch
import torch.nn.functional as F
from packaging import version
from PIL import Image
from transformers import (
XLMRobertaTokenizer,
)
from ... import __version__
from ...models import UNet2DConditionModel, VQModel
from ...schedulers import DDIMScheduler
from ...utils import (
@@ -65,8 +67,8 @@ EXAMPLE_DOC_STRING = """
... "/kandinsky/cat.png"
... )
>>> mask = np.ones((768, 768), dtype=np.float32)
>>> mask = np.zeros((768, 768), dtype=np.float32)
>>> mask[:250, 250:-250] = 0
>>> mask[:250, 250:-250] = 1
>>> out = pipe(
... prompt,
@@ -232,6 +234,8 @@ def prepare_mask_and_masked_image(image, mask, height, width):
mask[mask >= 0.5] = 1
mask = torch.from_numpy(mask)
mask = 1 - mask
return mask, image
@@ -273,6 +277,7 @@ class KandinskyInpaintPipeline(DiffusionPipeline):
scheduler=scheduler,
)
self.movq_scale_factor = 2 ** (len(self.movq.config.block_out_channels) - 1)
self._warn_has_been_called = False
# Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
@@ -431,6 +436,8 @@ class KandinskyInpaintPipeline(DiffusionPipeline):
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
latents: Optional[torch.FloatTensor] = None,
output_type: Optional[str] = "pil",
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: int = 1,
return_dict: bool = True,
):
"""
@@ -482,6 +489,12 @@ class KandinskyInpaintPipeline(DiffusionPipeline):
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
(`np.array`) or `"pt"` (`torch.Tensor`).
callback (`Callable`, *optional*):
A function that will be called every `callback_steps` steps during inference. The function is called with the
following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
callback_steps (`int`, *optional*, defaults to 1):
The frequency at which the `callback` function is called. If not specified, the callback is called at
every step.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
@@ -490,6 +503,18 @@ class KandinskyInpaintPipeline(DiffusionPipeline):
Returns:
[`~pipelines.ImagePipelineOutput`] or `tuple`
"""
if not self._warn_has_been_called and version.parse(version.parse(__version__).base_version) < version.parse(
"0.22.0.dev0"
):
logger.warn(
"Please note that the expected format of `mask_image` has recently been changed. "
"Before diffusers == 0.19.0, Kandinsky Inpainting pipelines repainted black pixels and preserved black pixels. "
"As of diffusers==0.19.0 this behavior has been inverted. Now white pixels are repainted and black pixels are preserved. "
"This way, Kandinsky's masking behavior is aligned with Stable Diffusion. "
"THIS means that you HAVE to invert the input mask to have the same behavior as before as explained in https://github.com/huggingface/diffusers/pull/4207. "
"This warning will be surpressed after the first inference call and will be removed in diffusers>0.22.0"
)
self._warn_has_been_called = True
# Define call parameters
if isinstance(prompt, str):
@@ -609,6 +634,9 @@ class KandinskyInpaintPipeline(DiffusionPipeline):
generator=generator,
).prev_sample
if callback is not None and i % callback_steps == 0:
callback(i, t, latents)
# post-processing
image = self.movq.decode(latents, force_not_quantize=True)["sample"]
...
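Given the new warning above, a caller relying on the old mask convention only needs to invert the mask. A minimal sketch mirroring the updated docstring example:
```python
import numpy as np

# Old convention (diffusers < 0.19.0): 0 = repaint, 1 = preserve.
legacy_mask = np.ones((768, 768), dtype=np.float32)
legacy_mask[:250, 250:-250] = 0  # repaint the area above the cat's head

# New convention (aligned with Stable Diffusion): 1 = repaint, 0 = preserve.
mask = 1 - legacy_mask
```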
@@ -24,6 +24,8 @@ from ...models import PriorTransformer
from ...schedulers import UnCLIPScheduler
from ...utils import (
BaseOutput,
is_accelerate_available,
is_accelerate_version,
logging,
randn_tensor,
replace_example_docstring,
@@ -179,7 +181,7 @@ class KandinskyPriorPipeline(DiffusionPipeline):
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
latents: Optional[torch.FloatTensor] = None,
negative_prior_prompt: Optional[str] = None,
negative_prompt: Union[str] = "",
negative_prompt: str = "",
guidance_scale: float = 4.0,
device=None,
@@ -393,6 +395,35 @@ class KandinskyPriorPipeline(DiffusionPipeline):
return prompt_embeds, text_encoder_hidden_states, text_mask
def enable_model_cpu_offload(self, gpu_id=0):
r"""
Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
`enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `prior`.
"""
if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
from accelerate import cpu_offload_with_hook
else:
raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
device = torch.device(f"cuda:{gpu_id}")
if self.device.type != "cpu":
self.to("cpu", silence_dtype_warnings=True)
torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
hook = None
for cpu_offloaded_model in [self.text_encoder, self.prior]:
_, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
# We'll offload the last model manually.
self.prior_hook = hook
_, hook = cpu_offload_with_hook(self.image_encoder, device, prev_module_hook=self.prior_hook)
self.final_offload_hook = hook
@torch.no_grad()
@replace_example_docstring(EXAMPLE_DOC_STRING)
def __call__(
@@ -525,9 +556,15 @@ class KandinskyPriorPipeline(DiffusionPipeline):
# if a negative prompt has been defined, we split the image embedding into two
if negative_prompt is None:
zero_embeds = self.get_zero_embed(latents.shape[0], device=latents.device)
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
self.final_offload_hook.offload()
else:
image_embeddings, zero_embeds = image_embeddings.chunk(2)
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
self.prior_hook.offload()
if output_type not in ["pt", "np"]: if output_type not in ["pt", "np"]:
raise ValueError(f"Only the output types `pt` and `np` are supported not output_type={output_type}") raise ValueError(f"Only the output types `pt` and `np` are supported not output_type={output_type}")
......
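A usage sketch for the new offloading method on the prior pipeline (the checkpoint name is an assumption taken from the docs; `accelerate>=0.17.0` must be installed):
```python
import torch
from diffusers import KandinskyPriorPipeline

pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
)
# Instead of pipe_prior.to("cuda"): text_encoder, prior, and image_encoder stay on the
# CPU and are each moved to the GPU only while their forward pass runs.
pipe_prior.enable_model_cpu_offload()

image_embeds, negative_image_embeds = pipe_prior("red cat, 4k photo").to_tuple()
```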
@@ -12,6 +12,11 @@ except OptionalDependencyNotAvailable:
from ...utils.dummy_torch_and_transformers_objects import *
else:
from .pipeline_kandinsky2_2 import KandinskyV22Pipeline
from .pipeline_kandinsky2_2_combined import (
KandinskyV22CombinedPipeline,
KandinskyV22Img2ImgCombinedPipeline,
KandinskyV22InpaintCombinedPipeline,
)
from .pipeline_kandinsky2_2_controlnet import KandinskyV22ControlnetPipeline
from .pipeline_kandinsky2_2_controlnet_img2img import KandinskyV22ControlnetImg2ImgPipeline
from .pipeline_kandinsky2_2_img2img import KandinskyV22Img2ImgPipeline
...
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List, Optional, Union
from typing import Callable, List, Optional, Union
import torch
...@@ -148,11 +148,14 @@ class KandinskyV22Pipeline(DiffusionPipeline): ...@@ -148,11 +148,14 @@ class KandinskyV22Pipeline(DiffusionPipeline):
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
latents: Optional[torch.FloatTensor] = None, latents: Optional[torch.FloatTensor] = None,
output_type: Optional[str] = "pil", output_type: Optional[str] = "pil",
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: int = 1,
return_dict: bool = True,
):
"""
Function invoked when calling the pipeline for generation.
Args:
image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`):
The clip image embeddings for the text prompt, which will be used to condition the image generation.
negative_image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`):
...@@ -182,6 +185,12 @@ class KandinskyV22Pipeline(DiffusionPipeline):
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
(`np.array`) or `"pt"` (`torch.Tensor`).
callback (`Callable`, *optional*):
A function that will be called every `callback_steps` steps during inference. The function is called
with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
callback_steps (`int`, *optional*, defaults to 1):
The frequency at which the `callback` function is called. If not specified, the callback is called at
every step.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
...@@ -258,9 +267,16 @@ class KandinskyV22Pipeline(DiffusionPipeline):
latents,
generator=generator,
)[0]
if callback is not None and i % callback_steps == 0:
callback(i, t, latents)
# post-processing
image = self.movq.decode(latents, force_not_quantize=True)["sample"]
# Offload last model to CPU
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
self.final_offload_hook.offload()
if output_type not in ["pt", "np", "pil"]:
raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}")
...
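Together, the `callback` and `callback_steps` additions above give a hook into the denoising loop. A short sketch (the `log_step` helper is hypothetical, and `image_embeds`/`negative_image_embeds` are assumed to come from one of the prior examples):

```python
import torch
from diffusers import KandinskyV22Pipeline

pipe = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

def log_step(step: int, timestep: int, latents: torch.FloatTensor):
    # Called from inside the denoising loop, after the scheduler update.
    print(f"step {step:03d} | timestep {int(timestep):4d} | latents {tuple(latents.shape)}")

image = pipe(
    image_embeds=image_embeds,                    # produced by a prior pipeline
    negative_image_embeds=negative_image_embeds,  # produced by a prior pipeline
    callback=log_step,
    callback_steps=5,  # invoke the callback every 5 steps instead of every step
).images[0]
```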
...@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Callable, List, Optional, Union
import torch
...@@ -190,6 +190,8 @@ class KandinskyV22ControlnetPipeline(DiffusionPipeline):
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
latents: Optional[torch.FloatTensor] = None,
output_type: Optional[str] = "pil",
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: int = 1,
return_dict: bool = True,
):
"""
...@@ -232,6 +234,12 @@ class KandinskyV22ControlnetPipeline(DiffusionPipeline):
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
(`np.array`) or `"pt"` (`torch.Tensor`).
callback (`Callable`, *optional*):
A function that will be called every `callback_steps` steps during inference. The function is called
with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
callback_steps (`int`, *optional*, defaults to 1):
The frequency at which the `callback` function is called. If not specified, the callback is called at
every step.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
...@@ -313,9 +321,16 @@ class KandinskyV22ControlnetPipeline(DiffusionPipeline):
latents,
generator=generator,
)[0]
if callback is not None and i % callback_steps == 0:
callback(i, t, latents)
# post-processing
image = self.movq.decode(latents, force_not_quantize=True)["sample"]
# Offload last model to CPU
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
self.final_offload_hook.offload()
if output_type not in ["pt", "np", "pil"]:
raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}")
...
...@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Callable, List, Optional, Union
import numpy as np
import PIL
...@@ -246,6 +246,8 @@ class KandinskyV22ControlnetImg2ImgPipeline(DiffusionPipeline):
num_images_per_prompt: int = 1,
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
output_type: Optional[str] = "pil",
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: int = 1,
return_dict: bool = True,
):
"""
...@@ -289,6 +291,12 @@ class KandinskyV22ControlnetImg2ImgPipeline(DiffusionPipeline):
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
(`np.array`) or `"pt"` (`torch.Tensor`).
callback (`Callable`, *optional*):
A function that will be called every `callback_steps` steps during inference. The function is called
with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
callback_steps (`int`, *optional*, defaults to 1):
The frequency at which the `callback` function is called. If not specified, the callback is called at
every step.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
...@@ -374,9 +382,16 @@ class KandinskyV22ControlnetImg2ImgPipeline(DiffusionPipeline):
generator=generator,
)[0]
if callback is not None and i % callback_steps == 0:
callback(i, t, latents)
# post-processing
image = self.movq.decode(latents, force_not_quantize=True)["sample"]
# Offload last model to CPU
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
self.final_offload_hook.offload()
if output_type not in ["pt", "np", "pil"]:
raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}")
...
...@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Callable, List, Optional, Union
import numpy as np
import PIL
...@@ -218,6 +218,8 @@ class KandinskyV22Img2ImgPipeline(DiffusionPipeline):
num_images_per_prompt: int = 1,
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
output_type: Optional[str] = "pil",
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: int = 1,
return_dict: bool = True,
):
"""
...@@ -259,6 +261,12 @@ class KandinskyV22Img2ImgPipeline(DiffusionPipeline):
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
(`np.array`) or `"pt"` (`torch.Tensor`).
callback (`Callable`, *optional*):
A function that will be called every `callback_steps` steps during inference. The function is called
with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
callback_steps (`int`, *optional*, defaults to 1):
The frequency at which the `callback` function is called. If not specified, the callback is called at
every step.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
...@@ -338,9 +346,16 @@ class KandinskyV22Img2ImgPipeline(DiffusionPipeline):
generator=generator,
)[0]
if callback is not None and i % callback_steps == 0:
callback(i, t, latents)
# post-processing
image = self.movq.decode(latents, force_not_quantize=True)["sample"]
# Offload last model to CPU
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
self.final_offload_hook.offload()
if output_type not in ["pt", "np", "pil"]:
raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}")
...
...@@ -13,14 +13,16 @@
# limitations under the License.
from copy import deepcopy
from typing import Callable, List, Optional, Union
import numpy as np
import PIL
import torch
import torch.nn.functional as F
from packaging import version
from PIL import Image
from ... import __version__
from ...models import UNet2DConditionModel, VQModel
from ...schedulers import DDPMScheduler
from ...utils import (
...@@ -61,8 +63,8 @@ EXAMPLE_DOC_STRING = """
... "/kandinsky/cat.png"
... )
>>> mask = np.zeros((768, 768), dtype=np.float32)
>>> mask[:250, 250:-250] = 1
>>> out = pipe(
... image=init_image,
...@@ -230,6 +232,8 @@ def prepare_mask_and_masked_image(image, mask, height, width):
mask[mask >= 0.5] = 1
mask = torch.from_numpy(mask)
mask = 1 - mask
return mask, image
...@@ -263,6 +267,7 @@ class KandinskyV22InpaintPipeline(DiffusionPipeline):
movq=movq,
)
self.movq_scale_factor = 2 ** (len(self.movq.config.block_out_channels) - 1)
self._warn_has_been_called = False
# Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
...@@ -318,19 +323,22 @@ class KandinskyV22InpaintPipeline(DiffusionPipeline):
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
latents: Optional[torch.FloatTensor] = None,
output_type: Optional[str] = "pil",
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: int = 1,
return_dict: bool = True,
):
"""
Function invoked when calling the pipeline for generation.
Args:
image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`):
The clip image embeddings for the text prompt, which will be used to condition the image generation.
image (`PIL.Image.Image`):
`Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
be masked out with `mask_image` and repainted according to `prompt`.
mask_image (`np.array`):
Tensor representing an image batch, to mask `image`. White pixels in the mask will be repainted, while
black pixels will be preserved. If `mask_image` is a PIL image, it will be converted to a single
channel (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3,
so the expected shape would be `(B, H, W, 1)`.
negative_image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`):
...@@ -360,6 +368,12 @@ class KandinskyV22InpaintPipeline(DiffusionPipeline):
output_type (`str`, *optional*, defaults to `"pil"`):
The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
(`np.array`) or `"pt"` (`torch.Tensor`).
callback (`Callable`, *optional*):
A function that will be called every `callback_steps` steps during inference. The function is called
with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
callback_steps (`int`, *optional*, defaults to 1):
The frequency at which the `callback` function is called. If not specified, the callback is called at
every step.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
...@@ -368,6 +382,19 @@ class KandinskyV22InpaintPipeline(DiffusionPipeline):
Returns:
[`~pipelines.ImagePipelineOutput`] or `tuple`
"""
if not self._warn_has_been_called and version.parse(version.parse(__version__).base_version) < version.parse(
"0.22.0.dev0"
):
logger.warn(
"Please note that the expected format of `mask_image` has recently been changed. "
"Before diffusers == 0.19.0, Kandinsky Inpainting pipelines repainted black pixels and preserved white pixels. "
"As of diffusers==0.19.0 this behavior has been inverted. Now white pixels are repainted and black pixels are preserved. "
"This way, Kandinsky's masking behavior is aligned with Stable Diffusion. "
"THIS means that you HAVE to invert the input mask to have the same behavior as before as explained in https://github.com/huggingface/diffusers/pull/4207. "
"This warning will be suppressed after the first inference call and will be removed in diffusers>0.22.0"
)
self._warn_has_been_called = True
device = self._execution_device
do_classifier_free_guidance = guidance_scale > 1.0
...@@ -470,10 +497,18 @@ class KandinskyV22InpaintPipeline(DiffusionPipeline):
)
latents = init_mask * init_latents_proper + (1 - init_mask) * latents
if callback is not None and i % callback_steps == 0:
callback(i, t, latents)
# post-processing
latents = mask_image[:1] * image[:1] + (1 - mask_image[:1]) * latents
image = self.movq.decode(latents, force_not_quantize=True)["sample"]
# Offload last model to CPU
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
self.final_offload_hook.offload()
if output_type not in ["pt", "np", "pil"]:
raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}")
...
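Since the mask convention now matches Stable Diffusion (white/1 = repaint, black/0 = preserve), a pre-0.19.0 mask only needs to be inverted. A small migration sketch, assuming a float mask in [0, 1] like the one in the docstring example:

```python
import numpy as np

# Pre-0.19.0 convention: 0 marked the region to repaint, 1 the region to keep.
old_mask = np.ones((768, 768), dtype=np.float32)
old_mask[:250, 250:-250] = 0  # repaint the area above the cat's head

# As of 0.19.0: 1 marks the region to repaint, 0 the region to keep,
# so an existing mask is migrated with a simple inversion.
new_mask = 1 - old_mask
assert new_mask[0, 400] == 1.0  # the area above the cat's head is now "repaint"
```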
...@@ -7,6 +7,8 @@ from transformers import CLIPImageProcessor, CLIPTextModelWithProjection, CLIPTo
from ...models import PriorTransformer
from ...schedulers import UnCLIPScheduler
from ...utils import (
is_accelerate_available,
is_accelerate_version,
logging,
randn_tensor,
replace_example_docstring,
...@@ -137,7 +139,7 @@ class KandinskyV22PriorPipeline(DiffusionPipeline):
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
latents: Optional[torch.FloatTensor] = None,
negative_prior_prompt: Optional[str] = None,
negative_prompt: str = "",
guidance_scale: float = 4.0,
device=None,
):
...@@ -353,6 +355,35 @@ class KandinskyV22PriorPipeline(DiffusionPipeline):
return prompt_embeds, text_encoder_hidden_states, text_mask
def enable_model_cpu_offload(self, gpu_id=0):
r"""
Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
`enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
"""
if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
from accelerate import cpu_offload_with_hook
else:
raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
device = torch.device(f"cuda:{gpu_id}")
if self.device.type != "cpu":
self.to("cpu", silence_dtype_warnings=True)
torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
hook = None
for cpu_offloaded_model in [self.text_encoder, self.prior]:
_, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
# We'll offload the last model manually.
self.prior_hook = hook
_, hook = cpu_offload_with_hook(self.image_encoder, device, prev_module_hook=self.prior_hook)
self.final_offload_hook = hook
@torch.no_grad()
@replace_example_docstring(EXAMPLE_DOC_STRING)
def __call__(
...@@ -485,9 +516,15 @@ class KandinskyV22PriorPipeline(DiffusionPipeline):
# if a negative prompt has been defined, we split the image embedding into two
if negative_prompt is None:
zero_embeds = self.get_zero_embed(latents.shape[0], device=latents.device)
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
self.final_offload_hook.offload()
else:
image_embeddings, zero_embeds = image_embeddings.chunk(2)
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
self.prior_hook.offload()
if output_type not in ["pt", "np"]:
raise ValueError(f"Only the output types `pt` and `np` are supported not output_type={output_type}")
...
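Putting the pieces together, a hedged end-to-end sketch of the 2.2 prior plus decoder with model offload enabled on both. It uses the public `kandinsky-community/kandinsky-2-2-prior` and `kandinsky-community/kandinsky-2-2-decoder` checkpoints and assumes `enable_model_cpu_offload` is exposed on the decoder pipeline as well, as the `final_offload_hook` handling above suggests:

```python
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
prior.enable_model_cpu_offload()

decoder = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
decoder.enable_model_cpu_offload()

# The prior maps the prompt to CLIP image embeddings; the decoder turns the
# embeddings into pixels. Both keep their sub-models on the CPU except while
# each forward pass runs.
image_embeds, negative_image_embeds = prior("red cat, 4k photo").to_tuple()
image = decoder(
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=768,
    width=768,
).images[0]
image.save("red_cat.png")
```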