Unverified Commit 540399f5 authored by YiYi Xu, committed by GitHub

add PAG support (#7944)



* first draft


---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Junhwa Song <ethan9867@gmail.com>
Co-authored-by: Ahn Donghoon (안동훈 / suno) <suno.vivid@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
parent f088027e
@@ -81,6 +81,8 @@
     title: Kandinsky
   - local: using-diffusers/ip_adapter
     title: IP-Adapter
+  - local: using-diffusers/pag
+    title: PAG
   - local: using-diffusers/controlnet
     title: ControlNet
   - local: using-diffusers/t2i_adapter
@@ -322,6 +324,8 @@
     title: MultiDiffusion
   - local: api/pipelines/musicldm
     title: MusicLDM
+  - local: api/pipelines/pag
+    title: PAG
   - local: api/pipelines/paint_by_example
     title: Paint by Example
   - local: api/pipelines/pia
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Perturbed-Attention Guidance
[Perturbed-Attention Guidance (PAG)](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) is a new diffusion sampling guidance method that improves sample quality across both unconditional and conditional settings, without requiring further training or the integration of external modules.
PAG was introduced in [Self-Rectifying Diffusion Sampling with Perturbed-Attention Guidance](https://huggingface.co/papers/2403.17377) by Donghoon Ahn, Hyoungwon Cho, Jaewon Min, Wooseok Jang, Jungwoo Kim, SeonHwa Kim, Hyun Hee Park, Kyong Hwan Jin and Seungryong Kim.
The abstract from the paper is:
*Recent studies have demonstrated that diffusion models are capable of generating high-quality samples, but their quality heavily depends on sampling guidance techniques, such as classifier guidance (CG) and classifier-free guidance (CFG). These techniques are often not applicable in unconditional generation or in various downstream tasks such as image restoration. In this paper, we propose a novel sampling guidance, called Perturbed-Attention Guidance (PAG), which improves diffusion sample quality across both unconditional and conditional settings, achieving this without requiring additional training or the integration of external modules. PAG is designed to progressively enhance the structure of samples throughout the denoising process. It involves generating intermediate samples with degraded structure by substituting selected self-attention maps in diffusion U-Net with an identity matrix, by considering the self-attention mechanisms' ability to capture structural information, and guiding the denoising process away from these degraded samples. In both ADM and Stable Diffusion, PAG surprisingly improves sample quality in conditional and even unconditional scenarios. Moreover, PAG significantly improves the baseline performance in various downstream tasks where existing guidances such as CG or CFG cannot be fully utilized, including ControlNet with empty prompts and image restoration such as inpainting and deblurring.*
## StableDiffusionXLPAGPipeline
[[autodoc]] StableDiffusionXLPAGPipeline
- all
- __call__
## StableDiffusionXLPAGImg2ImgPipeline
[[autodoc]] StableDiffusionXLPAGImg2ImgPipeline
- all
- __call__
## StableDiffusionXLPAGInpaintPipeline
[[autodoc]] StableDiffusionXLPAGInpaintPipeline
- all
- __call__
## StableDiffusionXLControlNetPAGPipeline
[[autodoc]] StableDiffusionXLControlNetPAGPipeline
- all
- __call__
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Perturbed-Attention Guidance
[Perturbed-Attention Guidance (PAG)](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) is a new diffusion sampling guidance method that improves sample quality across both unconditional and conditional settings, without requiring further training or the integration of external modules. PAG progressively enhances the structure of synthesized samples throughout the denoising process by exploiting the self-attention mechanism's ability to capture structural information: it generates intermediate samples with degraded structure by substituting selected self-attention maps in the diffusion U-Net with an identity matrix, and guides the denoising process away from these degraded samples.
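In practice, each denoising step runs an extra forward pass in which the selected self-attention maps are replaced by the identity, and the two noise predictions are then combined. Below is a minimal sketch of that combination step with illustrative variable names and shapes, not the exact pipeline internals:

```py
import torch

pag_scale = 3.0
noise_pred_org = torch.randn(1, 4, 128, 128)  # prediction from the normal self-attention path
noise_pred_ptb = torch.randn(1, 4, 128, 128)  # prediction from the perturbed (identity) self-attention path

# guide the sample away from the structurally degraded prediction
noise_pred = noise_pred_org + pag_scale * (noise_pred_org - noise_pred_ptb)
```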
This guide will show you how to use PAG for various tasks and use cases.
## General tasks
You can apply PAG to the [`StableDiffusionXLPipeline`] for tasks such as text-to-image, image-to-image, and inpainting. To enable PAG for a specific task, load the pipeline using the [AutoPipeline](../api/pipelines/auto_pipeline) API with the `enable_pag=True` flag and the `pag_applied_layers` argument.
> [!TIP]
> 🤗 Diffusers currently only supports using PAG with selected SDXL pipelines, but feel free to open a [feature request](https://github.com/huggingface/diffusers/issues/new/choose) if you want to add PAG support to a new pipeline!
<hfoptions id="tasks">
<hfoption id="Text-to-image">
```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    pag_applied_layers=["mid"],
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
```
> [!TIP]
> The `pag_applied_layers` argument allows you to specify which layers PAG is applied to. Additionally, you can use `set_pag_applied_layers` method to update these layers after the pipeline has been created. Check out the [pag_applied_layers](#pag_applied_layers) section to learn more about applying PAG to other layers.
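For example, assuming the `pipeline` created above, you can move PAG to a different block without reloading the model:

```py
# a minimal sketch: apply PAG to a down block instead of the mid block
# (layer names follow the down/mid/up convention shown in the pag_applied_layers section)
pipeline.set_pag_applied_layers(["down.block_2"])
```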
To generate an image, you will also need to pass a `pag_scale`. As `pag_scale` increases, images gain more semantically coherent structures and exhibit fewer artifacts. However, an overly large guidance scale can lead to smoother textures and slight saturation, similarly to CFG. `pag_scale=3.0` is used in the official demo and works well in most use cases, but feel free to experiment and select the appropriate value according to your needs! PAG is disabled when `pag_scale=0`.
```py
prompt = "an insect robot preparing a delicious meal, anime style"
for pag_scale in [0.0, 3.0]:
    generator = torch.Generator(device="cpu").manual_seed(0)
    images = pipeline(
        prompt=prompt,
        num_inference_steps=25,
        guidance_scale=7.0,
        generator=generator,
        pag_scale=pag_scale,
    ).images
```
<div class="flex flex-row gap-4">
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_0.0_cfg_7.0_mid.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">generated image without PAG</figcaption>
</div>
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_cfg_7.0_mid.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">generated image with PAG</figcaption>
</div>
</div>
</hfoption>
<hfoption id="Image-to-image">
Similarly, you can use PAG with image-to-image pipelines.
```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    pag_applied_layers=["mid"],
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()

pag_scale = 4.0
guidance_scale = 7.0
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"
init_image = load_image(url)
prompt = "a dog catching a frisbee in the jungle"
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipeline(
    prompt,
    image=init_image,
    strength=0.8,
    guidance_scale=guidance_scale,
    pag_scale=pag_scale,
    generator=generator,
).images[0]
```
</hfoption>
<hfoption id="Inpainting">
```py
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch
pipeline = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = load_image(img_url).convert("RGB")
mask_image = load_image(mask_url).convert("RGB")
prompt = "A majestic tiger sitting on a bench"
pag_scale = 3.0
guidance_scale = 7.5
generator = torch.Generator(device="cpu").manual_seed(1)
images = pipeline(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    strength=0.8,
    num_inference_steps=50,
    guidance_scale=guidance_scale,
    generator=generator,
    pag_scale=pag_scale,
).images
images[0]
```
</hfoption>
</hfoptions>
## PAG with ControlNet
To use PAG with ControlNet, first create a `controlnet`. Then, pass the `controlnet` and other PAG arguments to the `from_pretrained` method of the AutoPipeline for the specified task.
```py
from diffusers import AutoPipelineForText2Image, ControlNetModel
import torch
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    enable_pag=True,
    pag_applied_layers="mid",
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
```
You can use the pipeline in the same way you normally use ControlNet pipelines, with the added option to specify a `pag_scale` parameter. Note that PAG works well for unconditional generation. In this example, we will generate an image without a prompt.
```py
from diffusers.utils import load_image
canny_image = load_image(
    "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_control_input.png"
)

controlnet_conditioning_scale = 1.0  # pipeline default; adjust to weaken or strengthen the ControlNet signal

for pag_scale in [0.0, 3.0]:
    generator = torch.Generator(device="cpu").manual_seed(1)
    images = pipeline(
        prompt="",
        controlnet_conditioning_scale=controlnet_conditioning_scale,
        image=canny_image,
        num_inference_steps=50,
        guidance_scale=0,
        generator=generator,
        pag_scale=pag_scale,
    ).images
images[0]
```
<div class="flex flex-row gap-4">
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_0.0_controlnet.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">generated image without PAG</figcaption>
</div>
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_controlnet.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">generated image with PAG</figcaption>
</div>
</div>
## PAG with IP-Adapter
[IP-Adapter](https://hf.co/papers/2308.06721) is a popular model that can be plugged into diffusion models to enable image prompting without any changes to the underlying model. You can enable PAG on a pipeline with IP-Adapter loaded.
```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
from transformers import CLIPVisionModelWithProjection
import torch
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder,
    enable_pag=True,
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter-plus_sdxl_vit-h.bin")
pag_scale = 5.0
ip_adapter_scale = 0.8

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png")

pipeline.set_ip_adapter_scale(ip_adapter_scale)
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
    prompt="a polar bear sitting in a chair drinking a milkshake",
    ip_adapter_image=image,
    negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
    num_inference_steps=25,
    guidance_scale=3.0,
    generator=generator,
    pag_scale=pag_scale,
).images
images[0]
```
PAG reduces artifacts and improves the overall composition.
<div class="flex flex-row gap-4">
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_0.0_ipa_0.8.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">generated image without PAG</figcaption>
</div>
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_5.0_ipa_0.8.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">generated image with PAG</figcaption>
</div>
</div>
## Configure parameters
### pag_applied_layers
The `pag_applied_layers` argument allows you to specify which layers PAG is applied to. By default, it applies only to the mid blocks. Changing this setting will significantly impact the output. You can use the `set_pag_applied_layers` method to adjust the PAG layers after the pipeline is created, helping you find the optimal layers for your model.
As an example, here are the images generated with `pag_layers = ["down.block_2"]` and `pag_layers = ["down.block_2", "up.block_1.attentions_0"]`:
```py
prompt = "an insect robot preparing a delicious meal, anime style"
pipeline.set_pag_applied_layers(["down.block_2", "up.block_1.attentions_0"])
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
    prompt=prompt,
    num_inference_steps=25,
    guidance_scale=7.0,
    generator=generator,
    pag_scale=3.0,
).images
images[0]
```
<div class="flex flex-row gap-4">
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_cfg_7.0_down2_up1a0.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">down.block_2 + up.block_1.attentions_0</figcaption>
</div>
<div class="flex-1">
<img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_cfg_7.0_down2.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">down.block_2</figcaption>
</div>
</div>
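To compare several configurations in one run, you can loop over candidate layer lists with a fixed seed, reusing the `pipeline` and `prompt` from above (a minimal sketch):

```py
for pag_layers in [["mid"], ["down.block_2"], ["down.block_2", "up.block_1.attentions_0"]]:
    pipeline.set_pag_applied_layers(pag_layers)
    generator = torch.Generator(device="cpu").manual_seed(0)
    image = pipeline(
        prompt=prompt,
        num_inference_steps=25,
        guidance_scale=7.0,
        pag_scale=3.0,
        generator=generator,
    ).images[0]
```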
@@ -311,11 +311,15 @@ else:
         "StableDiffusionXLAdapterPipeline",
         "StableDiffusionXLControlNetImg2ImgPipeline",
         "StableDiffusionXLControlNetInpaintPipeline",
+        "StableDiffusionXLControlNetPAGPipeline",
         "StableDiffusionXLControlNetPipeline",
         "StableDiffusionXLControlNetXSPipeline",
         "StableDiffusionXLImg2ImgPipeline",
         "StableDiffusionXLInpaintPipeline",
         "StableDiffusionXLInstructPix2PixPipeline",
+        "StableDiffusionXLPAGImg2ImgPipeline",
+        "StableDiffusionXLPAGInpaintPipeline",
+        "StableDiffusionXLPAGPipeline",
         "StableDiffusionXLPipeline",
         "StableUnCLIPImg2ImgPipeline",
         "StableUnCLIPPipeline",
@@ -702,11 +706,15 @@ if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT:
         StableDiffusionXLAdapterPipeline,
         StableDiffusionXLControlNetImg2ImgPipeline,
         StableDiffusionXLControlNetInpaintPipeline,
+        StableDiffusionXLControlNetPAGPipeline,
         StableDiffusionXLControlNetPipeline,
         StableDiffusionXLControlNetXSPipeline,
         StableDiffusionXLImg2ImgPipeline,
         StableDiffusionXLInpaintPipeline,
         StableDiffusionXLInstructPix2PixPipeline,
+        StableDiffusionXLPAGImg2ImgPipeline,
+        StableDiffusionXLPAGInpaintPipeline,
+        StableDiffusionXLPAGPipeline,
         StableDiffusionXLPipeline,
         StableUnCLIPImg2ImgPipeline,
         StableUnCLIPPipeline,
@@ -922,8 +922,6 @@ class UNet2DConditionLoadersMixin:
     def _convert_ip_adapter_attn_to_diffusers(self, state_dicts, low_cpu_mem_usage=False):
         from ..models.attention_processor import (
-            AttnProcessor,
-            AttnProcessor2_0,
             IPAdapterAttnProcessor,
             IPAdapterAttnProcessor2_0,
         )
@@ -963,9 +961,7 @@ class UNet2DConditionLoadersMixin:
                 hidden_size = self.config.block_out_channels[block_id]
             if cross_attention_dim is None or "motion_modules" in name:
-                attn_processor_class = (
-                    AttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else AttnProcessor
-                )
+                attn_processor_class = self.attn_processors[name].__class__
                 attn_procs[name] = attn_processor_class()
             else:
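The change above means `_convert_ip_adapter_attn_to_diffusers` no longer resets self-attention layers to the default `AttnProcessor`/`AttnProcessor2_0`, but reuses whatever processor class is currently installed, so custom processors such as the PAG ones added below survive `load_ip_adapter`. A toy illustration of the reuse pattern (`MyCustomSelfAttnProcessor` is a hypothetical stand-in):

```py
class MyCustomSelfAttnProcessor:  # stand-in for e.g. PAGIdentitySelfAttnProcessor2_0
    pass

# stand-in for self.attn_processors: layer name -> processor instance
attn_processors = {"mid_block.attn1.processor": MyCustomSelfAttnProcessor()}

name = "mid_block.attn1.processor"
attn_processor_class = attn_processors[name].__class__  # reuse the installed class
attn_procs = {name: attn_processor_class()}             # fresh instance, same type
```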
@@ -2561,6 +2561,220 @@ class IPAdapterAttnProcessor2_0(torch.nn.Module):
         return hidden_states
class PAGIdentitySelfAttnProcessor2_0:
    r"""
    Processor for implementing PAG using scaled dot-product attention (enabled by default if you're using PyTorch 2.0).
    PAG reference: https://arxiv.org/abs/2403.17377
    """

    def __init__(self):
        if not hasattr(F, "scaled_dot_product_attention"):
            raise ImportError(
                "PAGIdentitySelfAttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0."
            )

    def __call__(
        self,
        attn: Attention,
        hidden_states: torch.FloatTensor,
        encoder_hidden_states: Optional[torch.FloatTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        temb: Optional[torch.FloatTensor] = None,
    ) -> torch.Tensor:
        residual = hidden_states
        if attn.spatial_norm is not None:
            hidden_states = attn.spatial_norm(hidden_states, temb)

        input_ndim = hidden_states.ndim
        if input_ndim == 4:
            batch_size, channel, height, width = hidden_states.shape
            hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2)

        # chunk: the batch holds the original and the perturbed latents concatenated along dim 0
        hidden_states_org, hidden_states_ptb = hidden_states.chunk(2)

        # original path
        batch_size, sequence_length, _ = hidden_states_org.shape

        if attention_mask is not None:
            attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
            # scaled_dot_product_attention expects attention_mask shape to be
            # (batch, heads, source_length, target_length)
            attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])

        if attn.group_norm is not None:
            hidden_states_org = attn.group_norm(hidden_states_org.transpose(1, 2)).transpose(1, 2)

        query = attn.to_q(hidden_states_org)
        key = attn.to_k(hidden_states_org)
        value = attn.to_v(hidden_states_org)

        inner_dim = key.shape[-1]
        head_dim = inner_dim // attn.heads

        query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
        key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
        value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)

        # the output of sdp = (batch, num_heads, seq_len, head_dim)
        # TODO: add support for attn.scale when we move to Torch 2.1
        hidden_states_org = F.scaled_dot_product_attention(
            query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
        )

        hidden_states_org = hidden_states_org.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
        hidden_states_org = hidden_states_org.to(query.dtype)

        # linear proj
        hidden_states_org = attn.to_out[0](hidden_states_org)
        # dropout
        hidden_states_org = attn.to_out[1](hidden_states_org)

        if input_ndim == 4:
            hidden_states_org = hidden_states_org.transpose(-1, -2).reshape(batch_size, channel, height, width)

        # perturbed path (identity attention): with an identity attention map each token attends
        # only to itself, so the attention output reduces to the value projection
        batch_size, sequence_length, _ = hidden_states_ptb.shape

        if attention_mask is not None:
            attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
            # scaled_dot_product_attention expects attention_mask shape to be
            # (batch, heads, source_length, target_length)
            attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])

        if attn.group_norm is not None:
            hidden_states_ptb = attn.group_norm(hidden_states_ptb.transpose(1, 2)).transpose(1, 2)

        hidden_states_ptb = attn.to_v(hidden_states_ptb)
        hidden_states_ptb = hidden_states_ptb.to(query.dtype)

        # linear proj
        hidden_states_ptb = attn.to_out[0](hidden_states_ptb)
        # dropout
        hidden_states_ptb = attn.to_out[1](hidden_states_ptb)

        if input_ndim == 4:
            hidden_states_ptb = hidden_states_ptb.transpose(-1, -2).reshape(batch_size, channel, height, width)

        # cat
        hidden_states = torch.cat([hidden_states_org, hidden_states_ptb])

        if attn.residual_connection:
            hidden_states = hidden_states + residual

        hidden_states = hidden_states / attn.rescale_output_factor

        return hidden_states
class PAGCFGIdentitySelfAttnProcessor2_0:
    r"""
    Processor for implementing PAG using scaled dot-product attention (enabled by default if you're using PyTorch 2.0).
    PAG reference: https://arxiv.org/abs/2403.17377
    """

    def __init__(self):
        if not hasattr(F, "scaled_dot_product_attention"):
            raise ImportError(
                "PAGCFGIdentitySelfAttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0."
            )

    def __call__(
        self,
        attn: Attention,
        hidden_states: torch.FloatTensor,
        encoder_hidden_states: Optional[torch.FloatTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        temb: Optional[torch.FloatTensor] = None,
    ) -> torch.Tensor:
        residual = hidden_states
        if attn.spatial_norm is not None:
            hidden_states = attn.spatial_norm(hidden_states, temb)

        input_ndim = hidden_states.ndim
        if input_ndim == 4:
            batch_size, channel, height, width = hidden_states.shape
            hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2)

        # chunk: with CFG the batch holds [unconditional, conditional, perturbed] latents;
        # the unconditional and conditional parts both go through the normal attention path
        hidden_states_uncond, hidden_states_org, hidden_states_ptb = hidden_states.chunk(3)
        hidden_states_org = torch.cat([hidden_states_uncond, hidden_states_org])

        # original path
        batch_size, sequence_length, _ = hidden_states_org.shape

        if attention_mask is not None:
            attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
            # scaled_dot_product_attention expects attention_mask shape to be
            # (batch, heads, source_length, target_length)
            attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])

        if attn.group_norm is not None:
            hidden_states_org = attn.group_norm(hidden_states_org.transpose(1, 2)).transpose(1, 2)

        query = attn.to_q(hidden_states_org)
        key = attn.to_k(hidden_states_org)
        value = attn.to_v(hidden_states_org)

        inner_dim = key.shape[-1]
        head_dim = inner_dim // attn.heads

        query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
        key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
        value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)

        # the output of sdp = (batch, num_heads, seq_len, head_dim)
        # TODO: add support for attn.scale when we move to Torch 2.1
        hidden_states_org = F.scaled_dot_product_attention(
            query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
        )

        hidden_states_org = hidden_states_org.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
        hidden_states_org = hidden_states_org.to(query.dtype)

        # linear proj
        hidden_states_org = attn.to_out[0](hidden_states_org)
        # dropout
        hidden_states_org = attn.to_out[1](hidden_states_org)

        if input_ndim == 4:
            hidden_states_org = hidden_states_org.transpose(-1, -2).reshape(batch_size, channel, height, width)

        # perturbed path (identity attention): each token attends only to itself, so the
        # attention output reduces to the value projection
        batch_size, sequence_length, _ = hidden_states_ptb.shape

        if attention_mask is not None:
            attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
            # scaled_dot_product_attention expects attention_mask shape to be
            # (batch, heads, source_length, target_length)
            attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])

        if attn.group_norm is not None:
            hidden_states_ptb = attn.group_norm(hidden_states_ptb.transpose(1, 2)).transpose(1, 2)

        value = attn.to_v(hidden_states_ptb)
        hidden_states_ptb = value
        hidden_states_ptb = hidden_states_ptb.to(query.dtype)

        # linear proj
        hidden_states_ptb = attn.to_out[0](hidden_states_ptb)
        # dropout
        hidden_states_ptb = attn.to_out[1](hidden_states_ptb)

        if input_ndim == 4:
            hidden_states_ptb = hidden_states_ptb.transpose(-1, -2).reshape(batch_size, channel, height, width)

        # cat
        hidden_states = torch.cat([hidden_states_org, hidden_states_ptb])

        if attn.residual_connection:
            hidden_states = hidden_states + residual

        hidden_states = hidden_states / attn.rescale_output_factor

        return hidden_states
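Both processors assume a batch layout prepared by the PAG pipelines, which the `chunk(2)` / `chunk(3)` calls above then split apart. A rough sketch of that layout (hypothetical variable names):

```py
import torch

latents = torch.randn(1, 4, 64, 64)

# PAGIdentitySelfAttnProcessor2_0 (no CFG): [original, perturbed]
latent_model_input = torch.cat([latents] * 2)

# PAGCFGIdentitySelfAttnProcessor2_0 (with CFG): [unconditional, conditional, perturbed]
latent_model_input = torch.cat([latents] * 3)
```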
ADDED_KV_ATTENTION_PROCESSORS = (
    AttnAddedKVProcessor,
    SlicedAttnAddedKVProcessor,
@@ -2590,4 +2804,6 @@ AttentionProcessor = Union[
     CustomDiffusionAttnProcessor,
     CustomDiffusionXFormersAttnProcessor,
     CustomDiffusionAttnProcessor2_0,
+    PAGCFGIdentitySelfAttnProcessor2_0,
+    PAGIdentitySelfAttnProcessor2_0,
 ]
@@ -26,6 +26,7 @@ _import_structure = {
     "latent_diffusion": [],
     "ledits_pp": [],
     "marigold": [],
+    "pag": [],
     "stable_diffusion": [],
     "stable_diffusion_xl": [],
 }
@@ -137,6 +138,14 @@ else:
             "StableDiffusionXLControlNetPipeline",
         ]
     )
+    _import_structure["pag"].extend(
+        [
+            "StableDiffusionXLPAGPipeline",
+            "StableDiffusionXLPAGInpaintPipeline",
+            "StableDiffusionXLControlNetPAGPipeline",
+            "StableDiffusionXLPAGImg2ImgPipeline",
+        ]
+    )
     _import_structure["controlnet_xs"].extend(
         [
             "StableDiffusionControlNetXSPipeline",
@@ -472,6 +481,12 @@ if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT:
             MarigoldNormalsPipeline,
         )
         from .musicldm import MusicLDMPipeline
+        from .pag import (
+            StableDiffusionXLControlNetPAGPipeline,
+            StableDiffusionXLPAGImg2ImgPipeline,
+            StableDiffusionXLPAGInpaintPipeline,
+            StableDiffusionXLPAGPipeline,
+        )
         from .paint_by_example import PaintByExamplePipeline
         from .pia import PIAPipeline
         from .pixart_alpha import PixArtAlphaPipeline, PixArtSigmaPipeline
@@ -352,6 +352,9 @@ class AnimateDiffPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -361,7 +364,6 @@ class AnimateDiffPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )

-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -369,36 +371,28 @@ class AnimateDiffPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )

+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)

-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.text_to_video_synthesis/pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
     def decode_latents(self, latents):
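The refactored helper now always returns one tensor per IP-Adapter: embeds are collected first, then tiled `num_images_per_prompt` times, with negative embeds prepended when CFG is on. A self-contained sketch of that final loop with illustrative shapes (the 257×1280 hidden-state shape is just an example):

```py
import torch

num_images_per_prompt = 2
single_image_embeds = torch.randn(1, 1, 257, 1280)           # positive embeds for one IP-Adapter
single_negative_image_embeds = torch.randn(1, 1, 257, 1280)  # matching negative embeds

# tile per prompt image, then prepend negatives for classifier-free guidance
single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
single_negative_image_embeds = torch.cat([single_negative_image_embeds] * num_images_per_prompt, dim=0)
out = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
print(out.shape)  # torch.Size([4, 1, 257, 1280]): negatives first, then positives
```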
@@ -564,6 +564,9 @@ class AnimateDiffSDXLPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -573,7 +576,6 @@ class AnimateDiffSDXLPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )

-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -581,36 +583,28 @@ class AnimateDiffSDXLPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )

+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)

-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.text_to_video_synthesis/pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
     def decode_latents(self, latents):
@@ -456,6 +456,9 @@ class AnimateDiffVideoToVideoPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -465,7 +468,6 @@ class AnimateDiffVideoToVideoPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )

-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -473,36 +475,28 @@ class AnimateDiffVideoToVideoPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )

+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)

-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.text_to_video_synthesis/pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
     def decode_latents(self, latents):
@@ -46,6 +46,12 @@ from .kandinsky2_2 import (
 )
 from .kandinsky3 import Kandinsky3Img2ImgPipeline, Kandinsky3Pipeline
 from .latent_consistency_models import LatentConsistencyModelImg2ImgPipeline, LatentConsistencyModelPipeline
+from .pag import (
+    StableDiffusionXLControlNetPAGPipeline,
+    StableDiffusionXLPAGImg2ImgPipeline,
+    StableDiffusionXLPAGInpaintPipeline,
+    StableDiffusionXLPAGPipeline,
+)
 from .pixart_alpha import PixArtAlphaPipeline, PixArtSigmaPipeline
 from .stable_cascade import StableCascadeCombinedPipeline, StableCascadeDecoderPipeline
 from .stable_diffusion import (
@@ -82,6 +88,8 @@ AUTO_TEXT2IMAGE_PIPELINES_MAPPING = OrderedDict(
         ("lcm", LatentConsistencyModelPipeline),
         ("pixart-alpha", PixArtAlphaPipeline),
         ("pixart-sigma", PixArtSigmaPipeline),
+        ("stable-diffusion-xl-pag", StableDiffusionXLPAGPipeline),
+        ("stable-diffusion-xl-controlnet-pag", StableDiffusionXLControlNetPAGPipeline),
     ]
 )
@@ -96,6 +104,7 @@ AUTO_IMAGE2IMAGE_PIPELINES_MAPPING = OrderedDict(
         ("kandinsky3", Kandinsky3Img2ImgPipeline),
         ("stable-diffusion-controlnet", StableDiffusionControlNetImg2ImgPipeline),
         ("stable-diffusion-xl-controlnet", StableDiffusionXLControlNetImg2ImgPipeline),
+        ("stable-diffusion-xl-pag", StableDiffusionXLPAGImg2ImgPipeline),
         ("lcm", LatentConsistencyModelImg2ImgPipeline),
     ]
 )
@@ -109,6 +118,7 @@ AUTO_INPAINT_PIPELINES_MAPPING = OrderedDict(
         ("kandinsky22", KandinskyV22InpaintCombinedPipeline),
         ("stable-diffusion-controlnet", StableDiffusionControlNetInpaintPipeline),
         ("stable-diffusion-xl-controlnet", StableDiffusionXLControlNetInpaintPipeline),
+        ("stable-diffusion-xl-pag", StableDiffusionXLPAGInpaintPipeline),
     ]
 )
@@ -340,6 +350,10 @@ class AutoPipelineForText2Image(ConfigMixin):
         if "controlnet" in kwargs:
             orig_class_name = config["_class_name"].replace("Pipeline", "ControlNetPipeline")
+        if "enable_pag" in kwargs:
+            enable_pag = kwargs.pop("enable_pag")
+            if enable_pag:
+                orig_class_name = orig_class_name.replace("Pipeline", "PAGPipeline")

         text_2_image_cls = _get_task_class(AUTO_TEXT2IMAGE_PIPELINES_MAPPING, orig_class_name)
@@ -383,14 +397,28 @@ class AutoPipelineForText2Image(ConfigMixin):
         if "controlnet" in kwargs:
             if kwargs["controlnet"] is not None:
+                to_replace = "PAGPipeline" if "PAG" in text_2_image_cls.__name__ else "Pipeline"
                 text_2_image_cls = _get_task_class(
                     AUTO_TEXT2IMAGE_PIPELINES_MAPPING,
-                    text_2_image_cls.__name__.replace("ControlNet", "").replace("Pipeline", "ControlNetPipeline"),
+                    text_2_image_cls.__name__.replace("ControlNet", "").replace(to_replace, "ControlNet" + to_replace),
                 )
             else:
                 text_2_image_cls = _get_task_class(
                     AUTO_TEXT2IMAGE_PIPELINES_MAPPING,
-                    text_2_image_cls.__name__.replace("ControlNetPipeline", "Pipeline"),
+                    text_2_image_cls.__name__.replace("ControlNet", ""),
                 )

+        if "enable_pag" in kwargs:
+            enable_pag = kwargs.pop("enable_pag")
+            if enable_pag:
+                text_2_image_cls = _get_task_class(
+                    AUTO_TEXT2IMAGE_PIPELINES_MAPPING,
+                    text_2_image_cls.__name__.replace("PAG", "").replace("Pipeline", "PAGPipeline"),
+                )
+            else:
+                text_2_image_cls = _get_task_class(
+                    AUTO_TEXT2IMAGE_PIPELINES_MAPPING,
+                    text_2_image_cls.__name__.replace("PAG", ""),
+                )

         # define expected module and optional kwargs given the pipeline signature
@@ -613,6 +641,10 @@ class AutoPipelineForImage2Image(ConfigMixin):
         if "controlnet" in kwargs:
             orig_class_name = config["_class_name"].replace("Pipeline", "ControlNetPipeline")
+        if "enable_pag" in kwargs:
+            enable_pag = kwargs.pop("enable_pag")
+            if enable_pag:
+                orig_class_name = orig_class_name.replace("Pipeline", "PAGPipeline")

         image_2_image_cls = _get_task_class(AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, orig_class_name)
@@ -658,16 +690,32 @@ class AutoPipelineForImage2Image(ConfigMixin):
         if "controlnet" in kwargs:
             if kwargs["controlnet"] is not None:
+                to_replace = "Img2ImgPipeline"
+                if "PAG" in image_2_image_cls.__name__:
+                    to_replace = "PAG" + to_replace
                 image_2_image_cls = _get_task_class(
                     AUTO_IMAGE2IMAGE_PIPELINES_MAPPING,
                     image_2_image_cls.__name__.replace("ControlNet", "").replace(
-                        "Img2ImgPipeline", "ControlNetImg2ImgPipeline"
+                        to_replace, "ControlNet" + to_replace
                     ),
                 )
             else:
                 image_2_image_cls = _get_task_class(
                     AUTO_IMAGE2IMAGE_PIPELINES_MAPPING,
-                    image_2_image_cls.__name__.replace("ControlNetImg2ImgPipeline", "Img2ImgPipeline"),
+                    image_2_image_cls.__name__.replace("ControlNet", ""),
                 )

+        if "enable_pag" in kwargs:
+            enable_pag = kwargs.pop("enable_pag")
+            if enable_pag:
+                image_2_image_cls = _get_task_class(
+                    AUTO_IMAGE2IMAGE_PIPELINES_MAPPING,
+                    image_2_image_cls.__name__.replace("PAG", "").replace("Img2ImgPipeline", "PAGImg2ImgPipeline"),
+                )
+            else:
+                image_2_image_cls = _get_task_class(
+                    AUTO_IMAGE2IMAGE_PIPELINES_MAPPING,
+                    image_2_image_cls.__name__.replace("PAG", ""),
+                )

         # define expected module and optional kwargs given the pipeline signature
@@ -889,6 +937,10 @@ class AutoPipelineForInpainting(ConfigMixin):
         if "controlnet" in kwargs:
             orig_class_name = config["_class_name"].replace("Pipeline", "ControlNetPipeline")
+        if "enable_pag" in kwargs:
+            enable_pag = kwargs.pop("enable_pag")
+            if enable_pag:
+                orig_class_name = config["_class_name"].replace("Pipeline", "PAGPipeline")

         inpainting_cls = _get_task_class(AUTO_INPAINT_PIPELINES_MAPPING, orig_class_name)
@@ -945,6 +997,19 @@ class AutoPipelineForInpainting(ConfigMixin):
                     inpainting_cls.__name__.replace("ControlNetInpaintPipeline", "InpaintPipeline"),
                 )

+        if "enable_pag" in kwargs:
+            enable_pag = kwargs.pop("enable_pag")
+            if enable_pag:
+                inpainting_cls = _get_task_class(
+                    AUTO_INPAINT_PIPELINES_MAPPING,
+                    inpainting_cls.__name__.replace("PAG", "").replace("InpaintPipeline", "PAGInpaintPipeline"),
+                )
+            else:
+                inpainting_cls = _get_task_class(
+                    AUTO_INPAINT_PIPELINES_MAPPING,
+                    inpainting_cls.__name__.replace("PAGInpaintPipeline", "InpaintPipeline"),
+                )

         # define expected module and optional kwargs given the pipeline signature
         expected_modules, optional_kwargs = inpainting_cls._get_signature_keys(inpainting_cls)
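The net effect of the `enable_pag` handling above is a string rewrite of the task class name before lookup in the task mapping. A toy trace (plain string operations mirroring the calls in the diff, not the actual control flow):

```py
orig_class_name = "StableDiffusionXLPipeline"
orig_class_name = orig_class_name.replace("Pipeline", "ControlNetPipeline")  # a controlnet was passed
orig_class_name = orig_class_name.replace("Pipeline", "PAGPipeline")         # enable_pag=True
print(orig_class_name)  # StableDiffusionXLControlNetPAGPipeline
```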
@@ -499,6 +499,9 @@ class StableDiffusionControlNetPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -508,7 +511,6 @@ class StableDiffusionControlNetPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )

-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -516,36 +518,28 @@ class StableDiffusionControlNetPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )

+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)

-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
     def run_safety_checker(self, image, device, dtype):
@@ -477,6 +477,9 @@ class StableDiffusionControlNetImg2ImgPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -486,7 +489,6 @@ class StableDiffusionControlNetImg2ImgPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )
-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -494,36 +496,28 @@ class StableDiffusionControlNetImg2ImgPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )
+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)
-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
     def run_safety_checker(self, image, device, dtype):
...
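The `else` branch above also pins down the contract for user-supplied `ip_adapter_image_embeds`: with classifier-free guidance, each tensor must carry the negative embeds in the first half of dim 0 so that `chunk(2)` can split them. A minimal sketch of building such an input (shapes are illustrative, not prescribed by the pipeline):

import torch

# Precomputed embeds for one IP-Adapter: negative first, positive second,
# concatenated along dim 0 so that .chunk(2) recovers the two halves.
positive = torch.randn(1, 4, 768)  # (batch, seq_len, embed_dim) - illustrative
negative = torch.zeros_like(positive)
single_embeds = torch.cat([negative, positive], dim=0)  # shape (2, 4, 768)

neg_half, pos_half = single_embeds.chunk(2)  # what the pipeline does under CFG
assert torch.equal(neg_half, negative) and torch.equal(pos_half, positive)

# Pass one entry per loaded IP-Adapter:
ip_adapter_image_embeds = [single_embeds]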
@@ -479,6 +479,9 @@ class StableDiffusionControlNetInpaintPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -488,7 +491,6 @@ class StableDiffusionControlNetInpaintPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )
-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -496,36 +498,28 @@ class StableDiffusionControlNetInpaintPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )
+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)
-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
     def run_safety_checker(self, image, device, dtype):
...
@@ -533,6 +533,9 @@ class StableDiffusionXLControlNetInpaintPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -542,7 +545,6 @@ class StableDiffusionXLControlNetInpaintPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )
-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -550,36 +552,28 @@ class StableDiffusionXLControlNetInpaintPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )
+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)
-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
     def prepare_extra_step_kwargs(self, generator, eta):
...
@@ -554,6 +554,9 @@ class StableDiffusionXLControlNetPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -563,7 +566,6 @@ class StableDiffusionXLControlNetPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )
-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -571,36 +573,28 @@ class StableDiffusionXLControlNetPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )
+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)
-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
     def prepare_extra_step_kwargs(self, generator, eta):
...
@@ -548,6 +548,9 @@ class StableDiffusionXLControlNetImg2ImgPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -557,7 +560,6 @@ class StableDiffusionXLControlNetImg2ImgPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )
-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -565,36 +567,28 @@ class StableDiffusionXLControlNetImg2ImgPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )
+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)
-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
     def prepare_extra_step_kwargs(self, generator, eta):
...
@@ -441,6 +441,9 @@ class LatentConsistencyModelImg2ImgPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -450,7 +453,6 @@ class LatentConsistencyModelImg2ImgPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )
-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -458,36 +460,28 @@ class LatentConsistencyModelImg2ImgPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )
+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)
-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
     def run_safety_checker(self, image, device, dtype):
...
@@ -425,6 +425,9 @@ class LatentConsistencyModelPipeline(
     def prepare_ip_adapter_image_embeds(
         self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance
     ):
+        image_embeds = []
+        if do_classifier_free_guidance:
+            negative_image_embeds = []
         if ip_adapter_image_embeds is None:
             if not isinstance(ip_adapter_image, list):
                 ip_adapter_image = [ip_adapter_image]
@@ -434,7 +437,6 @@ class LatentConsistencyModelPipeline(
                     f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
                 )
-            image_embeds = []
             for single_ip_adapter_image, image_proj_layer in zip(
                 ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers
             ):
@@ -442,36 +444,28 @@ class LatentConsistencyModelPipeline(
                 single_image_embeds, single_negative_image_embeds = self.encode_image(
                     single_ip_adapter_image, device, 1, output_hidden_state
                 )
-                single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
-                single_negative_image_embeds = torch.stack(
-                    [single_negative_image_embeds] * num_images_per_prompt, dim=0
-                )
+                image_embeds.append(single_image_embeds[None, :])
                 if do_classifier_free_guidance:
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                    single_image_embeds = single_image_embeds.to(device)
-                image_embeds.append(single_image_embeds)
+                    negative_image_embeds.append(single_negative_image_embeds[None, :])
         else:
-            repeat_dims = [1]
-            image_embeds = []
             for single_image_embeds in ip_adapter_image_embeds:
                 if do_classifier_free_guidance:
                     single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2)
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
-                    single_negative_image_embeds = single_negative_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:]))
-                    )
-                    single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
-                else:
-                    single_image_embeds = single_image_embeds.repeat(
-                        num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:]))
-                    )
+                    negative_image_embeds.append(single_negative_image_embeds)
                 image_embeds.append(single_image_embeds)
-        return image_embeds
+        ip_adapter_image_embeds = []
+        for i, single_image_embeds in enumerate(image_embeds):
+            single_image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
+            if do_classifier_free_guidance:
+                single_negative_image_embeds = torch.cat([negative_image_embeds[i]] * num_images_per_prompt, dim=0)
+                single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds], dim=0)
+            single_image_embeds = single_image_embeds.to(device=device)
+            ip_adapter_image_embeds.append(single_image_embeds)
+        return ip_adapter_image_embeds

     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
     def run_safety_checker(self, image, device, dtype):
...
from typing import TYPE_CHECKING

from ...utils import (
    DIFFUSERS_SLOW_IMPORT,
    OptionalDependencyNotAvailable,
    _LazyModule,
    get_objects_from_module,
    is_flax_available,
    is_torch_available,
    is_transformers_available,
)


_dummy_objects = {}
_import_structure = {}

try:
    if not (is_transformers_available() and is_torch_available()):
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    from ...utils import dummy_torch_and_transformers_objects  # noqa F403

    _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
else:
    _import_structure["pipeline_pag_controlnet_sd_xl"] = ["StableDiffusionXLControlNetPAGPipeline"]
    _import_structure["pipeline_pag_sd_xl"] = ["StableDiffusionXLPAGPipeline"]
    _import_structure["pipeline_pag_sd_xl_img2img"] = ["StableDiffusionXLPAGImg2ImgPipeline"]
    _import_structure["pipeline_pag_sd_xl_inpaint"] = ["StableDiffusionXLPAGInpaintPipeline"]

if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT:
    try:
        if not (is_transformers_available() and is_torch_available()):
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        from ...utils.dummy_torch_and_transformers_objects import *
    else:
        from .pipeline_pag_controlnet_sd_xl import StableDiffusionXLControlNetPAGPipeline
        from .pipeline_pag_sd_xl import StableDiffusionXLPAGPipeline
        from .pipeline_pag_sd_xl_img2img import StableDiffusionXLPAGImg2ImgPipeline
        from .pipeline_pag_sd_xl_inpaint import StableDiffusionXLPAGInpaintPipeline
else:
    import sys

    sys.modules[__name__] = _LazyModule(
        __name__,
        globals()["__file__"],
        _import_structure,
        module_spec=__spec__,
    )

    for name, value in _dummy_objects.items():
        setattr(sys.modules[__name__], name, value)
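With the lazy-import table above in place, the new SDXL PAG pipelines become importable from the top-level package. A minimal usage sketch; the checkpoint name and the guidance arguments (`pag_scale` in particular) are taken as this PR's call signature rather than a finalized API:

import torch
from diffusers import StableDiffusionXLPAGPipeline

# Hypothetical usage of the pipeline registered above; `pag_scale` is assumed
# to control the strength of perturbed-attention guidance in this PR.
pipe = StableDiffusionXLPAGPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "an astronaut riding a horse on mars",
    guidance_scale=7.0,  # regular CFG can be combined with PAG
    pag_scale=3.0,       # PAG strength; 0 disables the perturbed branch
).images[0]
image.save("pag_sample.png")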