"vscode:/vscode.git/clone" did not exist on "9fdb7dabe120b52a7681140ec63c0f55df7b10fc"
README_community_scripts.md 20.6 KB
Newer Older
1
2
# Community Scripts

**Community scripts** consist of inference examples using Diffusers pipelines that have been added by the community.
Please have a look at the following table to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste code example that you can try out.
If a community script doesn't work as expected, please open an issue and ping the author on it.

| Example                                                                                                                               | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | Code Example                                                                              | Colab                                                                                                                                                                                                              |                                                        Author |
|:--------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------:|
| Using IP-Adapter with Negative Noise                                                                                                  | Use negative noise with IP-Adapter to better control generation (see the [original post](https://github.com/huggingface/diffusers/discussions/7167) for more details)                                                                                                                                                                                                                                                                      | [IP-Adapter Negative Noise](#ip-adapter-negative-noise)                                   | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/ip_adapter_negative_noise.ipynb) | [Álvaro Somoza](https://github.com/asomoza)|
| Asymmetric Tiling                                                                                                                     | Configure seamless image tiling independently for the X and Y axes                                                                                                                                                                                                                                                                                                                                                                         | [Asymmetric Tiling](#asymmetric-tiling)                                                   | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/asymetric_tiling.ipynb) | [alexisrolland](https://github.com/alexisrolland)|
| Prompt Scheduling Callback                                                                                                            | Change prompts during generation                                                                                                                                                                                                                                                                                                                                                                                                           | [Prompt Scheduling Callback](#prompt-scheduling-callback)                                 | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/prompt_scheduling_callback.ipynb) | [hlky](https://github.com/hlky)|


## Example usages

### IP Adapter Negative Noise

Diffusers pipelines are fully integrated with IP-Adapter, which allows you to prompt the diffusion model with an image. However, they do not support negative image prompts (there is no `negative_ip_adapter_image` argument) the way they support negative text prompts. When you pass an `ip_adapter_image`, a zero-filled tensor is used as the negative image. This script shows how to create negative noise from the `ip_adapter_image` and use it to significantly improve generation quality while preserving the composition of the image.

[cubiq](https://github.com/cubiq) initially developed this feature in his [repository](https://github.com/cubiq/ComfyUI_IPAdapter_plus). The community script was contributed by [asomoza](https://github.com/asomoza). You can find more details about this experiment in [this discussion](https://github.com/huggingface/diffusers/discussions/7167).

IP-Adapter without negative noise
|source|result|
|---|---|
|![20240229150812](https://github.com/huggingface/diffusers/assets/5442875/901d8bd8-7a59-4fe7-bda1-a0e0d6c7dffd)|![20240229163923_normal](https://github.com/huggingface/diffusers/assets/5442875/3432e25a-ece6-45f4-a3f4-fca354f40b5b)|

IP-Adapter with negative noise
|source|result|
|---|---|
|![20240229150812](https://github.com/huggingface/diffusers/assets/5442875/901d8bd8-7a59-4fe7-bda1-a0e0d6c7dffd)|![20240229163923](https://github.com/huggingface/diffusers/assets/5442875/736fd15a-36ba-40c0-a7d8-6ec1ac26f788)|

```python
import torch

from diffusers import AutoencoderKL, DPMSolverMultistepScheduler, StableDiffusionXLPipeline
from diffusers.models import ImageProjection
from diffusers.utils import load_image


def encode_image(
    image_encoder,
    feature_extractor,
    image,
    device,
    num_images_per_prompt,
    output_hidden_states=None,
    negative_image=None,
):
    dtype = next(image_encoder.parameters()).dtype

    if not isinstance(image, torch.Tensor):
        image = feature_extractor(image, return_tensors="pt").pixel_values

    image = image.to(device=device, dtype=dtype)
    if output_hidden_states:
        image_enc_hidden_states = image_encoder(image, output_hidden_states=True).hidden_states[-2]
        image_enc_hidden_states = image_enc_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)

        if negative_image is None:
            uncond_image_enc_hidden_states = image_encoder(
                torch.zeros_like(image), output_hidden_states=True
            ).hidden_states[-2]
        else:
            if not isinstance(negative_image, torch.Tensor):
                negative_image = feature_extractor(negative_image, return_tensors="pt").pixel_values
            negative_image = negative_image.to(device=device, dtype=dtype)
            uncond_image_enc_hidden_states = image_encoder(negative_image, output_hidden_states=True).hidden_states[-2]

        uncond_image_enc_hidden_states = uncond_image_enc_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
        return image_enc_hidden_states, uncond_image_enc_hidden_states
    else:
        image_embeds = image_encoder(image).image_embeds
        image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0)
        uncond_image_embeds = torch.zeros_like(image_embeds)

        return image_embeds, uncond_image_embeds


@torch.no_grad()
def prepare_ip_adapter_image_embeds(
    unet,
    image_encoder,
    feature_extractor,
    ip_adapter_image,
    do_classifier_free_guidance,
    device,
    num_images_per_prompt,
    ip_adapter_negative_image=None,
):
    if not isinstance(ip_adapter_image, list):
        ip_adapter_image = [ip_adapter_image]

    if len(ip_adapter_image) != len(unet.encoder_hid_proj.image_projection_layers):
        raise ValueError(
            f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(unet.encoder_hid_proj.image_projection_layers)} IP Adapters."
        )

    image_embeds = []
    for single_ip_adapter_image, image_proj_layer in zip(
        ip_adapter_image, unet.encoder_hid_proj.image_projection_layers
    ):
        output_hidden_state = not isinstance(image_proj_layer, ImageProjection)
        single_image_embeds, single_negative_image_embeds = encode_image(
            image_encoder,
            feature_extractor,
            single_ip_adapter_image,
            device,
            1,
            output_hidden_state,
            negative_image=ip_adapter_negative_image,
        )
        single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0)
        single_negative_image_embeds = torch.stack([single_negative_image_embeds] * num_images_per_prompt, dim=0)

        if do_classifier_free_guidance:
            single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds])
            single_image_embeds = single_image_embeds.to(device)

        image_embeds.append(single_image_embeds)

    return image_embeds


vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16,
).to("cuda")

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v9",
    torch_dtype=torch.float16,
    vae=vae,
    variant="fp16",
).to("cuda")

pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline.scheduler.config.use_karras_sigmas = True

pipeline.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name="ip-adapter-plus_sdxl_vit-h.safetensors",
    image_encoder_folder="models/image_encoder",
)
pipeline.set_ip_adapter_scale(0.7)

ip_image = load_image("source.png")
negative_ip_image = load_image("noise.png")

image_embeds = prepare_ip_adapter_image_embeds(
    unet=pipeline.unet,
    image_encoder=pipeline.image_encoder,
    feature_extractor=pipeline.feature_extractor,
    ip_adapter_image=[[ip_image]],
    do_classifier_free_guidance=True,
    device="cuda",
    num_images_per_prompt=1,
    ip_adapter_negative_image=negative_ip_image,
)


prompt = "cinematic photo of a cyborg in the city, 4k, high quality, intricate, highly detailed"
negative_prompt = "blurry, smooth, plastic"

image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    ip_adapter_image_embeds=image_embeds,
    guidance_scale=6.0,
    num_inference_steps=25,
    generator=torch.Generator(device="cpu").manual_seed(1556265306),
).images[0]

image.save("result.png")
```
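The script above loads a `noise.png` file as the negative IP-Adapter image. The original discussion experiments with noise images as negatives; as a rough sketch (the filename and size are assumptions, not part of the script), you can generate one from uniform random noise:

```python
import numpy as np
from PIL import Image

# Create a 1024x1024 RGB image of uniform random noise to use as the
# negative IP-Adapter image ("noise.png" in the example above).
rng = np.random.default_rng(0)
noise = rng.integers(0, 256, size=(1024, 1024, 3), dtype=np.uint8)
Image.fromarray(noise, mode="RGB").save("noise.png")
```

Other negative images (for example, a blurred or degraded version of the source) are also worth trying; the discussion linked above compares several options.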

### Asymmetric Tiling
Stable Diffusion is not trained to generate seamless textures, but you can use this simple script to add tiling to your generations. This script was contributed by [alexisrolland](https://github.com/alexisrolland). See more details in [this issue](https://github.com/huggingface/diffusers/issues/556).


|Generated|Tiled|
|---|---|
|![20240313003235_573631814](https://github.com/huggingface/diffusers/assets/5442875/eca174fb-06a4-464e-a3a7-00dbb024543e)|![wall](https://github.com/huggingface/diffusers/assets/5442875/b4aa774b-2a6a-4316-a8eb-8f30b5f4d024)|


```py
import torch
from typing import Optional
from diffusers import StableDiffusionPipeline
from diffusers.models.lora import LoRACompatibleConv

def seamless_tiling(pipeline, x_axis, y_axis):
    def asymmetric_conv2d_convforward(self, input: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor] = None):
        self.paddingX = (self._reversed_padding_repeated_twice[0], self._reversed_padding_repeated_twice[1], 0, 0)
        self.paddingY = (0, 0, self._reversed_padding_repeated_twice[2], self._reversed_padding_repeated_twice[3])
        working = torch.nn.functional.pad(input, self.paddingX, mode=x_mode)
        working = torch.nn.functional.pad(working, self.paddingY, mode=y_mode)
        return torch.nn.functional.conv2d(working, weight, bias, self.stride, torch.nn.modules.utils._pair(0), self.dilation, self.groups)
    x_mode = 'circular' if x_axis else 'constant'
    y_mode = 'circular' if y_axis else 'constant'
    targets = [pipeline.vae, pipeline.text_encoder, pipeline.unet]
    convolution_layers = []
    for target in targets:
        for module in target.modules():
            if isinstance(module, torch.nn.Conv2d):
                convolution_layers.append(module)
    for layer in convolution_layers:
        if isinstance(layer, LoRACompatibleConv) and layer.lora_layer is None:
            layer.lora_layer = lambda *x: 0
        layer._conv_forward = asymmetric_conv2d_convforward.__get__(layer, torch.nn.Conv2d)
    return pipeline

pipeline = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True)
pipeline.enable_model_cpu_offload()
prompt = ["texture of a red brick wall"]
seed = 123456
generator = torch.Generator(device='cuda').manual_seed(seed)

pipeline = seamless_tiling(pipeline=pipeline, x_axis=True, y_axis=True)
image = pipeline(
    prompt=prompt,
    width=512,
    height=512,
    num_inference_steps=20,
    guidance_scale=7,
    num_images_per_prompt=1,
    generator=generator
).images[0]
seamless_tiling(pipeline=pipeline, x_axis=False, y_axis=False)

torch.cuda.empty_cache()
image.save('image.png')
```
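The trick relies on `torch.nn.functional.pad` with `mode='circular'`, which wraps pixels from the opposite edge before each convolution so the network treats the image as a torus. A minimal sketch of the difference between circular and constant (zero) padding:

```python
import torch
import torch.nn.functional as F

x = torch.arange(1.0, 5.0).reshape(1, 1, 1, 4)  # a 1x4 "image": [1, 2, 3, 4]

# Constant padding inserts zeros at the borders...
constant = F.pad(x, (1, 1, 0, 0), mode="constant")
# ...while circular padding wraps values from the opposite edge,
# which is what makes the generated texture tile seamlessly.
circular = F.pad(x, (1, 1, 0, 0), mode="circular")

print(constant.flatten().tolist())  # [0.0, 1.0, 2.0, 3.0, 4.0, 0.0]
print(circular.flatten().tolist())  # [4.0, 1.0, 2.0, 3.0, 4.0, 1.0]
```

Setting `x_axis` or `y_axis` to `False` in `seamless_tiling` simply falls back to constant padding on that axis, so tiling can be enabled per direction.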

### Prompt Scheduling Callback

The prompt scheduling callback allows changing prompts during a generation, like [prompt editing in A1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing).

```python
from diffusers import StableDiffusionPipeline
from diffusers.callbacks import PipelineCallback, MultiPipelineCallbacks
from diffusers.configuration_utils import register_to_config
import torch
from typing import Any, Dict, Tuple, Union


class SDPromptSchedulingCallback(PipelineCallback):
    @register_to_config
    def __init__(
        self,
        encoded_prompt: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
        cutoff_step_ratio=None,
        cutoff_step_index=None,
    ):
        super().__init__(
            cutoff_step_ratio=cutoff_step_ratio, cutoff_step_index=cutoff_step_index
        )

    tensor_inputs = ["prompt_embeds"]

    def callback_fn(
        self, pipeline, step_index, timestep, callback_kwargs
    ) -> Dict[str, Any]:
        cutoff_step_ratio = self.config.cutoff_step_ratio
        cutoff_step_index = self.config.cutoff_step_index
        if isinstance(self.config.encoded_prompt, tuple):
            prompt_embeds, negative_prompt_embeds = self.config.encoded_prompt
        else:
            prompt_embeds = self.config.encoded_prompt

        # Use cutoff_step_index if it's not None, otherwise use cutoff_step_ratio
        cutoff_step = (
            cutoff_step_index
            if cutoff_step_index is not None
            else int(pipeline.num_timesteps * cutoff_step_ratio)
        )

        if step_index == cutoff_step:
            if pipeline.do_classifier_free_guidance:
                prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
            callback_kwargs[self.tensor_inputs[0]] = prompt_embeds
        return callback_kwargs


pipeline: StableDiffusionPipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")
pipeline.safety_checker = None
pipeline.requires_safety_checker = False

callback = MultiPipelineCallbacks(
    [
        SDPromptSchedulingCallback(
            encoded_prompt=pipeline.encode_prompt(
                prompt=f"prompt {index}",
                negative_prompt=f"negative prompt {index}",
                device=pipeline._execution_device,
                num_images_per_prompt=1,
                # pipeline.do_classifier_free_guidance can't be accessed until after the pipeline is run
                do_classifier_free_guidance=True,
            ),
            cutoff_step_index=index,
        ) for index in range(1, 20)
    ]
)

image = pipeline(
    prompt="prompt",
    negative_prompt="negative prompt",
    callback_on_step_end=callback,
    callback_on_step_end_tensor_inputs=["prompt_embeds"],
).images[0]
torch.cuda.empty_cache()
image.save('image.png')
```
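Each callback fires at a single cutoff step, resolved from either `cutoff_step_index` or `cutoff_step_ratio`. The resolution logic inside `callback_fn` can be sketched as a standalone function (the name `resolve_cutoff_step` is hypothetical, for illustration only):

```python
def resolve_cutoff_step(num_timesteps, cutoff_step_ratio=None, cutoff_step_index=None):
    """Return the step at which a scheduled prompt takes effect.

    An explicit index wins; otherwise the ratio is scaled by the
    total number of timesteps, mirroring the callback logic above.
    """
    if cutoff_step_index is not None:
        return cutoff_step_index
    return int(num_timesteps * cutoff_step_ratio)

print(resolve_cutoff_step(50, cutoff_step_index=7))    # 7
print(resolve_cutoff_step(50, cutoff_step_ratio=0.5))  # 25
```

Because the example builds one callback per `cutoff_step_index` in `range(1, 20)`, the prompt embeddings are swapped at every one of those steps in turn.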
The same approach works for Stable Diffusion XL, where the `add_text_embeds` and `add_time_ids` tensors need to be scheduled alongside the prompt embeddings:

```python
from diffusers import StableDiffusionXLPipeline
from diffusers.callbacks import PipelineCallback, MultiPipelineCallbacks
from diffusers.configuration_utils import register_to_config
import torch
from typing import Any, Dict, Tuple, Union


class SDXLPromptSchedulingCallback(PipelineCallback):
    @register_to_config
    def __init__(
        self,
        encoded_prompt: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
        add_text_embeds: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
        add_time_ids: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
        cutoff_step_ratio=None,
        cutoff_step_index=None,
    ):
        super().__init__(
            cutoff_step_ratio=cutoff_step_ratio, cutoff_step_index=cutoff_step_index
        )

    tensor_inputs = ["prompt_embeds", "add_text_embeds", "add_time_ids"]

    def callback_fn(
        self, pipeline, step_index, timestep, callback_kwargs
    ) -> Dict[str, Any]:
        cutoff_step_ratio = self.config.cutoff_step_ratio
        cutoff_step_index = self.config.cutoff_step_index
        if isinstance(self.config.encoded_prompt, tuple):
            prompt_embeds, negative_prompt_embeds = self.config.encoded_prompt
        else:
            prompt_embeds = self.config.encoded_prompt
        if isinstance(self.config.add_text_embeds, tuple):
            add_text_embeds, negative_add_text_embeds = self.config.add_text_embeds
        else:
            add_text_embeds = self.config.add_text_embeds
        if isinstance(self.config.add_time_ids, tuple):
            add_time_ids, negative_add_time_ids = self.config.add_time_ids
        else:
            add_time_ids = self.config.add_time_ids

        # Use cutoff_step_index if it's not None, otherwise use cutoff_step_ratio
        cutoff_step = (
            cutoff_step_index
            if cutoff_step_index is not None
            else int(pipeline.num_timesteps * cutoff_step_ratio)
        )

        if step_index == cutoff_step:
            if pipeline.do_classifier_free_guidance:
                prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
                add_text_embeds = torch.cat([negative_add_text_embeds, add_text_embeds])
                add_time_ids = torch.cat([negative_add_time_ids, add_time_ids])
            callback_kwargs[self.tensor_inputs[0]] = prompt_embeds
            callback_kwargs[self.tensor_inputs[1]] = add_text_embeds
            callback_kwargs[self.tensor_inputs[2]] = add_time_ids
        return callback_kwargs


pipeline: StableDiffusionXLPipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

callbacks = []
for index in range(1, 20):
    (
        prompt_embeds,
        negative_prompt_embeds,
        pooled_prompt_embeds,
        negative_pooled_prompt_embeds,
    ) = pipeline.encode_prompt(
        prompt=f"prompt {index}",
        negative_prompt=f"negative prompt {index}",
        device=pipeline._execution_device,
        num_images_per_prompt=1,
        # pipeline.do_classifier_free_guidance can't be accessed until after the pipeline is run
        do_classifier_free_guidance=True,
    )
    text_encoder_projection_dim = int(pooled_prompt_embeds.shape[-1])
    add_time_ids = pipeline._get_add_time_ids(
        (1024, 1024),
        (0, 0),
        (1024, 1024),
        dtype=prompt_embeds.dtype,
        text_encoder_projection_dim=text_encoder_projection_dim,
    )
    negative_add_time_ids = pipeline._get_add_time_ids(
        (1024, 1024),
        (0, 0),
        (1024, 1024),
        dtype=prompt_embeds.dtype,
        text_encoder_projection_dim=text_encoder_projection_dim,
    )
    callbacks.append(
        SDXLPromptSchedulingCallback(
            encoded_prompt=(prompt_embeds, negative_prompt_embeds),
            add_text_embeds=(pooled_prompt_embeds, negative_pooled_prompt_embeds),
            add_time_ids=(add_time_ids, negative_add_time_ids),
            cutoff_step_index=index,
        )
    )

callback = MultiPipelineCallbacks(callbacks)

image = pipeline(
    prompt="prompt",
    negative_prompt="negative prompt",
    callback_on_step_end=callback,
    callback_on_step_end_tensor_inputs=[
        "prompt_embeds",
        "add_text_embeds",
        "add_time_ids",
    ],
).images[0]
```