<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Text-guided image-to-image generation

[[open-in-colab]]

The [`StableDiffusionImg2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images.

Before you begin, make sure you have all the necessary libraries installed:

```bash
!pip install diffusers transformers ftfy accelerate
```

Get started by creating a [`StableDiffusionImg2ImgPipeline`] with a pretrained Stable Diffusion model like [`nitrosocke/Ghibli-Diffusion`](https://huggingface.co/nitrosocke/Ghibli-Diffusion).

```python
import torch
import requests
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "nitrosocke/Ghibli-Diffusion", torch_dtype=torch.float16
).to(device)
```

Download and preprocess an initial image so you can pass it to the pipeline:

```python
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image.thumbnail((768, 768))
init_image
```
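`thumbnail` resizes the image in place so that both sides fit within 768 pixels while preserving the aspect ratio. If you want to predict the resulting size yourself, the fitting math is roughly what the sketch below implements (`fit_within` is an illustrative helper, not a diffusers or PIL function, and it only approximates PIL's exact rounding):

```python
def fit_within(size, max_size=(768, 768)):
    """Scale a (width, height) pair down to fit inside max_size, keeping
    the aspect ratio. Images that already fit are returned unchanged,
    mirroring how Image.thumbnail never upscales."""
    width, height = size
    max_w, max_h = max_size
    scale = min(max_w / width, max_h / height, 1.0)
    return (round(width * scale), round(height * scale))
```

For example, a 1024x512 image is scaled by 0.75 to 768x384, while a 400x300 image is left alone.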

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/image_2_image_using_diffusers_cell_8_output_0.jpeg"/>
</div>

<Tip>
💡 `strength` is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input.
</Tip>
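Under the hood, `strength` controls how far along the noise schedule the pipeline starts: roughly `strength * num_inference_steps` denoising steps are actually run. The sketch below is a simplified approximation of that bookkeeping (the real pipeline also clamps and offsets timesteps), useful for reasoning about how much denoising a given `strength` buys you:

```python
def effective_inference_steps(num_inference_steps: int, strength: float) -> int:
    """Approximate number of denoising steps img2img actually runs.

    With strength=1.0 the input image is fully noised and every step runs;
    lower values skip the earliest (noisiest) steps, so the output stays
    closer to the input image."""
    return min(int(num_inference_steps * strength), num_inference_steps)
```

So with the defaults used in this guide (50 steps, `strength=0.75`), about 37 denoising steps are performed.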

Define the prompt (for this checkpoint finetuned on Ghibli-style art, you need to prefix the prompt with the `ghibli style` tokens) and run the pipeline:

```python
prompt = "ghibli style, a fantasy landscape with castles"
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ghibli-castles.png"/>
</div>

You can also try experimenting with a different scheduler to see how that affects the output:

```python
from diffusers import LMSDiscreteScheduler

lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = lms
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lms-ghibli.png"/>
</div>

Check out the Space below, and try generating images with different values for `strength`. You'll notice that lower `strength` values produce images that stay closer to the original.

Feel free to also switch the scheduler to the [`LMSDiscreteScheduler`] and see how that affects the output.
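If you'd rather run that comparison locally, a small sweep helper does the job. `strength_sweep` below is a hypothetical convenience function, not part of diffusers; it simply calls the pipeline once per `strength` value and collects the first image from each result:

```python
def strength_sweep(pipe, prompt, init_image, strengths, **pipe_kwargs):
    """Run the pipeline once per strength value and return a
    {strength: image} mapping for side-by-side comparison."""
    results = {}
    for s in strengths:
        output = pipe(prompt=prompt, image=init_image, strength=s, **pipe_kwargs)
        results[s] = output.images[0]
    return results
```

For a fair comparison, seed a fresh `torch.Generator` inside the loop (rather than passing one via `pipe_kwargs`) so every strength value starts from the same noise.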

<iframe
	src="https://stevhliu-ghibli-img2img.hf.space"
	frameborder="0"
	width="850"
	height="500"
></iframe>