<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Text-guided image-to-image generation

[[open-in-colab]]

The [`StableDiffusionImg2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images.

Before you begin, make sure you have all the necessary libraries installed:

```py
# uncomment to install the necessary libraries in Colab
#!pip install diffusers transformers ftfy accelerate
```

Get started by creating a [`StableDiffusionImg2ImgPipeline`] with a pretrained Stable Diffusion model like [`nitrosocke/Ghibli-Diffusion`](https://huggingface.co/nitrosocke/Ghibli-Diffusion).

```python
import torch
import requests
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("nitrosocke/Ghibli-Diffusion", torch_dtype=torch.float16).to(
    device
)
```

Download and preprocess an initial image so you can pass it to the pipeline:

```python
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image.thumbnail((768, 768))  # resize in place to fit within 768x768, preserving aspect ratio
init_image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/image_2_image_using_diffusers_cell_8_output_0.jpeg"/>
</div>

<Tip>

💡 `strength` is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input.

</Tip>
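
To build intuition for this trade-off, here is a minimal sketch of the relationship, assuming the pipeline runs roughly `num_inference_steps * strength` denoising steps on a noised copy of your image (an approximation for intuition, not the library's internal code):

```python
# Rough intuition for `strength` (assumed behavior, not pipeline internals):
# higher values add more noise and run more denoising steps, so the output
# drifts further from the input image.
num_inference_steps = 50

for strength in (0.3, 0.75, 1.0):
    steps_run = min(int(num_inference_steps * strength), num_inference_steps)
    print(f"strength={strength:.2f} -> ~{steps_run}/{num_inference_steps} denoising steps")
```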

Define the prompt (for this checkpoint finetuned on Ghibli-style art, you need to prefix the prompt with the `ghibli style` tokens) and run the pipeline:

```python
prompt = "ghibli style, a fantasy landscape with castles"
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ghibli-castles.png"/>
</div>

You can also try experimenting with a different scheduler to see how that affects the output:

```python
from diffusers import LMSDiscreteScheduler

lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = lms
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lms-ghibli.png"/>
</div>

Check out the Space below, and try generating images with different values for `strength`. You'll notice that using lower values for `strength` produces images that are more similar to the original image.

Feel free to also switch the scheduler to the [`LMSDiscreteScheduler`] and see how that affects the output.
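
If you'd rather experiment locally instead, the sketch below sweeps a few `strength` values and pastes the results into one strip for a side-by-side comparison. It reuses `pipe`, `prompt`, `init_image`, and `device` from the snippets above; re-seeding the generator per run means only `strength` varies between outputs:

```python
from PIL import Image

# Generate one image per strength value, re-seeding so only `strength` varies.
images = []
for strength in (0.3, 0.5, 0.75):
    generator = torch.Generator(device=device).manual_seed(1024)
    result = pipe(
        prompt=prompt,
        image=init_image,
        strength=strength,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    images.append(result)

# Paste the outputs side by side into a single strip for comparison.
strip = Image.new("RGB", (sum(im.width for im in images), max(im.height for im in images)))
x = 0
for im in images:
    strip.paste(im, (x, 0))
    x += im.width
strip
```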

<iframe
	src="https://stevhliu-ghibli-img2img.hf.space"
	frameborder="0"
	width="850"
	height="500"
></iframe>