<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Stable unCLIP

Stable unCLIP checkpoints are fine-tuned from [Stable Diffusion 2.1](./stable_diffusion/stable_diffusion_2) checkpoints to condition on CLIP image embeddings.
Stable unCLIP still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used
for text-guided image variation. When combined with an unCLIP prior, it can also be used for full text-to-image generation.

The abstract from the [unCLIP paper](https://arxiv.org/abs/2204.06125) is:

*Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.*

## Tips

Stable unCLIP takes `noise_level` as input during inference, which determines how much noise is added
to the image embeddings. A higher `noise_level` increases variation in the final un-noised images. By default,
we do not add any additional noise to the image embeddings (`noise_level = 0`).
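
For example, a higher `noise_level` can be passed at call time (a minimal sketch; the prompt and value are only illustrative, and `pipe` is assumed to be loaded as shown in the sections below):

```python
# Assumes `pipe` is a Stable unCLIP pipeline constructed as in the
# examples below; the prompt and noise_level value are illustrative.
# noise_level=0 (the default) follows the conditioning most faithfully;
# larger values trade fidelity for variation.
images = pipe(prompt="a photo of a forest at dawn", noise_level=100).images
images[0].save("forest.png")
```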

### Text-to-Image Generation
Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain's open-source DALL-E 2 replication, [Karlo](https://huggingface.co/kakaobrain/karlo-v1-alpha):

```python
import torch
from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline
from diffusers.models import PriorTransformer
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

# Load the Karlo prior, which generates a CLIP image embedding from a text caption
prior_model_id = "kakaobrain/karlo-v1-alpha"
data_type = torch.float16
prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type)

# The Karlo prior conditions on CLIP ViT-L/14 text embeddings
prior_text_model_id = "openai/clip-vit-large-patch14"
prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id)
prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type)
prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler")
prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config)

stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small"

# Plug the prior into the Stable unCLIP decoder for full text-to-image generation
pipe = StableUnCLIPPipeline.from_pretrained(
    stable_unclip_model_id,
    torch_dtype=data_type,
    variant="fp16",
    prior_tokenizer=prior_tokenizer,
    prior_text_encoder=prior_text_model,
    prior=prior,
    prior_scheduler=prior_scheduler,
)

pipe = pipe.to("cuda")
wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular"

images = pipe(prompt=wave_prompt).images
images[0].save("waves.png")
```
<Tip warning={true}>

For text-to-image generation, we use `stabilityai/stable-diffusion-2-1-unclip-small` because it was trained on CLIP ViT-L/14 embeddings, the same as the Karlo model prior. [stabilityai/stable-diffusion-2-1-unclip](https://hf.co/stabilityai/stable-diffusion-2-1-unclip) was trained on OpenCLIP ViT-H, so we don't recommend using it here.

</Tip>
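
Both pipelines also expose the usual Diffusers memory optimizations listed in the API reference below. A minimal sketch for memory-constrained GPUs (xFormers requires the `xformers` package to be installed):

```python
# Optional memory optimizations; these methods are listed in the
# pipeline API reference below.
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()
pipe.enable_xformers_memory_efficient_attention()
```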

### Text-Guided Image-to-Image Variation

```python
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")

# Download an example image to generate variations of
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
init_image = load_image(url)

images = pipe(init_image).images
images[0].save("variation_image.png")
```

Optionally, you can also pass a prompt to `pipe` such as:

```python
prompt = "A fantasy landscape, trending on artstation"

images = pipe(init_image, prompt=prompt).images
images[0].save("variation_image_two.png")
```
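
As noted in the Tips above, you can also pass `noise_level` at call time to increase variation in the outputs (a minimal sketch; the value is illustrative):

```python
# A higher noise_level adds more noise to the image embeddings,
# yielding more varied outputs (the default is 0).
images = pipe(init_image, prompt=prompt, noise_level=200).images
images[0].save("variation_image_noisy.png")
```
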
## StableUnCLIPPipeline

[[autodoc]] StableUnCLIPPipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_vae_slicing
	- disable_vae_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention


## StableUnCLIPImg2ImgPipeline

[[autodoc]] StableUnCLIPImg2ImgPipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_vae_slicing
	- disable_vae_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention

## ImagePipelineOutput

[[autodoc]] pipelines.ImagePipelineOutput