Unverified commit 7fe638c5, authored by Will Berman, committed by GitHub

update paint by example docs (#2598)

parent c812d97d
@@ -136,7 +136,7 @@ def prepare_mask_and_masked_image(image, mask):
 class PaintByExamplePipeline(DiffusionPipeline):
     r"""
-    Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
+    Pipeline for image-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
     library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
@@ -144,10 +144,8 @@ class PaintByExamplePipeline(DiffusionPipeline):
     Args:
         vae ([`AutoencoderKL`]):
             Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
-        text_encoder ([`CLIPTextModel`]):
-            Frozen text-encoder. Stable Diffusion uses the text portion of
-            [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
-            the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
+        image_encoder ([`PaintByExampleImageEncoder`]):
+            Encodes the example input image. The unet is conditioned on the example image instead of a text prompt.
         tokenizer (`CLIPTokenizer`):
             Tokenizer of class
             [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
...
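The hunk header above references the module's `prepare_mask_and_masked_image(image, mask)` helper, whose body is not shown in this diff. As context for what such a helper typically does in an inpainting pipeline, here is a minimal NumPy sketch (an illustration only, not the diffusers implementation, which operates on PIL images and PyTorch tensors): normalize the image to `[-1, 1]`, binarize the mask, and zero out the region to be inpainted.

```python
import numpy as np

def prepare_mask_and_masked_image(image, mask):
    """Sketch of a mask/masked-image preparation step (assumed shape, not the
    actual diffusers code).

    image: uint8 array of shape (H, W, 3)
    mask:  uint8 array of shape (H, W); nonzero marks the region to inpaint
    """
    # Scale the image from [0, 255] to [-1, 1], the usual VAE input range.
    image = image.astype(np.float32) / 127.5 - 1.0
    # Binarize the mask to {0.0, 1.0}.
    mask = (mask > 0).astype(np.float32)
    # Zero out masked pixels; unmasked pixels pass through unchanged.
    masked_image = image * (1.0 - mask[..., None])
    return mask, masked_image
```

The masked image (not the raw image) is what the VAE encodes, so the unet only sees the unmasked surroundings plus the encoded example image as conditioning.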