renzhc / diffusers_dcu · Commits · 83f8a5ff

Unverified commit 83f8a5ff, authored Oct 20, 2022 by Patrick von Platen; committed by GitHub, Oct 20, 2022.

[Stable Diffusion] Add components function (#889)

* [Stable Diffusion] Add components function
* uP
Parent: 2a0c8235

Showing 4 changed files with 112 additions and 1 deletion (+112 / -1):

- docs/source/api/diffusion_pipeline.mdx (+3 / -0)
- docs/source/api/pipelines/stable_diffusion.mdx (+20 / -0)
- src/diffusers/pipeline_utils.py (+36 / -1)
- tests/test_pipelines.py (+53 / -0)
docs/source/api/diffusion_pipeline.mdx (view file @ 83f8a5ff)

```diff
@@ -32,6 +32,9 @@ Any pipeline object can be saved locally with [`~DiffusionPipeline.save_pretrain
 [[autodoc]] DiffusionPipeline
 	- from_pretrained
 	- save_pretrained
+	- to
+	- device
+	- components
 
 ## ImagePipelineOutput
 
 By default diffusion pipelines return an object of class
```
docs/source/api/pipelines/stable_diffusion.mdx (view file @ 83f8a5ff)

````diff
@@ -17,6 +17,26 @@ For more details about how Stable Diffusion works and how it differs from the ba
 | [pipeline_stable_diffusion_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) | *Image-to-Image Text-Guided Generation* | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) | [🤗 Diffuse the Rest](https://huggingface.co/spaces/huggingface/diffuse-the-rest)
 | [pipeline_stable_diffusion_inpaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | **Experimental** – *Text-Guided Image Inpainting* | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb) | Coming soon
+
+## Tips
+
+If you want to use all possible use cases in a single `DiffusionPipeline` you can either:
+- Make use of the [Stable Diffusion Mega Pipeline](https://github.com/huggingface/diffusers/tree/main/examples/community#stable-diffusion-mega) or
+- Make use of the `components` functionality to instantiate all components in the most memory-efficient way:
+
+```python
+>>> from diffusers import (
+...     StableDiffusionPipeline,
+...     StableDiffusionImg2ImgPipeline,
+...     StableDiffusionInpaintPipeline,
+... )
+
+>>> img2text = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
+>>> img2img = StableDiffusionImg2ImgPipeline(**img2text.components)
+>>> inpaint = StableDiffusionInpaintPipeline(**img2text.components)
+
+>>> # now you can use img2text(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline
+```
 
 ## StableDiffusionPipelineOutput
 
 [[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
````
src/diffusers/pipeline_utils.py (view file @ 83f8a5ff)

````diff
@@ -18,7 +18,7 @@ import importlib
 import inspect
 import os
 from dataclasses import dataclass
-from typing import List, Optional, Union
+from typing import Any, Dict, List, Optional, Union
 
 import numpy as np
 import torch
@@ -561,6 +561,41 @@ class DiffusionPipeline(ConfigMixin):
         model = pipeline_class(**init_kwargs)
 
         return model
 
+    @property
+    def components(self) -> Dict[str, Any]:
+        r"""
+        The `self.components` property can be useful to run different pipelines with the same weights and
+        configurations without having to re-allocate memory.
+
+        Examples:
+
+        ```py
+        >>> from diffusers import (
+        ...     StableDiffusionPipeline,
+        ...     StableDiffusionImg2ImgPipeline,
+        ...     StableDiffusionInpaintPipeline,
+        ... )
+
+        >>> img2text = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
+        >>> img2img = StableDiffusionImg2ImgPipeline(**img2text.components)
+        >>> inpaint = StableDiffusionInpaintPipeline(**img2text.components)
+        ```
+
+        Returns:
+            A dictionary containing all the modules needed to initialize the pipeline.
+        """
+        components = {k: getattr(self, k) for k in self.config.keys() if not k.startswith("_")}
+        expected_modules = set(inspect.signature(self.__init__).parameters.keys()) - set(["self"])
+
+        if set(components.keys()) != expected_modules:
+            raise ValueError(
+                f"{self} has been incorrectly initialized or {self.__class__} is incorrectly implemented. Expected"
+                f" {expected_modules} to be defined, but {components} are defined."
+            )
+
+        return components
+
     @staticmethod
     def numpy_to_pil(images):
         """
````
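The core of the new `components` property is a dict comprehension over the pipeline's public config keys, cross-checked against the `__init__` signature so that `SomePipeline(**other.components)` is guaranteed to work. As a rough, self-contained illustration of that mechanism (`TinyPipeline` and its module names are made-up stand-ins, not diffusers classes):

```python
import inspect


class TinyPipeline:
    """Toy stand-in for DiffusionPipeline (hypothetical, not a diffusers
    class) illustrating how the `components` property is built."""

    def __init__(self, unet, scheduler, vae):
        # In diffusers, register_modules() records module names in
        # self.config; here a plain dict mimics that. Keys starting
        # with "_" are private config entries and are skipped.
        self.config = {
            "unet": unet,
            "scheduler": scheduler,
            "vae": vae,
            "_class_name": "TinyPipeline",
        }
        self.unet = unet
        self.scheduler = scheduler
        self.vae = vae

    @property
    def components(self):
        # Collect all public config keys as attribute lookups ...
        components = {k: getattr(self, k) for k in self.config.keys() if not k.startswith("_")}
        # ... and verify they exactly match __init__'s parameters (minus "self"),
        # so **components can always re-instantiate a compatible pipeline.
        expected = set(inspect.signature(self.__init__).parameters.keys()) - {"self"}
        if set(components.keys()) != expected:
            raise ValueError(f"Expected {expected} to be defined, but {set(components.keys())} are defined.")
        return components


pipe = TinyPipeline(unet="unet-weights", scheduler="pndm", vae="vae-weights")
clone = TinyPipeline(**pipe.components)  # shares the very same objects: no re-allocation
print(clone.unet is pipe.unet)  # -> True
```

This also shows why the `ValueError` check matters: if a subclass took an extra `__init__` argument without registering it in the config, `**components` would silently drop it, so the property raises instead.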
tests/test_pipelines.py (view file @ 83f8a5ff)

```diff
@@ -1391,6 +1391,59 @@ class PipelineFastTests(unittest.TestCase):
         assert image.shape == (1, 128, 128, 3)
 
+    def test_components(self):
+        """Test that components property works correctly"""
+        unet = self.dummy_cond_unet
+        scheduler = PNDMScheduler(skip_prk_steps=True)
+        vae = self.dummy_vae
+        bert = self.dummy_text_encoder
+        tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
+
+        image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0]
+        init_image = Image.fromarray(np.uint8(image)).convert("RGB")
+        mask_image = Image.fromarray(np.uint8(image + 4)).convert("RGB").resize((128, 128))
+
+        # make sure here that pndm scheduler skips prk
+        inpaint = StableDiffusionInpaintPipeline(
+            unet=unet,
+            scheduler=scheduler,
+            vae=vae,
+            text_encoder=bert,
+            tokenizer=tokenizer,
+            safety_checker=self.dummy_safety_checker,
+            feature_extractor=self.dummy_extractor,
+        )
+        img2img = StableDiffusionImg2ImgPipeline(**inpaint.components)
+        text2img = StableDiffusionPipeline(**inpaint.components)
+
+        prompt = "A painting of a squirrel eating a burger"
+
+        generator = torch.Generator(device=torch_device).manual_seed(0)
+
+        image_inpaint = inpaint(
+            [prompt],
+            generator=generator,
+            num_inference_steps=2,
+            output_type="np",
+            init_image=init_image,
+            mask_image=mask_image,
+        ).images
+        image_img2img = img2img(
+            [prompt],
+            generator=generator,
+            num_inference_steps=2,
+            output_type="np",
+            init_image=init_image,
+        ).images
+        image_text2img = text2img(
+            [prompt],
+            generator=generator,
+            num_inference_steps=2,
+            output_type="np",
+        ).images
+
+        assert image_inpaint.shape == (1, 32, 32, 3)
+        assert image_img2img.shape == (1, 32, 32, 3)
+        assert image_text2img.shape == (1, 128, 128, 3)
+
 
 class PipelineTesterMixin(unittest.TestCase):
     def tearDown(self):
```