Unverified commit 6a6dfe1c, authored Jul 26, 2023 by Patrick von Platen, committed by GitHub on Jul 26, 2023. Parent: b83bdce4.

Rename (#4294)

* up
* Apply suggestions from code review
* Apply suggestions from code review
* up
Showing 9 changed files with 31 additions and 31 deletions (+31 −31):
* docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_xl.md (+17 −17)
* examples/controlnet/README_sdxl.md (+2 −2)
* examples/controlnet/train_controlnet_sdxl.py (+1 −1)
* examples/dreambooth/README_sdxl.md (+3 −3)
* examples/dreambooth/train_dreambooth_lora_sdxl.py (+1 −1)
* examples/instruct_pix2pix/README_sdxl.md (+4 −4)
* src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py (+1 −1)
* src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py (+1 −1)
* src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py (+1 −1)
docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_xl.md
@@ -26,8 +26,8 @@ The abstract of the paper is the following:
 ### Available checkpoints:
 
-- *Text-to-Image (1024x1024 resolution)*: [stabilityai/stable-diffusion-xl-base-0.9](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9) with [`StableDiffusionXLPipeline`]
-- *Image-to-Image / Refiner (1024x1024 resolution)*: [stabilityai/stable-diffusion-xl-refiner-0.9](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9) with [`StableDiffusionXLImg2ImgPipeline`]
+- *Text-to-Image (1024x1024 resolution)*: [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) with [`StableDiffusionXLPipeline`]
+- *Image-to-Image / Refiner (1024x1024 resolution)*: [stabilityai/stable-diffusion-xl-refiner-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0) with [`StableDiffusionXLImg2ImgPipeline`]
 
 ## Usage Example
@@ -50,7 +50,7 @@ from diffusers import StableDiffusionXLPipeline
 import torch
 
 pipe = StableDiffusionXLPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
 )
 pipe.to("cuda")
@@ -68,7 +68,7 @@ from diffusers import StableDiffusionXLImg2ImgPipeline
 from diffusers.utils import load_image
 
 pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
 )
 pipe = pipe.to("cuda")
 
 url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"
@@ -88,7 +88,7 @@ from diffusers import StableDiffusionXLInpaintPipeline
 from diffusers.utils import load_image
 
 pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
 )
 pipe.to("cuda")
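The diff view truncates the rest of this inpainting example before the actual pipeline call. As a rough sketch of the full flow after the rename (the prompt and the image/mask URLs are illustrative stand-ins, not part of the commit):

```python
from diffusers import StableDiffusionXLInpaintPipeline
from diffusers.utils import load_image
import torch

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipe.to("cuda")

# Illustrative image/mask pair; any RGB image plus a white-on-black mask works.
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-to-the-opera.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-to-the-opera_mask.png"

init_image = load_image(img_url).convert("RGB")
mask_image = load_image(mask_url).convert("RGB")

# Only the masked region is regenerated; strength controls how much noise is added there.
prompt = "A majestic tiger sitting on a bench"
image = pipe(
    prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80
).images[0]
```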
@@ -104,8 +104,8 @@ image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inferen
 ### Refining the image output
 
-In addition to the [base model checkpoint](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9),
-StableDiffusion-XL also includes a [refiner checkpoint](huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9)
+In addition to the [base model checkpoint](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0),
+StableDiffusion-XL also includes a [refiner checkpoint](huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0)
 that is specialized in denoising low-noise stage images to generate images of improved high-frequency quality.
 This refiner checkpoint can be used as a "second-step" pipeline after having run the base checkpoint to improve
 image quality.
@@ -149,12 +149,12 @@ from diffusers import DiffusionPipeline
 import torch
 
 base = DiffusionPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
 )
 pipe.to("cuda")
 
 refiner = DiffusionPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-refiner-0.9",
+    "stabilityai/stable-diffusion-xl-refiner-1.0",
     text_encoder_2=base.text_encoder_2,
     vae=base.vae,
     torch_dtype=torch.float16,
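The hunk cuts off before the handoff between base and refiner. A minimal sketch of the ensemble-of-experts flow as it would run after this commit (the 0.8 split point and the prompt are illustrative choices, not part of the diff):

```python
from diffusers import DiffusionPipeline
import torch

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
base.to("cuda")

refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
refiner.to("cuda")

prompt = "A majestic lion jumping from a big stone at night"

# The base model denoises the first 80% of the schedule and returns latents, not pixels.
image = base(
    prompt=prompt, num_inference_steps=40, denoising_end=0.8, output_type="latent"
).images
# The refiner picks up at the same point and finishes the remaining 20%.
image = refiner(
    prompt=prompt, num_inference_steps=40, denoising_start=0.8, image=image
).images[0]
```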
@@ -219,7 +219,7 @@ The ensemble-of-experts method works well on all available schedulers!
 #### 2.) Refining the image output from fully denoised base image
 
 In standard [`StableDiffusionImg2ImgPipeline`]-fashion, the fully-denoised image generated of the base model
-can be further improved using the [refiner checkpoint](huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9).
+can be further improved using the [refiner checkpoint](huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0).
 
 For this, you simply run the refiner as a normal image-to-image pipeline after the "base" text-to-image
 pipeline. You can leave the outputs of the base model in latent space.
@@ -229,12 +229,12 @@ from diffusers import DiffusionPipeline
 import torch
 
 pipe = DiffusionPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
 )
 pipe.to("cuda")
 
 refiner = DiffusionPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-refiner-0.9",
+    "stabilityai/stable-diffusion-xl-refiner-1.0",
     text_encoder_2=pipe.text_encoder_2,
     vae=pipe.vae,
     torch_dtype=torch.float16,
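Again the diff stops short of the lines that actually chain the two pipelines. A sketch of this two-step variant, leaving the base output in latent space (the prompt is illustrative):

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipe.to("cuda")

refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=pipe.text_encoder_2,
    vae=pipe.vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
refiner.to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# Fully denoise with the base model but skip VAE decoding.
image = pipe(prompt=prompt, output_type="latent").images[0]
# Re-add a batch dimension and run a normal image-to-image refinement pass.
image = refiner(prompt=prompt, image=image[None, :]).images[0]
```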
@@ -267,12 +267,12 @@ from diffusers import StableDiffusionXLInpaintPipeline
 from diffusers.utils import load_image
 
 pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
 )
 pipe.to("cuda")
 
 refiner = StableDiffusionXLInpaintPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-refiner-0.9",
+    "stabilityai/stable-diffusion-xl-refiner-1.0",
     text_encoder_2=pipe.text_encoder_2,
     vae=pipe.vae,
     torch_dtype=torch.float16,
@@ -321,12 +321,12 @@ from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipelin
 import torch
 
 pipe = StableDiffusionXLPipeline.from_single_file(
-    "./sd_xl_base_0.9.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+    "./sd_xl_base_1.0.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
 )
 pipe.to("cuda")
 
 refiner = StableDiffusionXLImg2ImgPipeline.from_single_file(
-    "./sd_xl_refiner_0.9.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
+    "./sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
 )
 refiner.to("cuda")
 ```
@@ -399,7 +399,7 @@ from diffusers import StableDiffusionXLPipeline
 import torch
 
 pipe = StableDiffusionXLPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
 )
 pipe.to("cuda")
examples/controlnet/README_sdxl.md
@@ -61,7 +61,7 @@ wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/ma
 Then run `huggingface-cli login` to log into your Hugging Face account. This is needed to be able to push the trained ControlNet parameters to Hugging Face Hub.
 
 ```bash
-export MODEL_DIR="stabilityai/stable-diffusion-xl-base-0.9"
+export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0"
 export OUTPUT_DIR="path to save model"
 
 accelerate launch train_controlnet_sdxl.py \
@@ -98,7 +98,7 @@ from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, UniP
 from diffusers.utils import load_image
 import torch
 
-base_model_path = "stabilityai/stable-diffusion-xl-base-0.9"
+base_model_path = "stabilityai/stable-diffusion-xl-base-1.0"
 controlnet_path = "path to controlnet"
 
 controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
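The diff shows only the opening lines of the README's inference snippet. A sketch of how the renamed base model would be used end-to-end here; the conditioning-image path, prompt, and output path are placeholders:

```python
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
import torch

base_model_path = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet_path = "path to controlnet"  # the OUTPUT_DIR used during training

controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet, torch_dtype=torch.float16
)

# Faster scheduler plus CPU offloading to keep VRAM usage down.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

control_image = load_image("./conditioning_image.png")  # placeholder path
prompt = "pale golden rod circle with old lace background"  # illustrative prompt

generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0]
image.save("./output.png")
```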
examples/controlnet/train_controlnet_sdxl.py
@@ -231,7 +231,7 @@ These are controlnet weights trained on {base_model} with new type of conditioni
 ## License
 
-[SDXL 0.9 Research License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9/blob/main/LICENSE.md)
+[SDXL 1.0 License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
 """
     with open(os.path.join(repo_folder, "README.md"), "w") as f:
         f.write(yaml + model_card)
examples/dreambooth/README_sdxl.md
@@ -76,7 +76,7 @@ This will also allow us to push the trained LoRA parameters to the Hugging Face
 Now, we can launch training using:
 
 ```bash
-export MODEL_NAME="stabilityai/stable-diffusion-xl-base-0.9"
+export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
 export INSTANCE_DIR="dog"
 export OUTPUT_DIR="lora-trained-xl"
@@ -127,7 +127,7 @@ image = pipe("A picture of a sks dog in a bucket", num_inference_steps=25).image
 image.save("sks_dog.png")
 ```
 
-We can further refine the outputs with the [Refiner](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9):
+We can further refine the outputs with the [Refiner](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0):
 
 ```python
 from huggingface_hub.repocard import RepoCard
@@ -145,7 +145,7 @@ pipe.load_lora_weights(lora_model_id)
 # Load the refiner.
 refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
+    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
 )
 refiner.to("cuda")
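The hunk references `pipe.load_lora_weights(lora_model_id)` from context the diff does not show. A sketch of the full base-plus-refiner flow around it; the LoRA repo id, seed, and output path are placeholders:

```python
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline, StableDiffusionXLImg2ImgPipeline
import torch

lora_model_id = "<your-lora-sdxl-repo>"  # placeholder for the trained LoRA repository
card = RepoCard.load(lora_model_id)
base_model_id = card.data.to_dict()["base_model"]

# Load the base pipeline and attach the trained LoRA parameters.
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.load_lora_weights(lora_model_id)

# Load the refiner (the checkpoint this commit renames to 1.0).
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
)
refiner.to("cuda")

prompt = "A picture of a sks dog in a bucket"
generator = torch.Generator("cuda").manual_seed(0)

# Generate with the base model in latent space, then refine as image-to-image.
image = pipe(prompt=prompt, output_type="latent", generator=generator).images[0]
image = refiner(prompt=prompt, image=image[None, :], generator=generator).images[0]
image.save("refined_sks_dog.png")
```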
examples/dreambooth/train_dreambooth_lora_sdxl.py
@@ -97,7 +97,7 @@ Special VAE used for training: {vae_path}.
 ## License
 
-[SDXL 0.9 Research License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9/blob/main/LICENSE.md)
+[SDXL 1.0 License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
 """
     with open(os.path.join(repo_folder, "README.md"), "w") as f:
         f.write(yaml + model_card)
examples/instruct_pix2pix/README_sdxl.md
@@ -15,7 +15,7 @@ training procedure while being faithful to the [original implementation](https:/
 Refer to the original InstructPix2Pix training example for installing the dependencies.
 
-You will also need to get access of SDXL by filling the [form](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9).
+You will also need to get access of SDXL by filling the [form](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).
 
 ### Toy example
@@ -26,7 +26,7 @@ Configure environment variables such as the dataset identifier and the Stable Di
 checkpoint:
 
 ```bash
-export MODEL_NAME="stabilityai/stable-diffusion-xl-base-0.9"
+export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
 export DATASET_ID="fusing/instructpix2pix-1000-samples"
 ```
@@ -51,7 +51,7 @@ with Weights and Biases. You can enable this feature with `report_to="wandb"`:
 ```bash
 python train_instruct_pix2pix_xl.py \
-    --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-0.9 \
+    --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \
     --dataset_name=$DATASET_ID \
     --use_ema \
     --enable_xformers_memory_efficient_attention \
@@ -80,7 +80,7 @@ for running distributed training with `accelerate`. Here is an example command:
 ```bash
 accelerate launch --mixed_precision="fp16" --multi_gpu train_instruct_pix2pix.py \
-    --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-0.9 \
+    --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \
     --dataset_name=$DATASET_ID \
     --use_ema \
     --enable_xformers_memory_efficient_attention \
src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py
@@ -50,7 +50,7 @@ EXAMPLE_DOC_STRING = """
         >>> from diffusers import StableDiffusionXLPipeline
 
         >>> pipe = StableDiffusionXLPipeline.from_pretrained(
-        ...     "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16
+        ...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
         ... )
         >>> pipe = pipe.to("cuda")
src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py
@@ -52,7 +52,7 @@ EXAMPLE_DOC_STRING = """
         >>> from diffusers.utils import load_image
 
         >>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
-        ...     "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16
+        ...     "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
         ... )
         >>> pipe = pipe.to("cuda")
         >>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"
src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py
@@ -47,7 +47,7 @@ EXAMPLE_DOC_STRING = """
         >>> from diffusers.utils import load_image
 
         >>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
-        ...     "stabilityai/stable-diffusion-xl-base-0.9",
+        ...     "stabilityai/stable-diffusion-xl-base-1.0",
         ...     torch_dtype=torch.float16,
         ...     variant="fp16",
         ...     use_safetensors=True,