renzhc / diffusers_dcu · Commits · 4b50ecce
"...text-generation-inference.git" did not exist on "b7ffa287f228e065c45a99684e73b862a5166fac"
Unverified commit 4b50ecce, authored Jul 12, 2023 by Patrick von Platen, committed by GitHub on Jul 12, 2023

Correct sdxl docs (#4058)
parent 99b540b0

Showing 1 changed file with 12 additions and 12 deletions (+12, -12)
docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_xl.mdx
@@ -134,19 +134,19 @@ image = refiner(prompt=prompt, num_inference_steps=n_steps, denoising_start=high
Let's have a look at the image

| Original Image | Ensemble of Denoisers Experts |
|---|---|
|  |  |

If we had just run the base model on the same 40 steps, the image would arguably have been less detailed (e.g. the lion's eyes and nose):

<Tip>

The ensemble-of-experts method works well on all available schedulers!

</Tip>

-#### Refining the image output from fully denoised base image
+#### 2.) Refining the image output from fully denoised base image

In standard [`StableDiffusionImg2ImgPipeline`]-fashion, the fully-denoised image generated by the base model
can be further improved using the [refiner checkpoint](huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9).
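To illustrate the tip above that the ensemble-of-experts method is scheduler-agnostic, here is a minimal sketch of that flow with a swapped-in scheduler. The 0.9 checkpoint ids come from this diff; the 40-step count, the 0.8 hand-off fraction, and the choice of `DPMSolverMultistepScheduler` are illustrative assumptions only, not part of the commit.

```py
# Sketch only: ensemble of expert denoisers with a non-default scheduler.
import torch
from diffusers import (
    DPMSolverMultistepScheduler,
    StableDiffusionXLImg2ImgPipeline,
    StableDiffusionXLPipeline,
)

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)

# Any available scheduler can be dropped in on both pipelines.
base.scheduler = DPMSolverMultistepScheduler.from_config(base.scheduler.config)
refiner.scheduler = DPMSolverMultistepScheduler.from_config(refiner.scheduler.config)
base.to("cuda")
refiner.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"  # prompt borrowed from a later hunk
n_steps = 40           # assumed step count
high_noise_frac = 0.8  # assumed hand-off point between base and refiner

# The base model denoises the first 80% of the steps and returns latents ...
image = base(
    prompt=prompt, num_inference_steps=n_steps, denoising_end=high_noise_frac, output_type="latent"
).images
# ... which the refiner finishes over the remaining 20%.
image = refiner(
    prompt=prompt, num_inference_steps=n_steps, denoising_start=high_noise_frac, image=image
).images[0]
```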
@@ -179,6 +179,10 @@ image = pipe(prompt=prompt, output_type="latent" if use_refiner else "pil").imag
image = refiner(prompt=prompt, image=image[None, :]).images[0]
```

+| Original Image | Refined Image |
+|---|---|
+|  |  |
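The table added above compares the base output against the refined one. A minimal end-to-end sketch of that "refine a fully denoised base output" path, assuming the 0.9 checkpoints referenced elsewhere in this diff, could look like this (the two pipeline calls are the ones visible in this hunk and its header):

```py
# Sketch only: refine a fully denoised base image, img2img-style.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline, StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

use_refiner = True
prompt = "a photo of an astronaut riding a horse on mars"  # prompt borrowed from the next hunk

# Keep the base output in latent space when a refiner pass follows (see the hunk header above).
image = pipe(prompt=prompt, output_type="latent" if use_refiner else "pil").images[0]

if use_refiner:
    # image[None, :] restores the batch dimension the refiner expects.
    image = refiner(prompt=prompt, image=image[None, :]).images[0]
```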

### Image-to-image

```py
@@ -197,10 +201,6 @@ prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, image=init_image).images[0]
```

-| Original Image | Refined Image |
-|---|---|
-|  |  |
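The image-to-image example above is only partially visible in this diff. A hypothetical completion, assuming the refiner 0.9 checkpoint and using `init.png` as a placeholder input image (neither is confirmed by the diff), might read:

```py
# Sketch only: SDXL image-to-image with a placeholder init image.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

init_image = load_image("init.png").convert("RGB")  # placeholder path, not from the diff
prompt = "a photo of an astronaut riding a horse on mars"

image = pipe(prompt, image=init_image).images[0]
```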

### Loading single file checkpoints / original file format

By making use of [`~diffusers.loaders.FromSingleFileMixin.from_single_file`] you can also load the
@@ -210,13 +210,13 @@ original file format into `diffusers`:
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import torch

-pipe = StableDiffusionXLPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
-)
+pipe = StableDiffusionXLPipeline.from_single_file(
+    "./sd_xl_base_0.9.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+)
pipe.to("cuda")

-refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
-)
+refiner = StableDiffusionXLImg2ImgPipeline.from_single_file(
+    "./sd_xl_refiner_0.9.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
+)
refiner.to("cuda")
```
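If the local `./sd_xl_*.safetensors` files referenced by the new lines are not yet on disk, one possible way to fetch them is via `huggingface_hub`, assuming the single-file checkpoints are published under those filenames in the 0.9 repositories named on the old lines:

```py
# Sketch only: download the single-file checkpoints, then pass the returned
# local paths to from_single_file() instead of the hard-coded "./..." paths.
from huggingface_hub import hf_hub_download

base_ckpt = hf_hub_download("stabilityai/stable-diffusion-xl-base-0.9", "sd_xl_base_0.9.safetensors")
refiner_ckpt = hf_hub_download("stabilityai/stable-diffusion-xl-refiner-0.9", "sd_xl_refiner_0.9.safetensors")
```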