"...text-generation-inference.git" did not exist on "b7ffa287f228e065c45a99684e73b862a5166fac"
Unverified Commit 4b50ecce authored by Patrick von Platen, committed by GitHub

Correct sdxl docs (#4058)

parent 99b540b0
@@ -134,19 +134,19 @@ image = refiner(prompt=prompt, num_inference_steps=n_steps, denoising_start=high
 Let's have a look at the image
-![lion_ref](https://huggingface.co/datasets/huggingface/documentation-images/blob/main/diffusers/lion_refined.png)
+| Original Image | Ensemble of Denoisers Experts |
+|---|---|
+| ![lion_base](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lion_base.png) | ![lion_ref](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lion_refined.png)
 If we would have just run the base model on the same 40 steps, the image would have been arguably less detailed (e.g. the lion eyes and nose):
-![lion_base](https://huggingface.co/datasets/huggingface/documentation-images/blob/main/diffusers/lion_base.png)
 <Tip>
 The ensemble-of-experts method works well on all available schedulers!
 </Tip>
-#### Refining the image output from fully denoised base image
+#### 2.) Refining the image output from fully denoised base image
 In standard [`StableDiffusionImg2ImgPipeline`]-fashion, the fully-denoised image generated of the base model
 can be further improved using the [refiner checkpoint](huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9).
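For readers following along, the ensemble-of-experts flow that the hunk above documents looks roughly as follows. This is a minimal sketch assuming the 0.9 checkpoints; the prompt, `n_steps`, and the `high_noise_frac` split of 0.8 are illustrative values rather than taken from the diff:

```py
from diffusers import DiffusionPipeline
import torch

# Base model handles the high-noise portion of the denoising schedule.
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

# Refiner finishes the remaining low-noise steps.
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

prompt = "A majestic lion jumping from a big stone at night"  # illustrative prompt
n_steps = 40
high_noise_frac = 0.8  # assumed split; per the tip above, any scheduler should work

# Base model stops at `denoising_end` and hands latents to the refiner.
latents = base(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_end=high_noise_frac,
    output_type="latent",
).images

# Refiner resumes at `denoising_start` and returns the final PIL image.
image = refiner(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_start=high_noise_frac,
    image=latents,
).images[0]
```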
@@ -179,6 +179,10 @@ image = pipe(prompt=prompt, output_type="latent" if use_refiner else "pil").imag
 image = refiner(prompt=prompt, image=image[None, :]).images[0]
 ```
+| Original Image | Refined Image |
+|---|---|
+| ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/init_image.png) | ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/refined_image.png) |
 ### Image-to-image
 ```py
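The hunk above only shows the last lines of the "refine the fully denoised base image" snippet it touches. Pieced together, that flow reads roughly like this; treat it as a sketch assuming `pipe` and `refiner` are the base and refiner pipelines and `use_refiner` is a plain boolean flag (the prompt is illustrative):

```py
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

use_refiner = True
prompt = "a photo of an astronaut riding a horse on mars"  # illustrative prompt

# Fully denoise with the base model; keep latents only if the refiner runs afterwards.
image = pipe(prompt=prompt, output_type="latent" if use_refiner else "pil").images[0]

# The refiner expects a batch dimension, hence `image[None, :]`.
if use_refiner:
    image = refiner(prompt=prompt, image=image[None, :]).images[0]
```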
@@ -197,10 +201,6 @@ prompt = "a photo of an astronaut riding a horse on mars"
 image = pipe(prompt, image=init_image).images[0]
 ```
-| Original Image | Refined Image |
-|---|---|
-| ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/init_image.png) | ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/refined_image.png) |
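The image-to-image hunk above likewise shows only the final call. A minimal self-contained version might look like the following; the checkpoint choice, the `load_image` helper, and the placeholder path for `init_image` are assumptions, since the diff does not show how `pipe` and `init_image` are created:

```py
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image
import torch

# Assumption: the refiner checkpoint is used here as a plain img2img pipeline.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

# `load_image` accepts a local path or a URL; replace the placeholder with your own image.
init_image = load_image("path/to/init_image.png").convert("RGB")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, image=init_image).images[0]
```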
 ### Loading single file checkpoints / original file format
 By making use of [`~diffusers.loaders.FromSingleFileMixin.from_single_file`] you can also load the
@@ -210,13 +210,13 @@ original file format into `diffusers`:
 from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
 import torch
-pipe = StableDiffusionXLPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+pipe = StableDiffusionXLPipeline.from_single_file(
+    "./sd_xl_base_0.9.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
 )
 pipe.to("cuda")
-refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
+refiner = StableDiffusionXLImg2ImgPipeline.from_single_file(
+    "./sd_xl_refiner_0.9.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
 )
 refiner.to("cuda")
 ```
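The new `from_single_file` calls assume the checkpoints already sit on disk as `./sd_xl_base_0.9.safetensors` and `./sd_xl_refiner_0.9.safetensors`. One way to obtain such a file is `huggingface_hub.hf_hub_download`; the repo id and filename below are assumptions about where the single-file weights are published:

```py
from huggingface_hub import hf_hub_download
from diffusers import StableDiffusionXLPipeline
import torch

# Download the single-file checkpoint (filename assumed to match the local path
# used in the diff above), then load it with `from_single_file`.
ckpt_path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-xl-base-0.9",
    filename="sd_xl_base_0.9.safetensors",
)

pipe = StableDiffusionXLPipeline.from_single_file(ckpt_path, torch_dtype=torch.float16)
pipe.to("cuda")
```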