Unverified Commit 83ca21f5 authored by Sayak Paul, committed by GitHub

fix: minor things in the SDXL docs. (#4070)

parent f3802eb8
@@ -85,8 +85,8 @@ faster. The drawback is that one cannot really inspect the output of the base mo
To use the base model and refiner as an ensemble of expert denoisers, make sure to define the fraction
of timesteps which should be run through the high-noise denoising stage (*i.e.* the base model) and the low-noise
-denoising stage (*i.e.* the refiner model) respectively. This fraction should be set as the [`~StableDiffusionXLPipeline.__call__.denoising_end`] of the base model
-and as the [`~StableDiffusionXLImg2ImgPipeline.__call__.denoising_start`] of the refiner model.
+denoising stage (*i.e.* the refiner model) respectively. This fraction should be set as the [`denoising_end`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline.__call__.denoising_end) of the base model
+and as the [`denoising_start`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLImg2ImgPipeline.__call__.denoising_start) of the refiner model.
Let's look at an example.
First, we import the two pipelines. Since the text encoders and variational autoencoder are the same
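For reference, here's a minimal sketch of the ensemble-of-expert-denoisers flow this hunk documents (the 0.8 split and the prompt are illustrative choices, not values from the diff):

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Load the base and refiner, sharing the second text encoder and the VAE.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "A majestic lion jumping from a big stone at night"

# Run the first 80% of the denoising steps on the base model and hand the
# latents (not a decoded image) to the refiner for the remaining 20%.
latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
image = refiner(prompt=prompt, denoising_start=0.8, image=latents).images[0]
```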
@@ -246,7 +246,7 @@ You can speed up inference by making use of `torch.compile`. This should give yo
+ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True)
```
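For context, the `+` lines above slot into a setup roughly like the following sketch (model IDs and dtype are illustrative; `torch.compile` requires `torch` >= 2.0):

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

# Compile both UNets; the first call triggers compilation, subsequent calls are faster.
base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True)
refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True)
```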
-### Running with `torch` \< 2.0
+### Running with `torch < 2.0`
**Note** that if you want to run Stable Diffusion XL with `torch` < 2.0, please make sure to enable xformers
attention:
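As a sketch, enabling xformers attention on an already-loaded pipeline looks like this (assuming the `xformers` package is installed):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Memory-efficient attention via xformers, useful on torch < 2.0.
pipe.enable_xformers_memory_efficient_attention()
```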