Unverified commit f7cc9adc authored by Steven Liu, committed by GitHub

[docs] Zero SNR (#3776)

* add zero snr doc

* fix image link

* apply feedback

* separate page
parent 59aefe9e
@@ -50,6 +50,8 @@
     title: Distributed inference with multiple GPUs
   - local: using-diffusers/reusing_seeds
     title: Improve image quality with deterministic generation
+  - local: using-diffusers/control_brightness
+    title: Control image brightness
   - local: using-diffusers/reproducibility
     title: Create reproducible pipelines
   - local: using-diffusers/custom_pipeline_examples
@@ -101,7 +101,7 @@ Continue fine-tuning a checkpoint with [`train_text_to_image.py`](https://github
 and `--prediction_type="v_prediction"`.
 - (3) change the sampler to always start from the last timestep;
 ```py
-pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_scaling="trailing")
+pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
 ```
 - (4) rescale classifier-free guidance to prevent over-exposure.
 ```py
@@ -118,7 +118,7 @@ from diffusers import DiffusionPipeline, DDIMScheduler
 pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16)
 pipe.scheduler = DDIMScheduler.from_config(
-    pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_scaling="trailing"
+    pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
 )
 pipe.to("cuda")
@@ -59,7 +59,7 @@ Continue fine-tuning a checkpoint with [`train_text_to_image.py`](https://github
 and `--prediction_type="v_prediction"`.
 - (3) change the sampler to always start from the last timestep;
 ```py
-pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_scaling="trailing")
+pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
 ```
 - (4) rescale classifier-free guidance to prevent over-exposure.
 ```py
@@ -76,7 +76,7 @@ from diffusers import DiffusionPipeline, DDIMScheduler
 pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16)
 pipe.scheduler = DDIMScheduler.from_config(
-    pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_scaling="trailing"
+    pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
 )
 pipe.to("cuda")
# Control image brightness

The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark as explained in the [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) paper. The solutions proposed in the paper are currently implemented in the [`DDIMScheduler`] which you can use to improve the lighting in your images.

<Tip>

💡 Take a look at the paper linked above for more details about the proposed solutions!

</Tip>

One of the solutions is to train a model with *v prediction* and *v loss*. Add the following flag to the [`train_text_to_image.py`](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) or [`train_text_to_image_lora.py`](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) scripts to enable `v_prediction`:
```bash
--prediction_type="v_prediction"
```
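In context, a full invocation might look like the sketch below. This is an illustration only: the base model, dataset, and output path are placeholders, and the non-essential flags are typical values rather than recommendations.

```shell
# Hypothetical fine-tuning invocation; model, dataset, and paths are placeholders
accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --dataset_name="my-dataset" \
  --prediction_type="v_prediction" \
  --resolution=512 \
  --train_batch_size=1 \
  --max_train_steps=15000 \
  --output_dir="sd-v-prediction-model"
```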
For example, let's use the [`ptx0/pseudo-journey-v2`](https://huggingface.co/ptx0/pseudo-journey-v2) checkpoint, which has been fine-tuned with `v_prediction`.

Next, configure the following parameters in the [`DDIMScheduler`]:

1. `rescale_betas_zero_snr=True`, rescales the noise schedule to zero terminal signal-to-noise ratio (SNR)
2. `timestep_spacing="trailing"`, starts sampling from the last timestep
```py
>>> from diffusers import DiffusionPipeline, DDIMScheduler
>>> pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2")
# switch the scheduler in the pipeline to use the DDIMScheduler
>>> pipeline.scheduler = DDIMScheduler.from_config(
... pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
... )
>>> pipeline.to("cuda")
```
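To see why `timestep_spacing="trailing"` matters, here is a small NumPy sketch of the idea (an illustration of the spacing logic, not the scheduler's exact code): with the default `"leading"` spacing the sampler never visits the final training timestep, while `"trailing"` walks backwards from it.

```python
import numpy as np

num_train_timesteps = 1000
num_inference_steps = 50
step_ratio = num_train_timesteps // num_inference_steps  # 20

# "trailing": step backwards from the end of the schedule,
# so sampling starts at the very last timestep (t=999)
trailing = np.arange(num_train_timesteps, 0, -step_ratio).astype(np.int64) - 1

# "leading": multiples of the step ratio, reversed,
# so the last training timestep is never reached
leading = (np.arange(0, num_inference_steps) * step_ratio)[::-1].astype(np.int64)

print(trailing[:3], trailing[-1])  # [999 979 959] 19
print(leading[:3], leading[-1])    # [980 960 940] 0
```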
Finally, in your call to the pipeline, set `guidance_rescale` to prevent overexposure:
```py
prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
image = pipeline(prompt, guidance_rescale=0.7).images[0]
```
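Conceptually, `guidance_rescale` shrinks the classifier-free guidance output back toward the standard deviation of the text-conditioned prediction, then blends the two. A minimal NumPy sketch of this rescaling (mirroring the paper's idea, not the pipeline's exact code; the helper name is illustrative):

```python
import numpy as np

def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.7):
    """Blend the CFG output with a std-rescaled copy to curb over-exposure."""
    # scale the CFG output so its std matches the text-conditioned prediction
    rescaled = noise_cfg * (noise_pred_text.std() / noise_cfg.std())
    # interpolate between the rescaled and the original CFG output
    return guidance_rescale * rescaled + (1 - guidance_rescale) * noise_cfg

rng = np.random.default_rng(0)
noise_pred_text = rng.normal(size=1000)
noise_pred_uncond = rng.normal(size=1000)

# standard classifier-free guidance inflates the prediction's std
guidance_scale = 7.5
noise_cfg = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

# with guidance_rescale=1.0 the output std exactly matches the text prediction's
out = rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=1.0)
```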
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/zero_snr.png"/>
</div>
@@ -392,7 +392,7 @@ class StableDiffusion2VPredictionPipelineIntegrationTests(unittest.TestCase):
         pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
         pipe.scheduler = DDIMScheduler.from_config(
-            pipe.scheduler.config, timestep_scaling="trailing", rescale_betas_zero_snr=True
+            pipe.scheduler.config, timestep_spacing="trailing", rescale_betas_zero_snr=True
         )
         pipe.to(torch_device)
         pipe.enable_attention_slicing()