Unverified commit cd7071e7 authored by Steven Liu, committed by GitHub

[docs] Add safetensors flag (#4245)

* add safetensors flag

* apply review
parent e31f38b5
@@ -29,6 +29,7 @@ from diffusers import StableDiffusionInpaintPipeline
pipeline = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
+    use_safetensors=True,
)
pipeline = pipeline.to("cuda")
```
...
@@ -39,7 +39,7 @@ The [`DiffusionPipeline`] class is the simplest and most generic way to load any
from diffusers import DiffusionPipeline
repo_id = "runwayml/stable-diffusion-v1-5"
-pipe = DiffusionPipeline.from_pretrained(repo_id)
+pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
```
You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the [`StableDiffusionPipeline`] class:
@@ -48,7 +48,7 @@ You can also load a checkpoint with it's specific pipeline class. The example ab
from diffusers import StableDiffusionPipeline
repo_id = "runwayml/stable-diffusion-v1-5"
-pipe = StableDiffusionPipeline.from_pretrained(repo_id)
+pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
```
A checkpoint (such as [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) or [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5)) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class:
@@ -75,7 +75,7 @@ Then pass the local path to [`~DiffusionPipeline.from_pretrained`]:
from diffusers import DiffusionPipeline
repo_id = "./stable-diffusion-v1-5"
-stable_diffusion = DiffusionPipeline.from_pretrained(repo_id)
+stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
```
The [`~DiffusionPipeline.from_pretrained`] method won't download any files from the Hub when it detects a local path, but this also means it won't download and cache the latest changes to a checkpoint.
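If you do need the latest files, one option is to refresh the local copy before loading it. A minimal sketch (not part of this diff), assuming a recent `huggingface_hub` with `local_dir` support and the example folder used above:

```py
from huggingface_hub import snapshot_download
from diffusers import DiffusionPipeline

# download the current snapshot of the repo into the local folder
snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    local_dir="./stable-diffusion-v1-5",
)

# then load from the refreshed local path as before
pipe = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True)
```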
@@ -94,7 +94,7 @@ To find out which schedulers are compatible for customization, you can use the `
from diffusers import DiffusionPipeline
repo_id = "runwayml/stable-diffusion-v1-5"
-stable_diffusion = DiffusionPipeline.from_pretrained(repo_id)
+stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
stable_diffusion.scheduler.compatibles
```
@@ -109,7 +109,7 @@ repo_id = "runwayml/stable-diffusion-v1-5"
scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
-stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler)
+stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True)
```
### Safety checker
@@ -120,7 +120,7 @@ Diffusion models like Stable Diffusion can generate harmful content, which is wh
from diffusers import DiffusionPipeline
repo_id = "runwayml/stable-diffusion-v1-5"
-stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None)
+stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True)
```
### Reuse components across pipelines
@@ -131,7 +131,7 @@ You can also reuse the same components in multiple pipelines to avoid loading th
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
model_id = "runwayml/stable-diffusion-v1-5"
-stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id)
+stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True)
components = stable_diffusion_txt2img.components
```
@@ -148,7 +148,7 @@ You can also pass the components individually to the pipeline if you want more f
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
model_id = "runwayml/stable-diffusion-v1-5"
-stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id)
+stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True)
stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(
    vae=stable_diffusion_txt2img.vae,
    text_encoder=stable_diffusion_txt2img.text_encoder,
@@ -194,10 +194,12 @@ import torch
# load fp16 variant
stable_diffusion = DiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16
+    "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
)
# load non_ema variant
-stable_diffusion = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema")
+stable_diffusion = DiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True
+)
```
To save a checkpoint stored in a different floating point type or as a non-EMA variant, use the [`DiffusionPipeline.save_pretrained`] method and specify the `variant` argument. You should try to save a variant to the same folder as the original checkpoint, so you can load both from the same folder:
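The save call itself is collapsed in this diff; a minimal sketch of the pattern, reusing the fp16 variant loaded above (the local folder name is just an example):

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
)

# save the fp16 variant alongside the original weights so both load from one folder
pipeline.save_pretrained("./stable-diffusion-v1-5", variant="fp16")
```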
@@ -215,10 +217,12 @@ If you don't save the variant to an existing folder, you must specify the `varia
```python
# 👎 this won't work
-stable_diffusion = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", torch_dtype=torch.float16)
+stable_diffusion = DiffusionPipeline.from_pretrained(
+    "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
+)
# 👍 this works
stable_diffusion = DiffusionPipeline.from_pretrained(
-    "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16
+    "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
)
```
@@ -233,7 +237,7 @@ load model variants, e.g.:
```python
from diffusers import DiffusionPipeline
-pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", revision="fp16")
+pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", revision="fp16", use_safetensors=True)
```
However, this behavior is now deprecated: as on GitHub, the `revision` argument should instead be used to load model checkpoints from a specific commit or branch in development.
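For reference, a minimal sketch of the non-deprecated usage (the branch name here is only illustrative):

```py
from diffusers import DiffusionPipeline

# pin the checkpoint to a branch or commit SHA rather than using `revision`
# to select a floating point variant
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="main", use_safetensors=True
)
```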
@@ -259,7 +263,7 @@ Models can be loaded from a subfolder with the `subfolder` argument. For example
from diffusers import UNet2DConditionModel
repo_id = "runwayml/stable-diffusion-v1-5"
-model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet")
+model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True)
```
Or directly from a repository's [directory](https://huggingface.co/google/ddpm-cifar10-32/tree/main):
@@ -268,7 +272,7 @@ Or directly from a repository's [directory](https://huggingface.co/google/ddpm-c
from diffusers import UNet2DModel
repo_id = "google/ddpm-cifar10-32"
-model = UNet2DModel.from_pretrained(repo_id)
+model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True)
```
You can also load and save model variants by specifying the `variant` argument in [`ModelMixin.from_pretrained`] and [`ModelMixin.save_pretrained`]:
@@ -276,7 +280,9 @@ You can also load and save model variants by specifying the `variant` argument i
```python
from diffusers import UNet2DConditionModel
-model = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non-ema")
+model = UNet2DConditionModel.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non-ema", use_safetensors=True
+)
model.save_pretrained("./local-unet", variant="non-ema")
```
@@ -310,7 +316,7 @@ euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")
dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler")
# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler`
-pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm)
+pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True)
```
## DiffusionPipeline explained
@@ -326,7 +332,7 @@ The pipelines underlying folder structure corresponds directly with their class
from diffusers import DiffusionPipeline
repo_id = "runwayml/stable-diffusion-v1-5"
-pipeline = DiffusionPipeline.from_pretrained(repo_id)
+pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
print(pipeline)
```
...
@@ -111,7 +111,9 @@ If you prefer to run inference with code, click on the **Use in Diffusers** butt
```py
from diffusers import DiffusionPipeline
-pipeline = DiffusionPipeline.from_pretrained("sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline")
+pipeline = DiffusionPipeline.from_pretrained(
+    "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True
+)
```
Then you can generate an image like:
@@ -119,7 +121,9 @@ Then you can generate an image like:
```py
from diffusers import DiffusionPipeline
-pipeline = DiffusionPipeline.from_pretrained("sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline")
+pipeline = DiffusionPipeline.from_pretrained(
+    "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True
+)
pipeline.to("cuda")
placeholder_token = "<my-funny-cat-token>"
...
@@ -40,7 +40,7 @@ import numpy as np
model_id = "google/ddpm-cifar10-32"
# load model and scheduler
-ddim = DDIMPipeline.from_pretrained(model_id)
+ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True)
# run pipeline for just two steps and return numpy tensor
image = ddim(num_inference_steps=2, output_type="np").images
@@ -65,7 +65,7 @@ import numpy as np
model_id = "google/ddpm-cifar10-32"
# load model and scheduler
-ddim = DDIMPipeline.from_pretrained(model_id)
+ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True)
# create a generator for reproducibility
generator = torch.Generator(device="cpu").manual_seed(0)
@@ -100,7 +100,7 @@ import numpy as np
model_id = "google/ddpm-cifar10-32"
# load model and scheduler
-ddim = DDIMPipeline.from_pretrained(model_id)
+ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True)
ddim.to("cuda")
# create a generator for reproducibility
@@ -125,7 +125,7 @@ import numpy as np
model_id = "google/ddpm-cifar10-32"
# load model and scheduler
-ddim = DDIMPipeline.from_pretrained(model_id)
+ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True)
ddim.to("cuda")
# create a generator for reproducibility; notice you don't place it on the GPU!
@@ -174,7 +174,7 @@ from diffusers import DDIMScheduler, StableDiffusionPipeline
import numpy as np
model_id = "runwayml/stable-diffusion-v1-5"
-pipe = StableDiffusionPipeline.from_pretrained(model_id).to("cuda")
+pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
g = torch.Generator(device="cuda")
...
@@ -27,7 +27,9 @@ Instantiate a pipeline with [`DiffusionPipeline.from_pretrained`] and place it o
```python
>>> from diffusers import DiffusionPipeline
->>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
+>>> pipe = DiffusionPipeline.from_pretrained(
+...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
+... )
>>> pipe = pipe.to("cuda")
```
...
@@ -39,7 +39,9 @@ import torch
login()
-pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
+pipeline = DiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
+)
```
Next, we move it to GPU:
...
@@ -49,7 +49,9 @@ repo_id_embeds = "sd-concepts-library/cat-toy"
Now you can load a pipeline, and pass the pre-learned concept to it:
```py
-pipeline = StableDiffusionPipeline.from_pretrained(pretrained_model_name_or_path, torch_dtype=torch.float16).to("cuda")
+pipeline = StableDiffusionPipeline.from_pretrained(
+    pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True
+).to("cuda")
pipeline.load_textual_inversion(repo_id_embeds)
```
...
@@ -32,7 +32,7 @@ In this guide, you'll use [`DiffusionPipeline`] for unconditional image generati
```python
>>> from diffusers import DiffusionPipeline
->>> generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128")
+>>> generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128", use_safetensors=True)
```
The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
...
@@ -40,7 +40,9 @@ You can use the model with the new `.safetensors` weights by specifying the refe
```py
from diffusers import DiffusionPipeline
-pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", revision="refs/pr/22")
+pipeline = DiffusionPipeline.from_pretrained(
+    "stabilityai/stable-diffusion-2-1", revision="refs/pr/22", use_safetensors=True
+)
```
## Why use safetensors?
@@ -55,7 +57,7 @@ There are several reasons for using safetensors:
```py
from diffusers import StableDiffusionPipeline
-pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
+pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", use_safetensors=True)
"Loaded in safetensors 0:00:02.033658"
"Loaded in PyTorch 0:00:02.663379"
```
...
@@ -25,7 +25,7 @@ A pipeline is a quick and easy way to run a model for inference, requiring no mo
```py
>>> from diffusers import DDPMPipeline
->>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda")
+>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda")
>>> image = ddpm(num_inference_steps=25).images[0]
>>> image
```
@@ -46,7 +46,7 @@ To recreate the pipeline with the model and scheduler separately, let's write ou
>>> from diffusers import DDPMScheduler, UNet2DModel
>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
->>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
+>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda")
```
2. Set the number of timesteps to run the denoising process for:
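The code for this step is collapsed in the diff; a minimal sketch of the idea (not part of this commit), with 50 steps chosen arbitrarily and the scheduler load repeated so the snippet stands alone:

```py
from diffusers import DDPMScheduler

scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")

# configure the scheduler to denoise over 50 steps instead of the full training schedule
scheduler.set_timesteps(50)
print(scheduler.timesteps)  # a descending tensor with 50 timesteps
```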
@@ -124,10 +124,14 @@ Now that you know what you need for the Stable Diffusion pipeline, load all thes
>>> from transformers import CLIPTextModel, CLIPTokenizer
>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler
->>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
+>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_safetensors=True)
>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer")
->>> text_encoder = CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder")
+>>> text_encoder = CLIPTextModel.from_pretrained(
+...     "CompVis/stable-diffusion-v1-4", subfolder="text_encoder", use_safetensors=True
+... )
->>> unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet")
+>>> unet = UNet2DConditionModel.from_pretrained(
+...     "CompVis/stable-diffusion-v1-4", subfolder="unet", use_safetensors=True
+... )
```
Instead of the default [`PNDMScheduler`], exchange it for the [`UniPCMultistepScheduler`] to see how easy it is to plug a different scheduler in:
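The swap itself is collapsed in this diff; a minimal sketch of the pattern, loading the UniPC scheduler against the same checkpoint's scheduler config:

```py
from diffusers import UniPCMultistepScheduler

# drop-in replacement: read the scheduler config from the same checkpoint
scheduler = UniPCMultistepScheduler.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="scheduler"
)
```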
...