<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# ONNX Runtime

🤗 [Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with ONNX Runtime. You'll need to install 🤗 Optimum with the following command for ONNX Runtime support:

```bash
pip install optimum["onnxruntime"]
```

This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime.

## Stable Diffusion

To load and run inference, use the [`~optimum.onnxruntime.ORTStableDiffusionPipeline`]. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set `export=True`:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
pipeline.save_pretrained("./onnx-stable-diffusion-v1-5")
```

<Tip warning={true}>

Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching.

</Tip>
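
For example, a minimal sketch of iterating over prompts one at a time instead of batching them (the second prompt and the output filenames below are just placeholders):

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipeline = ORTStableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", export=True)

prompts = [
    "sailing ship in storm by Leonardo da Vinci",
    "sailing ship in calm waters by Leonardo da Vinci",
]

# call the pipeline once per prompt instead of passing the whole list in one batch
for i, prompt in enumerate(prompts):
    image = pipeline(prompt).images[0]
    image.save(f"ship_{i}.png")
```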

To export the pipeline in the ONNX format offline and use it later for inference,
use the [`optimum-cli export`](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) command:

```bash
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/
```

Then to perform inference (you don't have to specify `export=True` again):

```python 
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "sd_v15_onnx"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/stable_diffusion_v1_5_ort_sail_boat.png">
</div>

You can find more examples in the 🤗 Optimum [documentation](https://huggingface.co/docs/optimum/), and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting.
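
For instance, image-to-image generation follows the same pattern through a dedicated ONNX Runtime class. This is a minimal sketch, assuming the `ORTStableDiffusionImg2ImgPipeline` class from 🤗 Optimum and an example input image; swap in any image of your own:

```python
from diffusers.utils import load_image
from optimum.onnxruntime import ORTStableDiffusionImg2ImgPipeline

pipeline = ORTStableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", export=True
)

# any RGB image works as the starting point for image-to-image
init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((768, 512))

prompt = "A fantasy landscape, trending on artstation"
image = pipeline(prompt=prompt, image=init_image, strength=0.75).images[0]
```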

## Stable Diffusion XL

To load and run inference with SDXL, use the [`~optimum.onnxruntime.ORTStableDiffusionXLPipeline`]:

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

To export the pipeline in the ONNX format and use it later for inference, use the [`optimum-cli export`](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) command:

```bash
optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/
```
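
Then, mirroring the Stable Diffusion example above, you can load the exported pipeline from the local directory for inference (a minimal sketch reusing the `sd_xl_onnx/` directory created by the command above):

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# load the locally exported ONNX pipeline; export=True is not needed here
pipeline = ORTStableDiffusionXLPipeline.from_pretrained("sd_xl_onnx")
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```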

SDXL in the ONNX format is supported for text-to-image and image-to-image.
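
For example, a minimal sketch of SDXL image-to-image, assuming the `ORTStableDiffusionXLImg2ImgPipeline` class from 🤗 Optimum and using an image generated by the base pipeline as the starting point:

```python
from optimum.onnxruntime import (
    ORTStableDiffusionXLImg2ImgPipeline,
    ORTStableDiffusionXLPipeline,
)

prompt = "sailing ship in storm by Leonardo da Vinci"

# generate a starting image with the base pipeline...
base = ORTStableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
init_image = base(prompt).images[0]

# ...then refine it by passing it to the image-to-image pipeline together with the prompt
refiner = ORTStableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", export=True
)
image = refiner(prompt=prompt, image=init_image).images[0]
```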