<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# How to use the ONNX Runtime for inference

🤗 [Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with ONNX Runtime. 

## Installation

Install 🤗 Optimum with the following command for ONNX Runtime support:

```bash
pip install optimum["onnxruntime"]
```

## Stable Diffusion Inference

To load an ONNX model and run inference with ONNX Runtime, replace [`StableDiffusionPipeline`] with `ORTStableDiffusionPipeline`. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set `export=True`:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipe = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
pipe.save_pretrained("./onnx-stable-diffusion-v1-5")
```

If you want to export the pipeline in the ONNX format offline and later use it for inference,
you can use the [`optimum-cli export`](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) command: 

```bash
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/
```

Then perform inference:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "sd_v15_onnx"
pipe = ORTStableDiffusionPipeline.from_pretrained(model_id)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```

Notice that we didn't have to specify `export=True` above, since the pipeline is already in the ONNX format.

You can find more examples in the [Optimum documentation](https://huggingface.co/docs/optimum/).

## Known Issues

- Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate over your prompts instead of batching them.
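
As a temporary workaround, you can call the pipeline once per prompt and collect the results yourself. The sketch below wraps this in a hypothetical `generate_one_by_one` helper (not part of Optimum); it works with any pipeline object, including an `ORTStableDiffusionPipeline` loaded as shown above.

```python
# Hypothetical helper illustrating the workaround: call the pipeline once
# per prompt so only one image's worth of intermediate tensors is live at
# a time, instead of batching all prompts into a single call.
def generate_one_by_one(pipe, prompts):
    return [pipe(prompt).images[0] for prompt in prompts]

# Usage with a pipeline loaded as in the examples above:
# pipe = ORTStableDiffusionPipeline.from_pretrained("sd_v15_onnx")
# images = generate_one_by_one(pipe, ["first prompt", "second prompt"])
```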