<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Speed up inference

There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either [xFormers](xformers) or `torch.nn.functional.scaled_dot_product_attention` in PyTorch 2.0 for their memory-efficient attention. 

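For example, with xFormers installed, memory-efficient attention can be enabled on any pipeline with a single call. A minimal sketch (using the same checkpoint as the half-precision example below):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Swap the default attention processors for xFormers' memory-efficient implementation
pipe.enable_xformers_memory_efficient_attention()
```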
<Tip>

In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the [Reduce memory usage](memory) guide.

</Tip>

The results below are obtained from generating a single 512x512 image from the prompt `a photo of an astronaut riding a horse on mars` with 50 DDIM steps on an Nvidia Titan RTX, demonstrating the speed-up you can expect.

|                            | latency | speed-up |
| -------------------------- | ------- | -------- |
| original                   | 9.50s   | x1       |
| fp16                       | 3.61s   | x2.63    |
| channels last              | 3.30s   | x2.88    |
| traced UNet                | 3.21s   | x2.96    |
| memory efficient attention | 2.63s   | x3.61    |

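As a rough sketch of how the baseline row could be timed (assuming the same checkpoint as the examples in this guide; exact numbers depend on your hardware):

```python
import time

import torch
from diffusers import DDIMScheduler, DiffusionPipeline

# Baseline: full float32 pipeline with the DDIM scheduler
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

start = time.perf_counter()
image = pipe(prompt, num_inference_steps=50, height=512, width=512).images[0]
torch.cuda.synchronize()  # make sure all GPU work is done before stopping the clock
print(f"latency: {time.perf_counter() - start:.2f}s")
```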
## Use TensorFloat-32

On Ampere and later CUDA devices, matrix multiplications and convolutions can use the [TensorFloat-32 (TF32)](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) mode for faster, but slightly less accurate, computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications as well. It can significantly speed up computations, typically with a negligible loss in numerical accuracy.

```python
import torch

# Allow TF32 for matrix multiplications (PyTorch already enables it for convolutions)
torch.backends.cuda.matmul.allow_tf32 = True
```

You can learn more about TF32 in the [Mixed precision training](https://huggingface.co/docs/transformers/en/perf_train_gpu_one#tf32) guide.

## Half-precision weights

To save GPU memory and get more speed, load and run the model weights directly in half-precision (float16):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```

<Tip warning={true}>

Don't use [`torch.autocast`](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than pure float16 precision.

</Tip>