<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# How to use Stable Diffusion on Apple Silicon (M1/M2)

🤗 Diffusers is compatible with Apple silicon for Stable Diffusion inference, using the PyTorch `mps` device. These are the steps you need to follow to use your M1 or M2 computer with Stable Diffusion.

## Requirements

- Mac computer with Apple silicon (M1/M2) hardware.
- macOS 12.6 or later (13.0 or later recommended).
- arm64 version of Python.
- PyTorch 1.13.0 RC (Release Candidate). You can install it with `pip`:

```
pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/test/cpu
```
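
Once PyTorch is installed, you can optionally verify that the `mps` backend is available in your environment. This quick sanity check is a suggestion on our part, not a required step:

```python
import torch

# Both should print True on an arm64 build of PyTorch running on Apple silicon
print(torch.backends.mps.is_built())      # PyTorch was compiled with MPS support
print(torch.backends.mps.is_available())  # the hardware and macOS version support MPS
```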

## Inference Pipeline

The snippet below demonstrates how to use the `mps` backend with the familiar `to()` interface to move the Stable Diffusion pipeline to your M1 or M2 device.

We recommend "priming" the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue we have detected: the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and it's fine to use just one inference step and discard the result.

```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")

# Recommended if your computer has < 64 GB of RAM
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"

# First-time "warmup" pass (see explanation above)
_ = pipe(prompt, num_inference_steps=1)

# Results match those from the CPU device after the warmup pass.
image = pipe(prompt).images[0]
```
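
The resulting `image` is a standard PIL image, so you can save it to disk as usual; the filename below is just an example:

```python
# Save the generated image (arbitrary example path)
image.save("astronaut_rides_horse.png")
```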

## Performance Recommendations

M1/M2 performance is very sensitive to memory pressure. The system will automatically swap if it needs to, but performance will degrade significantly when it does.

We recommend you use _attention slicing_ to reduce memory pressure during inference and prevent swapping, particularly if your computer has less than 64 GB of system RAM, or if you generate images at non-standard resolutions larger than 512 × 512 pixels. Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually has a performance impact of ~20% on computers without universal memory, but we have observed _better performance_ on most Apple Silicon computers, unless you have 64 GB of RAM or more.

```python
pipe.enable_attention_slicing()
```

## Known Issues

- As mentioned above, we are investigating a strange [first-time inference issue](https://github.com/huggingface/diffusers/issues/372).
- Generating multiple prompts in a batch [crashes or doesn't work reliably](https://github.com/huggingface/diffusers/issues/363). We believe this is related to the [`mps` backend in PyTorch](https://github.com/pytorch/pytorch/issues/84039). For now, we recommend iterating over prompts instead of batching, as shown in the sketch below.
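
As a minimal sketch of that workaround, you can generate one image at a time in a plain loop; the prompt list here is just an illustration:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps")
pipe.enable_attention_slicing()

# Example prompts; replace with your own
prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a watercolor painting of a fox in a forest",
]

# Generate one image per prompt instead of passing the whole list at once
images = [pipe(prompt).images[0] for prompt in prompts]
```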