---
title: Hardware support
---

## Nvidia

Ollama supports Nvidia GPUs with compute capability 5.0+ and driver version 531 or newer.

Check your GPU's compute capability to see if your card is supported:
[https://developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus)
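
On recent drivers you can also query the compute capability directly from the command line; the `compute_cap` query field is available in newer `nvidia-smi` releases, so older drivers may not support it:

```shell
# Print the name and compute capability of each installed GPU,
# e.g. "NVIDIA GeForce RTX 3080, 8.6"
nvidia-smi --query-gpu=name,compute_cap --format=csv
```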

| Compute Capability | Family              | Cards                                                                                                                          |
| ------------------ | ------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| 12.0               | GeForce RTX 50xx    | `RTX 5060` `RTX 5060 Ti` `RTX 5070` `RTX 5070 Ti` `RTX 5080` `RTX 5090`                                                        |
|                    | NVIDIA Professional | `RTX PRO 4000 Blackwell` `RTX PRO 4500 Blackwell` `RTX PRO 5000 Blackwell` `RTX PRO 6000 Blackwell`                            |
| 9.0                | NVIDIA              | `H200` `H100`                                                                                                                  |
| 8.9                | GeForce RTX 40xx    | `RTX 4090` `RTX 4080 SUPER` `RTX 4080` `RTX 4070 Ti SUPER` `RTX 4070 Ti` `RTX 4070 SUPER` `RTX 4070` `RTX 4060 Ti` `RTX 4060`  |
|                    | NVIDIA Professional | `L4` `L40` `RTX 6000`                                                                                                          |
| 8.6                | GeForce RTX 30xx    | `RTX 3090 Ti` `RTX 3090` `RTX 3080 Ti` `RTX 3080` `RTX 3070 Ti` `RTX 3070` `RTX 3060 Ti` `RTX 3060` `RTX 3050 Ti` `RTX 3050`   |
|                    | NVIDIA Professional | `A40` `RTX A6000` `RTX A5000` `RTX A4000` `RTX A3000` `RTX A2000` `A10` `A16` `A2`                                             |
| 8.0                | NVIDIA              | `A100` `A30`                                                                                                                   |
| 7.5                | GeForce GTX/RTX     | `GTX 1650 Ti` `TITAN RTX` `RTX 2080 Ti` `RTX 2080` `RTX 2070` `RTX 2060`                                                       |
|                    | NVIDIA Professional | `T4` `RTX 5000` `RTX 4000` `RTX 3000` `T2000` `T1200` `T1000` `T600` `T500`                                                    |
|                    | Quadro              | `RTX 8000` `RTX 6000` `RTX 5000` `RTX 4000`                                                                                    |
| 7.0                | NVIDIA              | `TITAN V` `V100` `Quadro GV100`                                                                                                |
| 6.1                | NVIDIA TITAN        | `TITAN Xp` `TITAN X`                                                                                                           |
|                    | GeForce GTX         | `GTX 1080 Ti` `GTX 1080` `GTX 1070 Ti` `GTX 1070` `GTX 1060` `GTX 1050 Ti` `GTX 1050`                                          |
|                    | Quadro              | `P6000` `P5200` `P4200` `P3200` `P5000` `P4000` `P3000` `P2200` `P2000` `P1000` `P620` `P600` `P500` `P520`                    |
|                    | Tesla               | `P40` `P4`                                                                                                                     |
| 6.0                | NVIDIA              | `Tesla P100` `Quadro GP100`                                                                                                    |
| 5.2                | GeForce GTX         | `GTX TITAN X` `GTX 980 Ti` `GTX 980` `GTX 970` `GTX 960` `GTX 950`                                                             |
|                    | Quadro              | `M6000 24GB` `M6000` `M5000` `M5500M` `M4000` `M2200` `M2000` `M620`                                                           |
|                    | Tesla               | `M60` `M40`                                                                                                                    |
| 5.0                | GeForce GTX         | `GTX 750 Ti` `GTX 750` `NVS 810`                                                                                               |
|                    | Quadro              | `K2200` `K1200` `K620` `M1200` `M520` `M5000M` `M4000M` `M3000M` `M2000M` `M1000M` `K620M` `M600M` `M500M`                     |

For building locally to support older GPUs, see the [developer guide](./development#linux-cuda-nvidia).

### GPU Selection

If you have multiple NVIDIA GPUs in your system and want to limit Ollama to a
subset, you can set `CUDA_VISIBLE_DEVICES` to a comma-separated list of GPUs.
Numeric IDs may be used; however, ordering can vary, so UUIDs are more reliable.
You can discover the UUIDs of your GPUs by running `nvidia-smi -L`. If you want to
ignore the GPUs and force CPU usage, use an invalid GPU ID (e.g., `-1`).
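
As a sketch, selection by UUID might look like this (the UUID shown is a placeholder; substitute a value printed by `nvidia-smi -L` on your system):

```shell
# List GPUs with their UUIDs
nvidia-smi -L
# Restrict Ollama to a single GPU by UUID (placeholder value shown)
export CUDA_VISIBLE_DEVICES=GPU-00000000-0000-0000-0000-000000000000
ollama serve
```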

### Linux Suspend Resume

On Linux, after a suspend/resume cycle, Ollama may sometimes fail to discover
your NVIDIA GPU and fall back to running on the CPU. You can work around this
driver bug by reloading the NVIDIA UVM driver with `sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm`.
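
If this happens on every resume, one option is a resume hook. The sketch below uses the standard systemd sleep-hook directory; the script itself is illustrative, not an official Ollama component:

```shell
#!/bin/sh
# Illustrative hook, e.g. saved (executable) as
# /usr/lib/systemd/system-sleep/nvidia-uvm-reload.sh
# systemd passes "post" as the first argument after resume.
if [ "$1" = "post" ]; then
    rmmod nvidia_uvm && modprobe nvidia_uvm
fi
```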

## AMD Radeon

Ollama supports the following AMD GPUs via the ROCm library:

> **NOTE:**
> Additional AMD GPU support is provided by the Vulkan library; see below.

### Linux Support

| Family         | Cards and accelerators                                                                                                                         |
| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| AMD Radeon RX  | `7900 XTX` `7900 XT` `7900 GRE` `7800 XT` `7700 XT` `7600 XT` `7600` `6950 XT` `6900 XTX` `6900 XT` `6800 XT` `6800` `Vega 64`                 |
| AMD Radeon PRO | `W7900` `W7800` `W7700` `W7600` `W7500` `W6900X` `W6800X Duo` `W6800X` `W6800` `V620` `V420` `V340` `V320` `Vega II Duo` `Vega II` `SSG`       |
| AMD Instinct   | `MI300X` `MI300A` `MI300` `MI250X` `MI250` `MI210` `MI200` `MI100` `MI60`                                                                      |

### Windows Support

With ROCm v6.1, the following GPUs are supported on Windows.

| Family         | Cards and accelerators                                                                                               |
| -------------- | -------------------------------------------------------------------------------------------------------------------- |
| AMD Radeon RX  | `7900 XTX` `7900 XT` `7900 GRE` `7800 XT` `7700 XT` `7600 XT` `7600` `6950 XT` `6900 XTX` `6900 XT` `6800 XT` `6800` |
| AMD Radeon PRO | `W7900` `W7800` `W7700` `W7600` `W7500` `W6900X` `W6800X Duo` `W6800X` `W6800` `V620`                                |

### Overrides on Linux

Ollama leverages the AMD ROCm library, which does not support all AMD GPUs. In
some cases you can force the system to use a similar LLVM target that is close.
For example, the Radeon RX 5400 is `gfx1034` (also known as 10.3.4); however,
ROCm does not currently support this target. The closest supported target is
`gfx1030`. You can use the environment variable `HSA_OVERRIDE_GFX_VERSION` with
`x.y.z` syntax. For example, to force the system to run on the RX 5400, you
would set `HSA_OVERRIDE_GFX_VERSION="10.3.0"` as an environment variable for the
server. If you have an unsupported AMD GPU, you can experiment using the list of
supported types below.

If you have multiple GPUs with different GFX versions, append the numeric device
number to the environment variable to set them individually, for example
`HSA_OVERRIDE_GFX_VERSION_0=10.3.0` and `HSA_OVERRIDE_GFX_VERSION_1=11.0.0`.
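
A sketch of applying the override when launching the server, using the RX 5400 example above:

```shell
# Force ROCm to treat the GPU as gfx1030 (10.3.0)
export HSA_OVERRIDE_GFX_VERSION="10.3.0"
ollama serve

# With two GPUs of different generations, override each device separately:
# export HSA_OVERRIDE_GFX_VERSION_0=10.3.0
# export HSA_OVERRIDE_GFX_VERSION_1=11.0.0
```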

At this time, the known supported GPU types on Linux are the following LLVM
targets. This table shows some example GPUs that map to these targets:

| **LLVM Target** | **Example GPU**       |
| --------------- | --------------------- |
| gfx908          | Radeon Instinct MI100 |
| gfx90a          | Radeon Instinct MI210 |
| gfx940          | Radeon Instinct MI300 |
| gfx941          |                       |
| gfx942          |                       |
| gfx1030         | Radeon PRO V620       |
| gfx1100         | Radeon PRO W7900      |
| gfx1101         | Radeon PRO W7700      |
| gfx1102         | Radeon RX 7600        |

AMD is working on enhancing ROCm v6 to broaden support for more GPU families in
a future release.

Reach out on [Discord](https://discord.gg/ollama) or file an
[issue](https://github.com/ollama/ollama/issues) for additional help.

### GPU Selection

If you have multiple AMD GPUs in your system and want to limit Ollama to a
subset, you can set `ROCR_VISIBLE_DEVICES` to a comma-separated list of GPUs.
You can see the list of devices with `rocminfo`. If you want to ignore the GPUs
and force CPU usage, use an invalid GPU ID (e.g., `-1`). When available, use the
`Uuid` to uniquely identify the device instead of the numeric value.
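
As a sketch (the UUID is a placeholder; use a `Uuid` value printed by `rocminfo` on your system):

```shell
# Inspect available devices and their Uuid fields
rocminfo
# Restrict Ollama to one device (placeholder UUID shown)
export ROCR_VISIBLE_DEVICES=GPU-0000000000000000
ollama serve
```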

### Container Permission

In some Linux distributions, SELinux can prevent containers from accessing the
AMD GPU devices. On the host system, you can run
`sudo setsebool container_use_devices=1` to allow containers to use devices.
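
For example, a container run might then look like this sketch, based on the ROCm variant of the official Ollama image and the standard AMD device nodes:

```shell
# Allow containers to access devices under SELinux (run once on the host)
sudo setsebool container_use_devices=1
# Pass the AMD GPU device nodes into the container
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```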

## Metal (Apple GPUs)

Ollama supports GPU acceleration on Apple devices via the Metal API.


## Vulkan GPU Support

> **NOTE:**
> Vulkan is currently an experimental feature. To enable it, you must set `OLLAMA_VULKAN=1` for the Ollama server as
> described in the [FAQ](faq#how-do-i-configure-ollama-server).
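
For example, when launching the server manually:

```shell
# Enable experimental Vulkan support for this server instance
OLLAMA_VULKAN=1 ollama serve
```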

Additional GPU support on Windows and Linux is provided via
[Vulkan](https://www.vulkan.org/). On Windows, most GPU vendors' drivers come
bundled with Vulkan support and require no additional setup. Most Linux
distributions require installing additional components, and you may have a
choice between Mesa and GPU vendor-specific Vulkan driver packages:

- Linux Intel GPU Instructions - https://dgpu-docs.intel.com/driver/client/overview.html
- Linux AMD GPU Instructions - https://amdgpu-install.readthedocs.io/en/latest/install-script.html#specifying-a-vulkan-implementation

For AMD GPUs on some Linux distributions, you may need to add the `ollama` user to the `render` group.

The Ollama scheduler uses the available VRAM reported by the GPU libraries to
make optimal scheduling decisions. Vulkan requires additional capabilities, or
running as root, to expose this available VRAM data. If neither root access nor
this capability is granted, Ollama will use approximate model sizes to make
best-effort scheduling decisions. To grant the capability to the Ollama binary:

```bash
sudo setcap cap_perfmon+ep /usr/local/bin/ollama
```

### GPU Selection

To select specific Vulkan GPU(s), set the environment variable
`GGML_VK_VISIBLE_DEVICES` to one or more numeric IDs on the Ollama server, as
described in the [FAQ](faq#how-do-i-configure-ollama-server). If you
encounter any problems with Vulkan-based GPUs, you can disable all Vulkan GPUs
by setting `GGML_VK_VISIBLE_DEVICES=-1`.
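
A sketch of both cases, assuming the same comma-separated format as the other selection variables:

```shell
# Use only the first and third Vulkan devices (IDs are zero-based)
export GGML_VK_VISIBLE_DEVICES=0,2
ollama serve

# Or disable all Vulkan GPUs entirely:
# export GGML_VK_VISIBLE_DEVICES=-1
```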