---
title: FAQ
---

## How can I upgrade Ollama?

Ollama on macOS and Windows will automatically download updates. Click on the taskbar or menubar item and then click "Restart to update" to apply the update. Updates can also be installed by downloading the latest version [manually](https://ollama.com/download/).

On Linux, re-run the install script:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

## How can I view the logs?

Review the [Troubleshooting](./troubleshooting.md) docs for more about using logs.
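
On Linux installed with the install script, the server typically runs as a systemd service and its logs can be read with `journalctl`; the macOS app writes a log file under `~/.ollama/logs`. A quick sketch, assuming the standard installs:

```shell
# Linux (systemd service)
journalctl -e -u ollama

# macOS app
cat ~/.ollama/logs/server.log
```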

## Is my GPU compatible with Ollama?

Please refer to the [GPU docs](./gpu.md).

## How can I specify the context window size?

By default, Ollama uses a context window size of 2048 tokens.

This can be overridden with the `OLLAMA_CONTEXT_LENGTH` environment variable. For example, to set the default context window to 8K, use:

```shell
OLLAMA_CONTEXT_LENGTH=8192 ollama serve
```

To change this when using `ollama run`, use `/set parameter`:

```shell
/set parameter num_ctx 4096
```

When using the API, specify the `num_ctx` parameter:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "options": {
    "num_ctx": 4096
  }
}'
```

## How can I tell if my model was loaded onto the GPU?

Use the `ollama ps` command to see what models are currently loaded into memory.

```shell
ollama ps
```

<Info>
  **Output**:

  ```
  NAME         ID              SIZE    PROCESSOR    UNTIL
  llama3:70b   bcfb190ca3a7    42 GB   100% GPU     4 minutes from now
  ```
</Info>

The `Processor` column shows which memory the model was loaded into:

- `100% GPU` means the model was loaded entirely into the GPU
- `100% CPU` means the model was loaded entirely in system memory
- `48%/52% CPU/GPU` means the model was loaded partially onto both the GPU and into system memory

## How do I configure Ollama server?

Ollama server can be configured with environment variables.

### Setting environment variables on Mac

If Ollama is run as a macOS application, environment variables should be set using `launchctl`:

1. For each environment variable, call `launchctl setenv`.

   ```bash
   launchctl setenv OLLAMA_HOST "0.0.0.0:11434"
   ```

2. Restart the Ollama application.

### Setting environment variables on Linux

If Ollama is run as a systemd service, environment variables should be set using `systemctl`:

1. Edit the systemd service by calling `systemctl edit ollama.service`. This will open an editor.

2. For each environment variable, add an `Environment` line under the `[Service]` section:

   ```ini
   [Service]
   Environment="OLLAMA_HOST=0.0.0.0:11434"
   ```

3. Save and exit.

4. Reload `systemd` and restart Ollama:

   ```shell
   systemctl daemon-reload
   systemctl restart ollama
   ```

### Setting environment variables on Windows

On Windows, Ollama inherits your user and system environment variables.

1. First, quit Ollama by clicking on it in the taskbar.

2. Start the Settings (Windows 11) or Control Panel (Windows 10) application and search for _environment variables_.

3. Click on _Edit environment variables for your account_.

4. Edit or create a new variable for your user account for `OLLAMA_HOST`, `OLLAMA_MODELS`, etc.

5. Click OK/Apply to save.

6. Start the Ollama application from the Windows Start menu.
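
If you prefer the command line, a persistent user-level variable can also be set with `setx` (a sketch, not an official step; restart Ollama afterwards so it picks up the change):

```powershell
# Persist OLLAMA_HOST for the current user
setx OLLAMA_HOST "0.0.0.0:11434"
```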

## How do I use Ollama behind a proxy?

Ollama pulls models from the Internet and may require a proxy server to access the models. Use `HTTPS_PROXY` to redirect outbound requests through the proxy. Ensure the proxy certificate is installed as a system certificate. Refer to the section above for how to use environment variables on your platform.

<Note>
  Avoid setting `HTTP_PROXY`. Ollama does not use HTTP for model pulls, only
  HTTPS. Setting `HTTP_PROXY` may interrupt client connections to the server.
</Note>
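
For example, when launching the server directly from a terminal (a sketch; `proxy.example.com` is a placeholder for your proxy):

```shell
HTTPS_PROXY=https://proxy.example.com ollama serve
```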

### How do I use Ollama behind a proxy in Docker?

The Ollama Docker container image can be configured to use a proxy by passing `-e HTTPS_PROXY=https://proxy.example.com` when starting the container.

Alternatively, the Docker daemon can be configured to use a proxy. Instructions are available for Docker Desktop on [macOS](https://docs.docker.com/desktop/settings/mac/#proxies), [Windows](https://docs.docker.com/desktop/settings/windows/#proxies), and [Linux](https://docs.docker.com/desktop/settings/linux/#proxies), and Docker [daemon with systemd](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy).

Ensure the certificate is installed as a system certificate when using HTTPS. This may require a new Docker image when using a self-signed certificate.

```dockerfile
FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates
```

Build and run this image:

```shell
docker build -t ollama-with-ca .
docker run -d -e HTTPS_PROXY=https://my.proxy.example.com -p 11434:11434 ollama-with-ca
```

## Does Ollama send my prompts and answers back to ollama.com?

No. Ollama runs locally, and conversation data does not leave your machine.

## How can I expose Ollama on my network?

Ollama binds to 127.0.0.1 on port 11434 by default. Change the bind address with the `OLLAMA_HOST` environment variable.
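
For example, to listen on all interfaces when starting the server directly:

```shell
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```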

Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.

## How can I use Ollama with a proxy server?

Ollama runs an HTTP server and can be exposed using a proxy server such as Nginx. To do so, configure the proxy to forward requests and optionally set required headers (if not exposing Ollama on the network). For example, with Nginx:

```nginx
server {
    listen 80;
    server_name example.com;  # Replace with your domain or IP
    location / {
        proxy_pass http://localhost:11434;
        proxy_set_header Host localhost:11434;
    }
}
```

## How can I use Ollama with ngrok?

Ollama can be exposed using a range of tunneling tools. For example, with ngrok:

```shell
ngrok http 11434 --host-header="localhost:11434"
```

## How can I use Ollama with Cloudflare Tunnel?

To use Ollama with Cloudflare Tunnel, use the `--url` and `--http-host-header` flags:

```shell
cloudflared tunnel --url http://localhost:11434 --http-host-header="localhost:11434"
```

## How can I allow additional web origins to access Ollama?

Ollama allows cross-origin requests from `127.0.0.1` and `0.0.0.0` by default. Additional origins can be configured with `OLLAMA_ORIGINS`.
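
For example, to allow a specific additional web origin when starting the server directly (the origin shown is a placeholder):

```shell
OLLAMA_ORIGINS=https://app.example.com ollama serve
```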

For browser extensions, you'll need to explicitly allow the extension's origin pattern. Set `OLLAMA_ORIGINS` to include `chrome-extension://*`, `moz-extension://*`, and `safari-web-extension://*` if you wish to allow all browser extensions access, or specific extensions as needed:

```shell
# Allow all Chrome, Firefox, and Safari extensions
OLLAMA_ORIGINS=chrome-extension://*,moz-extension://*,safari-web-extension://* ollama serve
```

Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.

## Where are models stored?

- macOS: `~/.ollama/models`
- Linux: `/usr/share/ollama/.ollama/models`
- Windows: `C:\Users\%username%\.ollama\models`

### How do I set them to a different location?

If a different directory needs to be used, set the environment variable `OLLAMA_MODELS` to the chosen directory.

<Note>
  On Linux using the standard installer, the `ollama` user needs read and write access to the specified directory. To assign the directory to the `ollama` user run `sudo chown -R ollama:ollama <directory>`.
</Note>
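
A minimal Linux sketch, assuming a hypothetical `/data/ollama/models` directory and the standard installer:

```shell
sudo mkdir -p /data/ollama/models
sudo chown -R ollama:ollama /data/ollama/models
# then point the service at it, e.g. via a systemd override:
#   Environment="OLLAMA_MODELS=/data/ollama/models"
```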

Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.

## How can I use Ollama in Visual Studio Code?

There is already a large collection of plugins available for VS Code as well as other editors that leverage Ollama. See the list of [extensions & plugins](https://github.com/ollama/ollama#extensions--plugins) at the bottom of the main repository readme.

## How do I use Ollama with GPU acceleration in Docker?

The Ollama Docker container can be configured with GPU acceleration in Linux or Windows (with WSL2). This requires the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit). See [ollama/ollama](https://hub.docker.com/r/ollama/ollama) for more details.
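
Once the toolkit is installed, a typical invocation looks like the following (a sketch based on the image documentation; adjust the volume name and port as needed):

```shell
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```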

GPU acceleration is not available for Docker Desktop on macOS due to the lack of GPU passthrough and emulation.

## Why is networking slow in WSL2 on Windows 10?

This can impact both installing Ollama and downloading models.

Open `Control Panel > Networking and Internet > View network status and tasks` and click on `Change adapter settings` in the left panel. Find the `vEthernet (WSL)` adapter, right-click and select `Properties`.
Click on `Configure` and open the `Advanced` tab. Search through each of the properties until you find `Large Send Offload Version 2 (IPv4)` and `Large Send Offload Version 2 (IPv6)`. _Disable_ both of these properties.

## How can I preload a model into Ollama to get faster response times?

If you are using the API you can preload a model by sending the Ollama server an empty request. This works with both the `/api/generate` and `/api/chat` API endpoints.

To preload the mistral model using the generate endpoint, use:

```shell
curl http://localhost:11434/api/generate -d '{"model": "mistral"}'
```

To use the chat completions endpoint, use:

```shell
curl http://localhost:11434/api/chat -d '{"model": "mistral"}'
```

To preload a model using the CLI, use the command:

```shell
ollama run llama3.2 ""
```

## How do I keep a model loaded in memory or make it unload immediately?

By default, models are kept in memory for 5 minutes before being unloaded. This allows for quicker response times if you're making numerous requests to the LLM. If you want to immediately unload a model from memory, use the `ollama stop` command:

```shell
ollama stop llama3.2
```

If you're using the API, use the `keep_alive` parameter with the `/api/generate` and `/api/chat` endpoints to set the amount of time that a model stays in memory. The `keep_alive` parameter can be set to:

- a duration string (such as "10m" or "24h")
- a number in seconds (such as 3600)
- any negative number which will keep the model loaded in memory (e.g. -1 or "-1m")
- '0' which will unload the model immediately after generating a response

For example, to preload a model and leave it in memory use:

```shell
curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "keep_alive": -1}'
```

To unload the model and free up memory use:

```shell
curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "keep_alive": 0}'
```

Alternatively, you can change how long all models stay loaded in memory by setting the `OLLAMA_KEEP_ALIVE` environment variable when starting the Ollama server. The `OLLAMA_KEEP_ALIVE` variable accepts the same types of values as the `keep_alive` parameter described above. Refer to the section explaining [how to configure the Ollama server](#how-do-i-configure-ollama-server) to correctly set the environment variable.

The `keep_alive` parameter on the `/api/generate` and `/api/chat` endpoints will override the `OLLAMA_KEEP_ALIVE` setting.

## How do I manage the maximum number of requests the Ollama server can queue?

If too many requests are sent to the server, it will respond with a 503 error indicating the server is overloaded. You can adjust how many requests may be queued by setting `OLLAMA_MAX_QUEUE`.
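
For example, to raise the queue limit when starting the server directly (1024 is an arbitrary illustrative value):

```shell
OLLAMA_MAX_QUEUE=1024 ollama serve
```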

## How does Ollama handle concurrent requests?

Ollama supports two levels of concurrent processing. If your system has sufficient available memory (system memory when using CPU inference, or VRAM for GPU inference) then multiple models can be loaded at the same time. For a given model, if there is sufficient available memory when the model is loaded, it is configured to allow parallel request processing.

If there is insufficient available memory to load a new model request while one or more models are already loaded, all new requests will be queued until the new model can be loaded. As prior models become idle, one or more will be unloaded to make room for the new model. Queued requests will be processed in order. When using GPU inference new models must be able to completely fit in VRAM to allow concurrent model loads.

Parallel request processing for a given model multiplies the context size by the number of parallel requests. For example, a 2K context with 4 parallel requests results in an 8K context and the corresponding additional memory allocation.

The following server settings may be used to adjust how Ollama handles concurrent requests on most platforms:

- `OLLAMA_MAX_LOADED_MODELS` - The maximum number of models that can be loaded concurrently provided they fit in available memory. The default is 3 \* the number of GPUs or 3 for CPU inference.
- `OLLAMA_NUM_PARALLEL` - The maximum number of parallel requests each model will process at the same time. The default will auto-select either 4 or 1 based on available memory.
- `OLLAMA_MAX_QUEUE` - The maximum number of requests Ollama will queue when busy before rejecting additional requests. The default is 512.

Note: Windows with Radeon GPUs currently defaults to a maximum of 1 model due to limitations in ROCm v5.7 for available VRAM reporting. Once ROCm v6.2 is available, Windows Radeon will follow the defaults above. You may enable concurrent model loads on Radeon on Windows, but ensure you don't load more models than will fit into your GPUs' VRAM.
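
For example, to set the concurrency limits explicitly when starting the server directly (the values are illustrative):

```shell
OLLAMA_MAX_LOADED_MODELS=2 OLLAMA_NUM_PARALLEL=4 ollama serve
```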

## How does Ollama load models on multiple GPUs?

When loading a new model, Ollama evaluates the required VRAM for the model against what is currently available. If the model will entirely fit on any single GPU, Ollama will load the model on that GPU. This typically provides the best performance as it reduces the amount of data transferring across the PCI bus during inference. If the model does not fit entirely on one GPU, then it will be spread across all the available GPUs.

## How can I enable Flash Attention?

Flash Attention is a feature of most modern models that can significantly reduce memory usage as the context size grows. To enable Flash Attention, set the `OLLAMA_FLASH_ATTENTION` environment variable to `1` when starting the Ollama server.
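
For example, when starting the server directly:

```shell
OLLAMA_FLASH_ATTENTION=1 ollama serve
```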

## How can I set the quantization type for the K/V cache?

The K/V context cache can be quantized to significantly reduce memory usage when Flash Attention is enabled.

To use a quantized K/V cache with Ollama, set the following environment variable:

- `OLLAMA_KV_CACHE_TYPE` - The quantization type for the K/V cache. Default is `f16`.

<Note>
  Currently this is a global option - meaning all models will run with the
  specified quantization type.
</Note>

The currently available K/V cache quantization types are:

- `f16` - high precision and memory usage (default).
- `q8_0` - 8-bit quantization, uses approximately 1/2 the memory of `f16` with a very small loss in precision that usually has no noticeable impact on the model's quality (recommended if not using `f16`).
- `q4_0` - 4-bit quantization, uses approximately 1/4 the memory of `f16` with a small-medium loss in precision that may be more noticeable at higher context sizes.

How much the cache quantization impacts the model's response quality will depend on the model and the task. Models that have a high GQA count (e.g. Qwen2) may see a larger impact on precision from quantization than models with a low GQA count.

You may need to experiment with different quantization types to find the best balance between memory usage and quality.
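
For example, to try Flash Attention with an 8-bit K/V cache when starting the server directly (a sketch; pick the cache type that suits your models):

```shell
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
```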

## Where can I find my Ollama Public Key?

Your **Ollama Public Key** is the public part of the key pair that lets your local Ollama instance talk to [ollama.com](https://ollama.com).

You'll need it to:
* Push models to Ollama
* Pull private models from Ollama to your machine
* Run models hosted in [Ollama Cloud](https://ollama.com/cloud)

### How to Add the Key

* **Sign in via the Settings page** in the **Mac** and **Windows** apps

* **Sign in via the CLI**

```shell
ollama signin
```

* **Manually copy & paste** the key on the **Ollama Keys** page:
[https://ollama.com/settings/keys](https://ollama.com/settings/keys)

### Where the Ollama Public Key lives

| OS      | Path to `id_ed25519.pub`                     |
| :------ | :------------------------------------------- |
| macOS   | `~/.ollama/id_ed25519.pub`                   |
| Linux   | `/usr/share/ollama/.ollama/id_ed25519.pub`   |
| Windows | `C:\Users\<username>\.ollama\id_ed25519.pub` |

<Note>
  Replace &lt;username&gt; with your actual Windows user name.
</Note>
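
To print the key from a terminal, for example (paths from the table above):

```shell
# macOS
cat ~/.ollama/id_ed25519.pub

# Linux (standard installer)
cat /usr/share/ollama/.ollama/id_ed25519.pub
```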

## How can I stop Ollama from starting when I log in to my computer?

Ollama for Windows and macOS registers as a login item during installation. You can disable this if you prefer not to have Ollama start automatically. Ollama will respect this setting across upgrades, unless you uninstall the application.

**Windows**
- In `Task Manager`, go to the `Startup apps` tab, search for `ollama`, then click `Disable`.

**macOS**
- Open `Settings` and search for "Login Items", find the `Ollama` entry under "Allow in the Background", then click the slider to disable it.