# FAQ

## How can I upgrade Ollama?

Ollama on macOS and Windows will automatically download updates. Click on the taskbar or menubar item and then click "Restart to update" to apply the update. Updates can also be installed by downloading the latest version [manually](https://ollama.com/download/).

On Linux, re-run the install script:

```
curl -fsSL https://ollama.com/install.sh | sh
```
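
To confirm the update was applied, you can check the installed version from a terminal:

```shell
ollama --version
```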

## How can I view the logs?

Review the [Troubleshooting](./troubleshooting.md) docs for more about using logs.
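
For a quick look on macOS, for example, the server log is typically found at `~/.ollama/logs/server.log` (other platforms are covered in the troubleshooting doc):

```shell
cat ~/.ollama/logs/server.log
```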

## How can I specify the context window size?

By default, Ollama uses a context window size of 2048 tokens.

To change this when using `ollama run`, use `/set parameter`:

```
/set parameter num_ctx 4096
```

When using the API, specify the `num_ctx` parameter:

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "options": {
    "num_ctx": 4096
  }
}'
```
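
If you want a larger context window to persist across runs, one option is to bake it into a custom model with a Modelfile; the model name `llama2-4k` below is just an illustrative choice:

```shell
# create a Modelfile that sets a larger context window
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER num_ctx 4096
EOF

# build and run the custom model (the name is arbitrary)
ollama create llama2-4k -f Modelfile
ollama run llama2-4k
```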

## How do I configure Ollama server?

Ollama server can be configured with environment variables.

### Setting environment variables on Mac

If Ollama is run as a macOS application, environment variables should be set using `launchctl`:

1. For each environment variable, call `launchctl setenv`.

    ```bash
    launchctl setenv OLLAMA_HOST "0.0.0.0"
    ```

2. Restart Ollama application.
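
If a variable does not seem to take effect, you can read back what `launchctl` has stored (an optional sanity check):

```shell
launchctl getenv OLLAMA_HOST
```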

### Setting environment variables on Linux

If Ollama is run as a systemd service, environment variables should be set using `systemctl`:

1. Edit the systemd service by calling `systemctl edit ollama.service`. This will open an editor.

2. For each environment variable, add an `Environment` line under the `[Service]` section:

    ```ini
    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"
    ```

3. Save and exit.

4. Reload `systemd` and restart Ollama:

   ```bash
   systemctl daemon-reload
   systemctl restart ollama
   ```
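
To confirm the override took effect after the restart, one option is to inspect the environment recorded for the unit (assuming the service is named `ollama.service` as above):

```shell
systemctl show --property=Environment ollama.service
```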

### Setting environment variables on Windows

On Windows, Ollama inherits your user and system environment variables.

1. First, quit Ollama by clicking on it in the taskbar.

2. Edit system environment variables from the Control Panel.

3. Edit or create new variable(s) for your user account for `OLLAMA_HOST`, `OLLAMA_MODELS`, etc.

4. Click OK/Apply to save.

5. Run `ollama` from a new terminal window.
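
Alternatively, if you prefer the command line, user-level variables can also be set with `setx`; note the value only takes effect in terminals and applications opened afterwards:

```
setx OLLAMA_HOST "0.0.0.0"
```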


## How can I expose Ollama on my network?

Ollama binds to 127.0.0.1 on port 11434 by default. Change the bind address with the `OLLAMA_HOST` environment variable.

Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.
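
As a quick sketch on Linux, when Ollama is not already running as a service, you can also set the variable inline for a single run:

```shell
OLLAMA_HOST=0.0.0.0 ollama serve
```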

## How can I use Ollama with a proxy server?

Ollama runs an HTTP server and can be exposed using a proxy server such as Nginx. To do so, configure the proxy to forward requests and optionally set required headers (if not exposing Ollama on the network). For example, with Nginx:

```
server {
    listen 80;
    server_name example.com;  # Replace with your domain or IP
    location / {
        proxy_pass http://localhost:11434;
        proxy_set_header Host localhost:11434;
    }
}
```
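
After reloading Nginx, one way to verify the proxy is working is to list the local models through it (replace `example.com` with your domain or IP):

```shell
curl http://example.com/api/tags
```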

## How can I use Ollama with ngrok?

Ollama can be accessed through a range of tunneling tools. For example, with ngrok:

```
ngrok http 11434 --host-header="localhost:11434"
```

## How can I use Ollama with Cloudflare Tunnel?

To use Ollama with Cloudflare Tunnel, use the `--url` and `--http-host-header` flags:

```
cloudflared tunnel --url http://localhost:11434 --http-host-header="localhost:11434"
```

## How can I allow additional web origins to access Ollama?

Ollama allows cross-origin requests from `127.0.0.1` and `0.0.0.0` by default. Additional origins can be configured with `OLLAMA_ORIGINS`.

Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.
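
For example, to allow requests from a hypothetical web app served at `https://app.example.com` when starting the server manually:

```shell
OLLAMA_ORIGINS="https://app.example.com" ollama serve
```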

## Where are models stored?

- macOS: `~/.ollama/models`
- Linux: `/usr/share/ollama/.ollama/models`
- Windows: `C:\Users\<username>\.ollama\models`

### How do I set them to a different location?

If a different directory needs to be used, set the environment variable `OLLAMA_MODELS` to the chosen directory.

Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.
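
For example, to store models under a hypothetical `/data/ollama/models` directory when launching the server manually (make sure the directory is writable by the user running Ollama):

```shell
OLLAMA_MODELS=/data/ollama/models ollama serve
```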

## Does Ollama send my prompts and answers back to ollama.com?

No. Ollama runs locally, and conversation data does not leave your machine.

## How can I use Ollama in Visual Studio Code?

There is already a large collection of plugins available for VSCode as well as other editors that leverage Ollama. See the list of [extensions & plugins](https://github.com/jmorganca/ollama#extensions--plugins) at the bottom of the main repository readme.

## How do I use Ollama behind a proxy?

Ollama is compatible with proxy servers if `HTTP_PROXY` or `HTTPS_PROXY` is configured. When using either variable, ensure it is set where `ollama serve` can access the value. When using `HTTPS_PROXY`, ensure the proxy certificate is installed as a system certificate. Refer to the section above for how to use environment variables on your platform.
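
As a sketch, when starting the server manually you could set the variable in the same shell, reusing the example proxy address from the Docker section below:

```shell
HTTPS_PROXY=https://proxy.example.com ollama serve
```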

### How do I use Ollama behind a proxy in Docker?

The Ollama Docker container image can be configured to use a proxy by passing `-e HTTPS_PROXY=https://proxy.example.com` when starting the container.

Alternatively, the Docker daemon can be configured to use a proxy. Instructions are available for Docker Desktop on [macOS](https://docs.docker.com/desktop/settings/mac/#proxies), [Windows](https://docs.docker.com/desktop/settings/windows/#proxies), and [Linux](https://docs.docker.com/desktop/settings/linux/#proxies), and Docker [daemon with systemd](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy).

Ensure the certificate is installed as a system certificate when using HTTPS. This may require a new Docker image when using a self-signed certificate.

```dockerfile
FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates
```

Build and run this image:

```shell
docker build -t ollama-with-ca .
docker run -d -e HTTPS_PROXY=https://my.proxy.example.com -p 11434:11434 ollama-with-ca
```

## How do I use Ollama with GPU acceleration in Docker?

The Ollama Docker container can be configured with GPU acceleration in Linux or Windows (with WSL2). This requires the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit). See [ollama/ollama](https://hub.docker.com/r/ollama/ollama) for more details.
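
A typical GPU-enabled invocation, per the Docker Hub page, looks roughly like this (volume and port mappings may vary with your setup):

```shell
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```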

GPU acceleration is not available for Docker Desktop in macOS due to the lack of GPU passthrough and emulation.

## Why is networking slow in WSL2 on Windows 10?

This can impact both installing Ollama and downloading models.

Open `Control Panel > Networking and Internet > View network status and tasks` and click on `Change adapter settings` in the left panel. Find the `vEthernet (WSL)` adapter, right click and select `Properties`.
Click on `Configure` and open the `Advanced` tab. Search through each of the properties until you find `Large Send Offload Version 2 (IPv4)` and `Large Send Offload Version 2 (IPv6)`. *Disable* both of these
properties.

## How can I pre-load a model to get faster response times?

If you are using the API, you can preload a model by sending the Ollama server an empty request. This works with both the `/api/generate` and `/api/chat` API endpoints.

To preload the mistral model using the generate endpoint, use:
```shell
curl http://localhost:11434/api/generate -d '{"model": "mistral"}'
```

To use the chat completions endpoint, use:
```shell
curl http://localhost:11434/api/chat -d '{"model": "mistral"}'
```

## How do I keep a model loaded in memory or make it unload immediately?

By default, models are kept in memory for 5 minutes before being unloaded. This allows for quicker response times if you are making numerous requests to the LLM. You may, however, want to free up the memory before the 5 minutes have elapsed, or keep the model loaded indefinitely. Use the `keep_alive` parameter with either the `/api/generate` or `/api/chat` API endpoint to control how long the model stays in memory.

The `keep_alive` parameter can be set to:
* a duration string (such as "10m" or "24h")
* a number in seconds (such as 3600)
* any negative number, which will keep the model loaded in memory (e.g. -1 or "-1m")
* "0", which will unload the model immediately after generating a response

For example, to preload a model and leave it in memory use:
```shell
curl http://localhost:11434/api/generate -d '{"model": "llama2", "keep_alive": -1}'
```

To unload the model and free up memory use:
```shell
curl http://localhost:11434/api/generate -d '{"model": "llama2", "keep_alive": 0}'
```

## Controlling which GPUs to use

By default, on Linux and Windows, Ollama will attempt to use NVIDIA or Radeon GPUs and will
use all the GPUs it can find. You can limit which GPUs are used by setting the environment
variable `CUDA_VISIBLE_DEVICES` for NVIDIA cards, or `HIP_VISIBLE_DEVICES` for Radeon GPUs,
to a comma-delimited list of GPU IDs. You can see the list of devices with GPU tools such as
`nvidia-smi` or `rocminfo`. You can set the variable to an invalid GPU ID (e.g., "-1") to
bypass the GPUs and fall back to the CPU.
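
For example, to restrict Ollama to the first two NVIDIA GPUs when starting the server manually (or add the equivalent `Environment` line to the systemd unit as described above):

```shell
CUDA_VISIBLE_DEVICES=0,1 ollama serve
```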