"vscode:/vscode.git/clone" did not exist on "069ff336a2ddd2432461e95276746420716c8432"
faq.md 5.08 KB
Newer Older
1
2
# FAQ

## How can I upgrade Ollama?

To upgrade Ollama, run the installation process again. On the Mac, click the Ollama icon in the menubar and choose the restart option if an update is available.

## How can I view the logs?

Review the [Troubleshooting](./troubleshooting.md) docs for more about using logs.

## How do I configure Ollama server?

Ollama server can be configured with environment variables.

### Setting environment variables on Mac

If Ollama is run as a macOS application, environment variables should be set using `launchctl`:

1. For each environment variable, call `launchctl setenv`.

    ```bash
    launchctl setenv OLLAMA_HOST "0.0.0.0"
    ```

2. Restart Ollama application.

### Setting environment variables on Linux

If Ollama is run as a systemd service, environment variables should be set using `systemctl`:

1. Edit the systemd service by calling `systemctl edit ollama.service`. This will open an editor.

2. For each environment variable, add an `Environment` line under the `[Service]` section:

    ```ini
    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"
    ```

3. Save and exit.

4. Reload `systemd` and restart Ollama:

    ```bash
    systemctl daemon-reload
    systemctl restart ollama
    ```

## How can I expose Ollama on my network?

Ollama binds to 127.0.0.1 on port 11434 by default. Change the bind address with the `OLLAMA_HOST` environment variable.

Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.
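
For example, after setting `OLLAMA_HOST` to `0.0.0.0` and restarting Ollama, you can check that the server is reachable from another machine on your network (the address below is a placeholder for your server's LAN IP):

```bash
# List the models on the server; a JSON response confirms the API is reachable.
curl http://192.168.1.100:11434/api/tags
```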

## How can I allow additional web origins to access Ollama?

Ollama allows cross-origin requests from `127.0.0.1` and `0.0.0.0` by default. Additional origins can be configured with `OLLAMA_ORIGINS`.

Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.
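
For example, to allow requests from a hypothetical web app served at `https://app.example.com`, on macOS you could run:

```bash
# app.example.com is a placeholder; substitute your own origin.
launchctl setenv OLLAMA_ORIGINS "https://app.example.com"
```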

## Where are models stored?

- macOS: `~/.ollama/models`
- Linux: `/usr/share/ollama/.ollama/models`
- Windows: `C:\Users\<username>\.ollama\models`

### How do I set them to a different location?

If a different directory needs to be used, set the environment variable `OLLAMA_MODELS` to the chosen directory.

Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.
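
For example, on Linux with the systemd service, this can be done with an `Environment` line as described above (the path below is illustrative; the directory must be writable by the user running Ollama):

```ini
[Service]
Environment="OLLAMA_MODELS=/data/ollama/models"
```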

## Does Ollama send my prompts and answers back to Ollama.ai to use in any way?

No, Ollama runs entirely locally, and conversation data will never leave your machine.

## How can I use Ollama in Visual Studio Code?

There is already a large collection of plugins for VSCode and other editors that leverage Ollama. See the list of [extensions & plugins](https://github.com/jmorganca/ollama#extensions--plugins) at the bottom of the main repository readme.

## How do I use Ollama behind a proxy?

Ollama is compatible with proxy servers if `HTTP_PROXY` or `HTTPS_PROXY` is configured. When using either variable, ensure it is set where `ollama serve` can access the value. When using `HTTPS_PROXY`, ensure the proxy certificate is installed as a system certificate. Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.
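
For example, the variable can be added to the systemd service as described above, or set inline for a manual run (the proxy URL below is a placeholder):

```bash
HTTPS_PROXY=https://proxy.example.com ollama serve
```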

### How do I use Ollama behind a proxy in Docker?

The Ollama Docker container image can be configured to use a proxy by passing `-e HTTPS_PROXY=https://proxy.example.com` when starting the container.
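
For example, a minimal sketch of starting the container this way (the volume and container names are illustrative):

```shell
docker run -d -e HTTPS_PROXY=https://proxy.example.com -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```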

Alternatively, the Docker daemon can be configured to use a proxy. Instructions are available for Docker Desktop on [macOS](https://docs.docker.com/desktop/settings/mac/#proxies), [Windows](https://docs.docker.com/desktop/settings/windows/#proxies), and [Linux](https://docs.docker.com/desktop/settings/linux/#proxies), and Docker [daemon with systemd](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy).

Ensure the certificate is installed as a system certificate when using HTTPS. This may require a new Docker image when using a self-signed certificate.

```dockerfile
FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates
```

Build and run this image:

```shell
docker build -t ollama-with-ca .
docker run -d -e HTTPS_PROXY=https://my.proxy.example.com -p 11434:11434 ollama-with-ca
```

## How do I use Ollama with GPU acceleration in Docker?

The Ollama Docker container can be configured with GPU acceleration in Linux or Windows (with WSL2). This requires the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit). See [ollama/ollama](https://hub.docker.com/r/ollama/ollama) for more details.
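
For example, once the toolkit is installed, the container can be started with all GPUs exposed (the volume and container names are illustrative):

```shell
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```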

GPU acceleration is not available for Docker Desktop on macOS due to the lack of GPU passthrough and emulation.

## Why is networking slow in WSL2 on Windows 10?

This can impact both installing Ollama and downloading models.

Open `Control Panel > Networking and Internet > View network status and tasks` and click on `Change adapter settings` on the left panel. Find the `vEthernet (WSL)` adapter, right-click it and select `Properties`. Click on `Configure` and open the `Advanced` tab. Search through each of the properties until you find `Large Send Offload Version 2 (IPv4)` and `Large Send Offload Version 2 (IPv6)`. *Disable* both of these properties.