# FAQ

## How can I view the logs?

On macOS:

```
cat ~/.ollama/logs/server.log
```

On Linux:

```
journalctl -u ollama
```
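
To follow new log entries as they arrive on Linux, `journalctl` accepts the standard follow flag (this is ordinary systemd tooling, not specific to Ollama):

```
journalctl -u ollama -f
```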

If you're running `ollama serve` directly, the logs will be printed to the console.

## How can I expose Ollama on my network?

Ollama binds to 127.0.0.1 port 11434 by default. Change the bind address with the `OLLAMA_HOST` environment variable.

On macOS:

```bash
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

On Linux:

Create a `systemd` drop-in directory and set `Environment=OLLAMA_HOST`:

```bash
mkdir -p /etc/systemd/system/ollama.service.d
echo '[Service]' >>/etc/systemd/system/ollama.service.d/environment.conf
```

```bash
echo 'Environment="OLLAMA_HOST=0.0.0.0:11434"' >>/etc/systemd/system/ollama.service.d/environment.conf
```

Reload `systemd` and restart Ollama:

```bash
systemctl daemon-reload
systemctl restart ollama
```
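
A quick way to confirm the new bind address took effect is to query the API from another machine on the network. This is a sketch; the address below is an example and should be replaced with your server's IP:

```bash
curl http://192.168.1.50:11434/api/tags
```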

## How can I allow additional web origins to access Ollama?

Ollama allows cross-origin requests from `127.0.0.1` and `0.0.0.0` by default. Add additional origins with the `OLLAMA_ORIGINS` environment variable:

On macOS:

```bash
OLLAMA_ORIGINS=http://192.168.1.1:*,https://example.com ollama serve
```

On Linux:

```bash
echo 'Environment="OLLAMA_ORIGINS=http://192.168.1.1:*,https://example.com"' >>/etc/systemd/system/ollama.service.d/environment.conf
```

Reload `systemd` and restart Ollama:

```bash
systemctl daemon-reload
systemctl restart ollama
```
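
One way to sanity-check an added origin is to send a request with an `Origin` header and inspect the CORS headers in the response. A sketch, using an example origin from above:

```bash
curl -i -H "Origin: https://example.com" http://127.0.0.1:11434/api/tags
```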

## Where are models stored?

- macOS: Raw model data is stored under `~/.ollama/models`.
- Linux: Raw model data is stored under `/usr/share/ollama/.ollama/models`.

Below the models directory you will find a structure similar to the following:

```shell
.
├── blobs
└── manifests
   └── registry.ollama.ai
      ├── f0rodo
      ├── library
      ├── mattw
      └── saikatkumardey
```

There is a `manifests/registry.ollama.ai/namespace` path. In the example above, the user has downloaded models from the official `library` namespace as well as the `f0rodo`, `mattw`, and `saikatkumardey` namespaces. Within each of those directories you will find a directory for each downloaded model, and inside that, a file named for each tag. Each tag file is the manifest for the model.

The manifest lists all of the layers used by the model. Each layer has a `media type` and a digest, and the digest corresponds to a file in the `models/blobs` directory.
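
For example, on macOS you can inspect a manifest and list the blobs it references directly from a shell. This is a minimal sketch that assumes a model named `llama2` pulled from the official library; substitute a model and tag you have actually downloaded:

```shell
# Manifest for the "latest" tag of a model from the official library
cat ~/.ollama/models/manifests/registry.ollama.ai/library/llama2/latest

# Each layer digest in the manifest corresponds to a file in this directory
ls ~/.ollama/models/blobs
```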

### How can I change where Ollama stores models?

To modify where models are stored, you can use the `OLLAMA_MODELS` environment variable. Note that on Linux this means defining `OLLAMA_MODELS` in a drop-in `/etc/systemd/system/ollama.service.d` service file, reloading systemd, and restarting the ollama service.
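
On Linux this follows the same drop-in pattern shown above. A minimal sketch, assuming a hypothetical target directory `/data/ollama/models` (use any path the `ollama` service user can read and write):

```bash
mkdir -p /etc/systemd/system/ollama.service.d
echo '[Service]' >>/etc/systemd/system/ollama.service.d/environment.conf
echo 'Environment="OLLAMA_MODELS=/data/ollama/models"' >>/etc/systemd/system/ollama.service.d/environment.conf
systemctl daemon-reload
systemctl restart ollama
```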

## Does Ollama send my prompts and answers back to Ollama.ai to use in any way?

No. Anything you do with Ollama, such as generate a response from the model, stays with you. We don't collect any data about how you use the model. You are always in control of your own data.

## How can I use Ollama in Visual Studio Code?

There is already a large collection of plugins available for VSCode as well as other editors that leverage Ollama. You can see the list of [extensions & plugins](https://github.com/jmorganca/ollama#extensions--plugins) at the bottom of the main repository readme.

## How do I use Ollama behind a proxy?

Ollama is compatible with proxy servers if `HTTP_PROXY` or `HTTPS_PROXY` is configured. When using either variable, ensure it is set where `ollama serve` can access the value.

When using `HTTPS_PROXY`, ensure the proxy certificate is installed as a system certificate.

On macOS:

```bash
HTTPS_PROXY=http://proxy.example.com ollama serve
```

On Linux:

```bash
echo 'Environment="HTTPS_PROXY=https://proxy.example.com"' >>/etc/systemd/system/ollama.service.d/environment.conf
```

Reload `systemd` and restart Ollama:

```bash
systemctl daemon-reload
systemctl restart ollama
```

### How do I use Ollama behind a proxy in Docker?

The Ollama Docker container image can be configured to use a proxy by passing `-e HTTPS_PROXY=https://proxy.example.com` when starting the container.
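
For example, a minimal sketch of such a run (the proxy URL is a placeholder and the port mapping matches the default):

```shell
docker run -d -e HTTPS_PROXY=https://proxy.example.com -p 11434:11434 --name ollama ollama/ollama
```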

Alternatively, Docker daemon can be configured to use a proxy. Instructions are available for Docker Desktop on [macOS](https://docs.docker.com/desktop/settings/mac/#proxies), [Windows](https://docs.docker.com/desktop/settings/windows/#proxies), and [Linux](https://docs.docker.com/desktop/settings/linux/#proxies), and Docker [daemon with systemd](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy).

Ensure the certificate is installed as a system certificate when using HTTPS. This may require a new Docker image when using a self-signed certificate.

```dockerfile
FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates
```

Build and run this image:

```shell
docker build -t ollama-with-ca .
docker run -d -e HTTPS_PROXY=https://my.proxy.example.com -p 11434:11434 ollama-with-ca
```

## How do I use Ollama with GPU acceleration in Docker?

The Ollama Docker container can be configured with GPU acceleration on Linux or on Windows (with WSL2). This requires the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit). See [ollama/ollama](https://hub.docker.com/r/ollama/ollama) for more details.
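
For example, a typical GPU-enabled run looks like the following sketch, which assumes the toolkit is already installed and that Docker can see your NVIDIA GPUs; the volume and container names are arbitrary:

```shell
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```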

GPU acceleration is not available for Docker Desktop on macOS due to the lack of GPU passthrough and emulation.