<div align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" height="200px" srcset="https://github.com/jmorganca/ollama/assets/3325447/56ea1849-1284-4645-8970-956de6e51c3c">
    <img alt="logo" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
  </picture>
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models locally.

### macOS

[Download](https://ollama.ai/download/Ollama-darwin.zip)

### Windows

Coming soon!

### Linux & WSL2

```
curl https://ollama.ai/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.

## Quickstart

To run and chat with [Llama 2](https://ollama.ai/library/llama2):

```
ollama run llama2
```

## Model library

Ollama supports a list of open-source models, available at [ollama.ai/library](https://ollama.ai/library 'ollama model library').

Here are some example open-source models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |

> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

## Customize your own model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction with the local filepath to the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.
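For example, assuming a `Modelfile` exists in the current directory (`mymodel` is a placeholder name):

```
ollama create mymodel -f ./Modelfile
```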

### Pull a model

```
ollama pull llama2
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Multimodal models

```
>>> What's in this image? /Users/jmorgan/Desktop/smile.png
The image features a yellow smiley face, which is likely the central focus of the picture.
```

### Pass in prompt as arguments

```
$ ollama run llama2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer

```
ollama list
```

### Start Ollama

Use `ollama serve` when you want to start Ollama without running the desktop application.

## Building

Install `cmake` and `go`:

```
brew install cmake go
```

Then generate dependencies:

```
go generate ./...
```

Then build the binary:

```
go build .
```

### Linux/Windows CUDA (NVIDIA)
*Your operating system distribution may already have packages for NVIDIA CUDA. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!*

Note: at present, Ollama is optimized for GPU usage on Linux and requires the CUDA libraries at a minimum to compile, even if you do not have an NVIDIA GPU.

Install `cmake` and `golang` as well as [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) development and runtime packages.

Then generate dependencies:

```
go generate ./...
```

Then build the binary:

```
go build .
```

### Linux ROCm (AMD)
*Your operating system distribution may already have packages for AMD ROCm and CLBlast. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!*

Install [CLBlast](https://github.com/CNugteren/CLBlast/blob/master/doc/installation.md) and [ROCm](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html) development packages first, as well as `cmake` and `golang`.

Adjust the paths below (correct for Arch) as appropriate for your distribution's install locations, then generate dependencies:

```
CLBlast_DIR=/usr/lib/cmake/CLBlast ROCM_PATH=/opt/rocm go generate ./...
```

Then build the binary:

```
go build .
```

ROCm requires elevated privileges to access the GPU at runtime. On most distros, you can add your user account to the `render` group or run as root.

### Running local builds
First, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```

## REST API

Ollama has a REST API for running and managing models.

### Generate a response

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```

### Chat with a model

```
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
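The same chat endpoint can also be called from code. Below is a minimal Python sketch using only the standard library; the `build_chat_request` helper is our own naming, and `"stream": false` is an explicit choice that asks the server for a single JSON reply instead of a stream of partial responses:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama address

def build_chat_request(model, prompt):
    """Build an HTTP request for Ollama's /api/chat endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete JSON object
    }).encode("utf-8")
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# With `ollama serve` running locally, send the request and print the reply:
# with urllib.request.urlopen(build_chat_request("mistral", "why is the sky blue?")) as resp:
#     print(json.loads(resp.read())["message"]["content"])
```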

See the [API documentation](./docs/api.md) for all endpoints.

## Community Integrations

### Web & Desktop
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Web UI](https://github.com/ollama-webui/ollama-webui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-agi/blob/main/docs/config-ollama.md)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
- [Amica](https://github.com/semperai/amica)
- [chatd](https://github.com/BruceMacD/chatd)

### Terminal

- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [ogpt.nvim](https://github.com/huynle/ogpt.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
- [cmdh](https://github.com/pgibler/cmdh)

### Database

- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md)

### Package managers

- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)

### Libraries

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama4j for Java](https://github.com/amithkoujalgi/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel)

### Mobile

- [Enchanted](https://github.com/AugustDev/enchanted)
- [Maid](https://github.com/danemadsen/Maid)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)
- [Rivet plugin](https://github.com/abrenneke/rivet-plugin-ollama)
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
- [Obsidian BMO Chatbot plugin](https://github.com/longy2k/obsidian-bmo-chatbot)