<div align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" height="200px" srcset="https://github.com/jmorganca/ollama/assets/3325447/56ea1849-1284-4645-8970-956de6e51c3c">
    <img alt="logo" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
  </picture>
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models locally.

### macOS

[Download](https://ollama.ai/download/Ollama-darwin.zip)

### Windows

Coming soon! For now, you can install Ollama on Windows via WSL2.

### Linux & WSL2

```
curl https://ollama.ai/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.

### Libraries

- [ollama-python](https://github.com/ollama/ollama-python)
- [ollama-js](https://github.com/ollama/ollama-js)

## Quickstart

To run and chat with [Llama 2](https://ollama.ai/library/llama2):

```
ollama run llama2
```

## Model library

Ollama supports a list of open-source models available on [ollama.ai/library](https://ollama.ai/library 'ollama model library').

Here are some example open-source models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Dolphin Phi        | 2.7B       | 1.6GB | `ollama run dolphin-phi`       |
| Phi-2              | 2.7B       | 1.7GB | `ollama run phi`               |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |

> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

## Customize a model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.
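
The customization steps above can also be scripted, for example when generating many model variants. A minimal illustrative sketch (the `build_modelfile` helper is hypothetical, not part of Ollama) that renders a Modelfile like the Mario example, ready to be written to disk and passed to `ollama create -f`:

```python
def build_modelfile(base: str, temperature: float, system: str) -> str:
    """Render a minimal Modelfile: base model, temperature, and system message."""
    return (
        f"FROM {base}\n"
        f"PARAMETER temperature {temperature}\n"
        f'SYSTEM """\n{system}\n"""\n'
    )

modelfile = build_modelfile(
    base="llama2",
    temperature=1,
    system="You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.",
)
print(modelfile)
```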

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.

```
ollama create mymodel -f ./Modelfile
```

### Pull a model

```
ollama pull llama2
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Multimodal models

```
>>> What's in this image? /Users/jmorgan/Desktop/smile.png
The image features a yellow smiley face, which is likely the central focus of the picture.
```

### Pass a prompt as an argument

```
$ ollama run llama2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer

```
ollama list
```

### Start Ollama

`ollama serve` starts Ollama without running the desktop application.

## Building

Install `cmake` and `go`:

```
brew install cmake go
```

Then generate dependencies:
```
go generate ./...
```

Then build the binary:

```
go build .
```

More detailed instructions can be found in the [developer guide](https://github.com/jmorganca/ollama/blob/main/docs/development.md).

### Running local builds
Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```

## REST API

Ollama has a REST API for running and managing models.

### Generate a response

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```

### Chat with a model

```
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
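
Both endpoints stream their output by default as newline-delimited JSON objects, with the final object carrying `"done": true`. A minimal sketch of the client-side parsing, run here against canned chunks rather than a live server:

```python
import json

# Simulated chunks as they would arrive from /api/generate with streaming on;
# each line is one JSON object, and the last one has "done": true.
raw_stream = b"\n".join([
    b'{"model":"llama2","response":"The sky ","done":false}',
    b'{"model":"llama2","response":"is blue.","done":true}',
])

def collect(stream: bytes) -> str:
    """Join the partial "response" fields of a newline-delimited JSON stream."""
    text = []
    for line in stream.splitlines():
        if not line:
            continue
        chunk = json.loads(line)
        text.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(text)

print(collect(raw_stream))  # The sky is blue.
```

The chat endpoint streams the same way, except each chunk carries the partial text under `message.content` instead of `response`.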

See the [API documentation](./docs/api.md) for all endpoints.

## Community Integrations

### Web & Desktop
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Web UI](https://github.com/ollama-webui/ollama-webui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-agi/blob/main/docs/config-ollama.md)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
- [Amica](https://github.com/semperai/amica)
- [chatd](https://github.com/BruceMacD/chatd)
- [Ollama-SwiftUI](https://github.com/kghandour/Ollama-SwiftUI)

### Terminal

- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [ogpt.nvim](https://github.com/huynle/ogpt.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
- [cmdh](https://github.com/pgibler/cmdh)

### Database

- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md)

### Package managers

- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)

### Libraries

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama for Ruby](https://github.com/gbaptista/ollama-ai)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama4j for Java](https://github.com/amithkoujalgi/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel)
- [LangChainDart](https://github.com/davidmigloz/langchain_dart)
- [Semantic Kernel - Python](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai/ollama)
- [Haystack](https://github.com/deepset-ai/haystack-integrations/blob/main/integrations/ollama.md)

### Mobile

- [Enchanted](https://github.com/AugustDev/enchanted)
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama Discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)
- [Rivet plugin](https://github.com/abrenneke/rivet-plugin-ollama)
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
- [Obsidian BMO Chatbot plugin](https://github.com/longy2k/obsidian-bmo-chatbot)
- [Open Interpreter](https://docs.openinterpreter.com/language-model-setup/local-models/ollama)