<div align="center">
  <img alt="ollama" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
</div>
# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models locally.

### macOS

[Download](https://ollama.com/download/Ollama-darwin.zip)

### Windows preview

[Download](https://ollama.com/download/OllamaSetup.exe)

### Linux

```
curl -fsSL https://ollama.com/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.
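As a minimal sketch (CPU-only; see the Docker Hub page for GPU setup), you can start the server in a container and run a model through it:

```shell
# Start the server, persisting downloaded models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Run a model using the CLI inside the container
docker exec -it ollama ollama run llama2
```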

### Libraries

- [ollama-python](https://github.com/ollama/ollama-python)
- [ollama-js](https://github.com/ollama/ollama-js)

## Quickstart

To run and chat with [Llama 2](https://ollama.com/library/llama2):

```
ollama run llama2
```

## Model library

Ollama supports a list of models available on [ollama.com/library](https://ollama.com/library 'ollama model library').

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Dolphin Phi        | 2.7B       | 1.6GB | `ollama run dolphin-phi`       |
| Phi-2              | 2.7B       | 1.7GB | `ollama run phi`               |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |

> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

## Customize a model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile` with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.
## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.
```
ollama create mymodel -f ./Modelfile
```

### Pull a model

```
ollama pull llama2
```
> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Multimodal models

```
>>> What's in this image? /Users/jmorgan/Desktop/smile.png
The image features a yellow smiley face, which is likely the central focus of the picture.
```

### Pass in prompt as arguments

```
$ ollama run llama2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer
```
ollama list
```
### Start Ollama
`ollama serve` is used when you want to start Ollama without running the desktop application.
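For example (assuming the default bind address of `127.0.0.1:11434`), you can start the server in one shell and check it from another:

```shell
# Start the server (runs in the foreground)
ollama serve

# In a separate shell, confirm the server is responding
curl http://localhost:11434/api/tags
```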
## Building

Install `cmake` and `go`:

```
brew install cmake go
```

Then generate dependencies:

```
go generate ./...
```

Then build the binary:

```
go build .
```

More detailed instructions can be found in the [developer guide](https://github.com/jmorganca/ollama/blob/main/docs/development.md).

### Running local builds
Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```
## REST API

Ollama has a REST API for running and managing models.

### Generate a response

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```
### Chat with a model

```
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```

See the [API documentation](./docs/api.md) for all endpoints.
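Both endpoints stream newline-delimited JSON objects by default; as a sketch, setting `"stream": false` in the request body returns a single response object instead:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```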

## Community Integrations

### Web & Desktop
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Open WebUI](https://github.com/open-webui/open-webui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-agi/blob/main/docs/config-ollama.md)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
- [Amica](https://github.com/semperai/amica)
- [chatd](https://github.com/BruceMacD/chatd)
- [Ollama-SwiftUI](https://github.com/kghandour/Ollama-SwiftUI)
- [MindMac](https://mindmac.app)
- [NextJS Web Interface for Ollama](https://github.com/jakobhoeg/nextjs-ollama-llm-ui)

### Terminal
- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [ollama-chat.nvim](https://github.com/gerazov/ollama-chat.nvim)
- [ogpt.nvim](https://github.com/huynle/ogpt.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
- [cmdh](https://github.com/pgibler/cmdh)
- [tenere](https://github.com/pythops/tenere)
- [llm-ollama](https://github.com/taketwo/llm-ollama) for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/).
- [ShellOracle](https://github.com/djcopley/ShellOracle)

### Database

- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md)

### Package managers

- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)
- [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)

### Libraries
- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [LangChain4j](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-ollama)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama for Ruby](https://github.com/gbaptista/ollama-ai)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama4j for Java](https://github.com/amithkoujalgi/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel)
- [LangChainDart](https://github.com/davidmigloz/langchain_dart)
- [Semantic Kernel - Python](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai/ollama)
- [Haystack](https://github.com/deepset-ai/haystack-integrations/blob/main/integrations/ollama.md)
- [Elixir LangChain](https://github.com/brainlid/langchain)
- [Ollama for R - rollama](https://github.com/JBGruber/rollama)
- [Ollama-ex for Elixir](https://github.com/lebrunel/ollama-ex)

### Mobile

- [Enchanted](https://github.com/AugustDev/enchanted)
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)
- [Rivet plugin](https://github.com/abrenneke/rivet-plugin-ollama)
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
- [Obsidian BMO Chatbot plugin](https://github.com/longy2k/obsidian-bmo-chatbot)
- [Open Interpreter](https://docs.openinterpreter.com/language-model-setup/local-models/ollama)
- [twinny](https://github.com/rjmacarthy/twinny) (Copilot and Copilot chat alternative using Ollama)
- [Wingman-AI](https://github.com/RussellCanfield/wingman-ai) (Copilot code and chat alternative using Ollama and HuggingFace)