<div align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" height="200px" srcset="https://github.com/jmorganca/ollama/assets/3325447/56ea1849-1284-4645-8970-956de6e51c3c">
    <img alt="logo" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
  </picture>
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models locally.

### macOS

[Download](https://ollama.ai/download/Ollama-darwin.zip)

### Windows

Coming soon! For now, you can install Ollama on Windows via WSL2.

### Linux & WSL2

```
curl https://ollama.ai/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.

## Quickstart

To run and chat with [Llama 2](https://ollama.ai/library/llama2):

```
ollama run llama2
```

## Model library

A list of supported open-source models is available at [ollama.ai/library](https://ollama.ai/library 'ollama model library').

Here are some example open-source models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Phi-2              | 2.7B       | 1.7GB | `ollama run phi`               |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |

> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

## Customize your own model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama
Jeffrey Morgan's avatar
Jeffrey Morgan committed
78

79
   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.

```
ollama create mymodel -f ./Modelfile
```

### Pull a model

```
ollama pull llama2
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Multimodal models

To use a multimodal model such as `llava`, include an image path in the prompt:

```
>>> What's in this image? /Users/jmorgan/Desktop/smile.png
The image features a yellow smiley face, which is likely the central focus of the picture.
```

### Pass in prompt as arguments

```
$ ollama run llama2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer

```
ollama list
```

### Start Ollama

`ollama serve` is used when you want to start Ollama without running the desktop application.

## Building

Install `cmake` and `go`:

```
brew install cmake go
```

Then generate dependencies:
```
go generate ./...
```

Then build the binary:

```
go build .
```

More detailed instructions can be found in the [developer guide](https://github.com/jmorganca/ollama/blob/main/docs/development.md).

### Running local builds
Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```

## REST API

Ollama has a REST API for running and managing models.

### Generate a response

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```
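By default, `/api/generate` streams its reply as newline-delimited JSON objects, each carrying a fragment of the answer in its `response` field, with `"done": true` on the final object. As a rough sketch of how a client can reassemble that stream (the field names follow the API docs; the sample lines below are hand-written for illustration, not real server output):

```python
import json

def collect_stream(lines):
    """Join the `response` fragments from Ollama's newline-delimited
    JSON stream into one string, stopping at the `done` marker."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Hand-written sample in the documented stream format (not real output):
sample = [
    '{"model":"llama2","response":"The sky ","done":false}',
    '{"model":"llama2","response":"is blue.","done":true}',
]
print(collect_stream(sample))  # prints "The sky is blue."
```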

### Chat with a model

```
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
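The same endpoint can be called from code with nothing beyond the standard library. A minimal sketch, assuming a server started with `ollama serve` on the default port; setting `"stream": false` asks for one complete JSON reply instead of the default stream:

```python
import json
import urllib.request

def build_chat_request(model, messages, stream=False):
    """Build the JSON body for a POST to /api/chat."""
    return json.dumps({"model": model, "messages": messages, "stream": stream})

def chat(messages, model="mistral", host="http://localhost:11434"):
    """Send a chat request to a local Ollama server and return the reply text."""
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=build_chat_request(model, messages).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With "stream": false the reply is a single JSON object whose
        # "message" field mirrors the request message format.
        return json.loads(resp.read())["message"]["content"]

# Example (requires a running server):
# print(chat([{"role": "user", "content": "why is the sky blue?"}]))
```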

See the [API documentation](./docs/api.md) for all endpoints.

## Community Integrations

### Web & Desktop
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Web UI](https://github.com/ollama-webui/ollama-webui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-agi/blob/main/docs/config-ollama.md)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
- [Amica](https://github.com/semperai/amica)
- [chatd](https://github.com/BruceMacD/chatd)

### Terminal

- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [ogpt.nvim](https://github.com/huynle/ogpt.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
- [cmdh](https://github.com/pgibler/cmdh)

### Database

- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md)

### Package managers

- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)

### Libraries

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama4j for Java](https://github.com/amithkoujalgi/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel)
- [LangChainDart](https://github.com/davidmigloz/langchain_dart)

### Mobile

- [Enchanted](https://github.com/AugustDev/enchanted)
- [Maid](https://github.com/danemadsen/Maid)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)
- [Rivet plugin](https://github.com/abrenneke/rivet-plugin-ollama)
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
- [Obsidian BMO Chatbot plugin](https://github.com/longy2k/obsidian-bmo-chatbot)