<div align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" height="200px" srcset="https://github.com/jmorganca/ollama/assets/3325447/56ea1849-1284-4645-8970-956de6e51c3c">
    <img alt="logo" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
  </picture>
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models locally.

### macOS

[Download](https://ollama.ai/download/Ollama-darwin.zip)

### Windows

Coming soon!

### Linux & WSL2

```
curl https://ollama.ai/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.

## Quickstart

To run and chat with [Llama 2](https://ollama.ai/library/llama2):

```
ollama run llama2
```

## Model library

Ollama supports a list of open-source models available on [ollama.ai/library](https://ollama.ai/library 'ollama model library').

Here are some example open-source models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |

> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

## Customize your own model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile` with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.

### Pull a model

```
ollama pull llama2
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Pass in prompt as arguments

```
$ ollama run llama2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer

```
ollama list
```

### Start Ollama

`ollama serve` is used when you want to start Ollama without running the desktop application.

## Building

Install `cmake` and `go`:

```
brew install cmake go
```

Then generate dependencies and build:

```
go generate ./...
go build .
```

Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```

## REST API

Ollama has a REST API for running and managing models.

### Generate a response

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```
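The reply comes back as a stream of newline-delimited JSON objects. As a minimal sketch (assuming the default port 11434 and that each streamed object carries a `response` fragment plus a `done` flag; the helper names here are our own), the fragments can be stitched back together in Python using only the standard library:

```python
import json
import urllib.request

def collect_response(lines):
    """Join the 'response' fragments from an iterable of streamed JSON lines."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk signals the end of the stream
            break
    return "".join(parts)

def generate(prompt, model="llama2", host="http://localhost:11434"):
    """POST a prompt to the generate endpoint and return the full reply."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt}).encode("utf-8"),
    )
    with urllib.request.urlopen(req) as resp:
        return collect_response(resp)
```

With a server running, `generate("Why is the sky blue?")` would return the complete answer as a single string.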

### Chat with a model

```
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
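The chat endpoint also streams newline-delimited JSON. As a sketch under the assumption (not shown in this README) that each streamed object carries a `message` object whose `content` field holds a fragment of the assistant's reply, the stream can be reassembled like so:

```python
import json

def collect_chat(lines):
    """Join assistant 'message.content' fragments from streamed JSON lines."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        # hypothetical field layout: {"message": {"role": ..., "content": ...}, "done": ...}
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(parts)
```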

See the [API documentation](./docs/api.md) for all endpoints.

## Community Integrations

### Web & Desktop

- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Web UI](https://github.com/ollama-webui/ollama-webui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-agi/blob/main/docs/config-ollama.md)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
- [Amica](https://github.com/semperai/amica)
- [chatd](https://github.com/BruceMacD/chatd)

### Terminal

- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [ogpt.nvim](https://github.com/huynle/ogpt.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)
- [Oatmeal](https://github.com/dustinblackman/oatmeal)

### Package managers

- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)

### Libraries

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama4j for Java](https://github.com/amithkoujalgi/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel)

### Mobile

- [Maid](https://github.com/danemadsen/Maid) (Mobile Artificial Intelligence Distribution)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)
- [Rivet plugin](https://github.com/abrenneke/rivet-plugin-ollama)
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
- [Obsidian BMO Chatbot plugin](https://github.com/longy2k/obsidian-bmo-chatbot)