<div align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" height="200px" srcset="https://github.com/jmorganca/ollama/assets/3325447/56ea1849-1284-4645-8970-956de6e51c3c">
    <img alt="logo" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
  </picture>
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models locally.

### macOS

[Download](https://ollama.ai/download/Ollama-darwin.zip)

### Windows

Coming soon!

### Linux & WSL2

```
curl https://ollama.ai/install.sh | sh
```

### Arch Linux

Ollama is available in the `extra` repository on Arch Linux. Run the following command as root:
```
# pacman -S ollama
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.

## Quickstart

To run and chat with [Llama 2](https://ollama.ai/library/llama2):

```
ollama run llama2
```

## Model library

Ollama supports a list of open-source models available on [ollama.ai/library](https://ollama.ai/library 'ollama model library').

Here are some example open-source models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |

> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

## Customize your own model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.
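A Modelfile is plain text, so it can also be generated from code. Here is a minimal Python sketch that renders only the `FROM`, `PARAMETER`, and `SYSTEM` instructions shown above (the `build_modelfile` helper is illustrative, not part of Ollama):

```python
def build_modelfile(base, system=None, **params):
    """Render a minimal Modelfile with FROM, PARAMETER, and SYSTEM instructions."""
    lines = [f"FROM {base}"]
    for name, value in params.items():
        lines.append(f"PARAMETER {name} {value}")
    if system is not None:
        # triple-quoted SYSTEM blocks may span multiple lines
        lines.append(f'SYSTEM """\n{system}\n"""')
    return "\n".join(lines) + "\n"

modelfile = build_modelfile(
    "llama2",
    system="You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.",
    temperature=1,
)
```

Write the result to a file and pass it to `ollama create -f` as in the steps above.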

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.

### Pull a model

```
ollama pull llama2
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Pass in prompt as arguments

```
$ ollama run llama2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer

```
ollama list
```

### Start Ollama

`ollama serve` is used when you want to start Ollama without running the desktop application.

## Building

Install `cmake` and `go`:

```
brew install cmake go
```

Then generate dependencies and build:

```
go generate ./...
go build .
```

Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```

## REST API

Ollama has a REST API for running and managing models.
For example, to generate text from a model:

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```
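By default the reply streams back as newline-delimited JSON objects, each carrying a `response` fragment, with a `done` flag on the final object. Here is a minimal Python sketch for joining such a stream on the client side (the `join_stream` helper is illustrative, not part of Ollama):

```python
import json

def join_stream(lines):
    """Join the "response" fragments of a newline-delimited
    JSON stream into the full reply text."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # the final object marks the end of the stream
            break
    return "".join(parts)
```

Feeding each line of the HTTP response body to this helper as it arrives lets a client display partial output while the model is still generating.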

See the [API documentation](./docs/api.md) for all endpoints.

## Community Integrations

### Web & Desktop

- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Web UI](https://github.com/ollama-webui/ollama-webui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-agi/blob/main/docs/config-ollama.md)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)

### Terminal

- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)

### Libraries

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama4j for Java](https://github.com/amithkoujalgi/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)

### Mobile

- [Maid](https://github.com/danemadsen/Maid) (Mobile Artificial Intelligence Distribution)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama Discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)