<div align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" height="200px" srcset="https://github.com/jmorganca/ollama/assets/3325447/56ea1849-1284-4645-8970-956de6e51c3c">
    <img alt="logo" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
  </picture>
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models locally.

### macOS

[Download](https://ollama.ai/download/Ollama-darwin.zip)

### Windows

Coming soon!

### Linux & WSL2

```
curl https://ollama.ai/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.
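
As a minimal CPU-only sketch (the volume and container names here are arbitrary choices, and GPU setup is covered on the image's Docker Hub page):

```
# start the server, persisting downloaded models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# run a model inside the container
docker exec -it ollama ollama run llama2
```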

## Quickstart

To run and chat with [Llama 2](https://ollama.ai/library/llama2):

```
ollama run llama2
```

## Model library

Ollama supports a list of open-source models available at [ollama.ai/library](https://ollama.ai/library 'ollama model library').

Here are some example open-source models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |

> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

## Customize your own model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile` with a `FROM` instruction that points to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.
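
For example, assuming a `Modelfile` exists in the current directory (`mymodel` here is just a placeholder name):

```
ollama create mymodel -f ./Modelfile
```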

### Pull a model

```
ollama pull llama2
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Pass in prompt as arguments

```
$ ollama run llama2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer

```
ollama list
```

### Start Ollama

`ollama serve` is used when you want to start Ollama without running the desktop application.
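
Once the server is running it listens on port 11434 by default, so you can verify it from another shell, for example by listing local models via the REST API:

```
curl http://localhost:11434/api/tags
```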

## Building

Install `cmake` and `go`:

```
brew install cmake go
```

Then generate dependencies and build:

```
go generate ./...
go build .
```

Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```

## REST API

Ollama has a REST API for running and managing models.
For example, to generate text from a model:

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```
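
Responses from `/api/generate` are streamed as a series of JSON objects by default; to receive a single response object instead, you can set the `stream` parameter to `false`:

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```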

See the [API documentation](./docs/api.md) for all endpoints.

## Community Integrations

### Web & Desktop

- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Web UI](https://github.com/ollama-webui/ollama-webui)

### Terminal

- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)

### Libraries

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)