<div align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" height="200px" srcset="https://github.com/jmorganca/ollama/assets/3325447/56ea1849-1284-4645-8970-956de6e51c3c">
    <img alt="logo" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
  </picture>
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models locally.

### macOS

[Download](https://ollama.ai/download/Ollama-darwin.zip)

### Windows

Coming soon!

### Linux & WSL2

```
curl https://ollama.ai/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image `ollama/ollama`](https://hub.docker.com/r/ollama/ollama)
is available on Docker Hub.
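
As a sketch of typical container usage (the volume name, port mapping, and model name here are illustrative; check the image's Docker Hub page for its documented options):

```
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama2
```

The volume keeps downloaded models across container restarts, and 11434 is Ollama's default API port.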

## Quickstart

To run and chat with [Llama 2](https://ollama.ai/library/llama2):

```
ollama run llama2
```

## Model library

Ollama supports a variety of open-source models, available at [ollama.ai/library](https://ollama.ai/library 'ollama model library').

Here are some example open-source models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |

> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

## Customize your own model

### Import from GGUF

Ollama supports importing GGUF models via a Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction specifying the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.
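
As a rough sketch: per the guide, the model is first converted and quantized, and the resulting file is then referenced from a `Modelfile` just like a GGUF model (the filename below is hypothetical):

```
FROM ./quantized.q4_0.bin
```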

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.

### Pull a model

```
ollama pull llama2
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Pass in prompt as arguments

```
$ ollama run llama2 "summarize this file:" "$(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer

```
ollama list
```

### Start Ollama

`ollama serve` is used when you want to start Ollama without running the desktop application.
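
By default the server listens on port 11434 on localhost. As an example, the bind address can be overridden with the `OLLAMA_HOST` environment variable (see the docs for details):

```
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```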

## Building

Install `cmake` and `go`:

```
brew install cmake go
```

Then generate dependencies and build:

```
go generate ./...
go build .
```

Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```

## REST API

Ollama has a REST API for running and managing models.
For example, to generate text from a model:

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```

See the [API documentation](./docs/api.md) for all endpoints.
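
Other endpoints follow the same pattern. For example, to list the models available locally (assuming the server is running on the default port):

```
curl http://localhost:11434/api/tags
```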

## Community Integrations

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Dumbar](https://github.com/JerrySievert/Dumbar)
- [Emacs client](https://github.com/zweifisch/ollama)
- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)