<div align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" height="200px" srcset="https://github.com/jmorganca/ollama/assets/3325447/56ea1849-1284-4645-8970-956de6e51c3c">
    <img alt="logo" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
  </picture>
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models locally.

### macOS

[Download](https://ollama.ai/download/Ollama-darwin.zip)

### Windows

Coming soon!

### Linux & WSL2

```
curl https://ollama.ai/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Docker

```
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama2
```

For GPU support, use `--gpus=all`. See the Docker [image](https://hub.docker.com/r/ollama/ollama) for more information.
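
As an illustrative sketch (assuming the host has the NVIDIA Container Toolkit installed), the same command with GPU access enabled:

```
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```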

## Quickstart

To run and chat with [Llama 2](https://ollama.ai/library/llama2):

```
ollama run llama2
```

## Model library

Ollama supports a list of open-source models available on [ollama.ai/library](https://ollama.ai/library 'ollama model library').

Here are some example open-source models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |

> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

## Customize your own model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model, first pull it:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.
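
For example, using the `Modelfile` from the customization walkthrough above:

```
ollama create mario -f ./Modelfile
```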

### Pull a model

```
ollama pull llama2
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Pass the prompt as an argument

```
$ ollama run llama2 "summarize this file:" "$(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer

```
ollama list
```

### Start Ollama

`ollama serve` is used when you want to start Ollama without running the desktop application.
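
```
ollama serve
```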

## Building

Install `cmake` and `go`:

```
brew install cmake
brew install go
```

Then generate dependencies and build:

```
go generate ./...
go build .
```

Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```

## REST API

> See the [API documentation](docs/api.md) for all endpoints.

Ollama has an API for running and managing models. For example, to generate text from a model:

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```
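
As another example, local models can be listed over the same API (see the API documentation linked above for the full set of endpoints):

```
curl http://localhost:11434/api/tags
```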

## Community Integrations

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama Discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Dumbar](https://github.com/JerrySievert/Dumbar)
- [Emacs client](https://github.com/zweifisch/ollama)