<div align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" height="200px" srcset="https://github.com/jmorganca/ollama/assets/3325447/56ea1849-1284-4645-8970-956de6e51c3c">
    <img alt="logo" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
  </picture>
</div>
# Ollama
[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)
Get up and running with large language models locally.
### macOS

[Download](https://ollama.ai/download/Ollama-darwin.zip)
### Linux & WSL2

```
curl https://ollama.ai/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Windows

Coming soon
## Quickstart

To run and chat with [Llama 2](https://ollama.ai/library/llama2):
```
ollama run llama2
```

## Model library

Ollama supports a list of open-source models available on [ollama.ai/library](https://ollama.ai/library "ollama model library").
Here are some example open-source models that can be downloaded:
| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |

> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

## Customize your own model
### Import from GGUF or GGML
Ollama supports importing GGUF and GGML file formats in the Modelfile. This means that if you have a model that is not in the Ollama library, you can create it, iterate on it, and, when you are ready, upload it to the Ollama library to share with others.
1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.
   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```
2. Create the model in Ollama:
   ```
   ollama create name -f path_to_modelfile
   ```
3. Run the model:
   ```
   ollama run name
   ```
### Customize a prompt
Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model, first pull it:
```
ollama pull llama2
```
Create a `Modelfile`:
```
FROM llama2
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```
Next, create and run the model:
```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](./examples) directory. For more information on working with a Modelfile, see the [Modelfile](./docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.
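
A minimal sketch of the workflow (the model name `mario` and the Modelfile contents here are illustrative, and the `ollama` invocation is guarded so the snippet is harmless on a machine without Ollama installed):

```shell
# Write a minimal Modelfile; FROM is the only required instruction
cat > Modelfile <<'EOF'
FROM llama2
SYSTEM """You are Mario from Super Mario Bros."""
EOF

# Build a model named "mario" from it (requires Ollama to be installed)
if command -v ollama >/dev/null 2>&1; then
  ollama create mario -f ./Modelfile
fi
```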
### Pull a model
```
ollama pull llama2
```
> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model
```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Pass in prompt as arguments

```
$ ollama run llama2 "summarize this file:" "$(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer
```
ollama list
```
### Start Ollama
`ollama serve` is used when you want to start Ollama without running the desktop application.
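
For example, to start the server in the background and probe it (the guard around the `ollama` calls is only there so the snippet is a no-op without Ollama installed):

```shell
# Start the server in the background, then probe its default port
if command -v ollama >/dev/null 2>&1; then
  ollama serve &
  sleep 2
  curl -s http://localhost:11434   # the API listens on port 11434 by default
  kill %1                          # stop the background server again
fi
```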
## Building

Install `cmake` and `go`:
```
brew install cmake
brew install go
```

Then generate dependencies and build:

```
go generate ./...
go build .
```

Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```

## REST API

> See the [API documentation](./docs/api.md) for all endpoints.
Ollama has an API for running and managing models. For example, to generate text from a model:
```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```
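
The response streams back as one JSON object per line, each carrying a fragment of the generated text in its `response` field (see the API documentation for the full schema). A sketch of stitching such a stream back together, with sample lines inlined here in place of a live server:

```shell
# Sample of the line-delimited JSON the endpoint streams back
stream='{"model":"llama2","response":"The","done":false}
{"model":"llama2","response":" sky","done":false}
{"model":"llama2","response":" is blue.","done":true}'

# Extract and concatenate the "response" fields to recover the full text
printf '%s\n' "$stream" | sed -n 's/.*"response":"\([^"]*\)".*/\1/p' | tr -d '\n'
```

In practice, the `curl` output above can be piped straight into the same filter.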

## Community Integrations

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Dumbar](https://github.com/JerrySievert/Dumbar)
- [Emacs client](https://github.com/zweifisch/ollama)