# API

## Endpoints

- [Generate a completion](#generate-a-completion)
- [Generate a chat completion](#generate-a-chat-completion)
- [Create a Model](#create-a-model)
- [List Local Models](#list-local-models)
- [Show Model Information](#show-model-information)
- [Copy a Model](#copy-a-model)
- [Delete a Model](#delete-a-model)
- [Pull a Model](#pull-a-model)
- [Push a Model](#push-a-model)
- [Generate Embeddings](#generate-embeddings)
- [List Running Models](#list-running-models)

## Conventions

### Model names

Model names follow a `model:tag` format, where `model` can have an optional namespace such as `example/model`. Some examples are `orca-mini:3b-q4_1` and `llama3:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.
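Client code sometimes needs to resolve names to this convention before comparing or displaying them. A minimal Python sketch (the helper is ours, not part of the API):

```python
def parse_model_name(name: str) -> tuple[str, str]:
    """Split a model name into (model, tag), defaulting the tag to "latest".

    Splitting at the first colon keeps an optional namespace such as
    "example/model" intact, since namespaces use "/" rather than ":".
    """
    model, sep, tag = name.partition(":")
    return model, tag if sep else "latest"

parse_model_name("orca-mini:3b-q4_1")  # ("orca-mini", "3b-q4_1")
parse_model_name("llama3")             # ("llama3", "latest")
```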

### Durations

All durations are returned in nanoseconds.
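To display a duration field such as `total_duration` in seconds, divide by 10^9. A one-line Python sketch:

```python
NS_PER_SECOND = 1_000_000_000

def ns_to_seconds(ns: int) -> float:
    """Convert a nanosecond duration from the API into seconds."""
    return ns / NS_PER_SECOND

ns_to_seconds(5043500667)  # ~5.04 seconds
```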

### Streaming responses

Certain endpoints stream responses as JSON objects and can optionally return non-streamed responses.

## Generate a completion

```shell
POST /api/generate
```

Generate a response for a given prompt with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for
- `images`: (optional) a list of base64-encoded images (for multimodal models such as `llava`)

Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system message (overrides what is defined in the `Modelfile`)
- `template`: the prompt template to use (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `raw`: if `true` no formatting will be applied to the prompt. You may choose to use the `raw` parameter if you are specifying a full templated prompt in your request to the API
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

#### JSON mode

Enable JSON mode by setting the `format` parameter to `json`. This will structure the response as a valid JSON object. See the JSON mode [example](#request-json-mode) below.

> Note: it's important to instruct the model to use JSON in the `prompt`. Otherwise, the model may generate large amounts of whitespace.

### Examples

#### Generate request (Streaming)

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "response": "The",
  "done": false
}
```
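Each object in the stream is a complete JSON value on its own line, so a client can accumulate the `response` fragments until `done` is `true`. A minimal Python sketch (the sample fragments below are illustrative, not captured output):

```python
import json

def collect_stream(lines):
    """Concatenate `response` fragments from a stream of JSON objects."""
    text = ""
    for line in lines:
        chunk = json.loads(line)
        text += chunk.get("response", "")
        if chunk.get("done"):
            break
    return text

stream = [
    '{"response": "The", "done": false}',
    '{"response": " sky", "done": false}',
    '{"response": "", "done": true}',
]
collect_stream(stream)  # "The sky"
```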

The final response in the stream also includes additional data about the generation:

- `total_duration`: time spent generating the response
- `load_duration`: time spent in nanoseconds loading the model
- `prompt_eval_count`: number of tokens in the prompt
- `prompt_eval_duration`: time spent in nanoseconds evaluating the prompt
- `eval_count`: number of tokens in the response
- `eval_duration`: time in nanoseconds spent generating the response
- `context`: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory
- `response`: empty if the response was streamed; if not streamed, this will contain the full response

To calculate how fast the response is generated in tokens per second (token/s), divide `eval_count` by `eval_duration` and multiply by `10^9`.

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 10706818083,
  "load_duration": 6338219291,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 130079000,
  "eval_count": 259,
  "eval_duration": 4232710000
}
```
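Applied to the example above, that works out to roughly 61 token/s. A quick Python sketch of the same arithmetic:

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Derive token throughput from the final response's statistics."""
    return eval_count / eval_duration_ns * 1e9

tokens_per_second(259, 4232710000)  # ~61.2 token/s
```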

#### Request (No streaming)

##### Request

A response can be received in one reply when streaming is off.

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

##### Response

If `stream` is set to `false`, the response will be a single JSON object:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 5043500667,
  "load_duration": 5025959,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 325953000,
  "eval_count": 290,
  "eval_duration": 4709213000
}
```

#### Request (JSON mode)

> When `format` is set to `json`, the output will always be a well-formed JSON object. It's important to also instruct the model to respond in JSON.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "What color is the sky at different times of the day? Respond using JSON",
  "format": "json",
  "stream": false
}'
```

##### Response

```json
{
  "model": "llama3",
  "created_at": "2023-11-09T21:07:55.186497Z",
  "response": "{\n\"morning\": {\n\"color\": \"blue\"\n},\n\"noon\": {\n\"color\": \"blue-gray\"\n},\n\"afternoon\": {\n\"color\": \"warm gray\"\n},\n\"evening\": {\n\"color\": \"orange\"\n}\n}\n",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 4648158584,
  "load_duration": 4071084,
  "prompt_eval_count": 36,
  "prompt_eval_duration": 439038000,
  "eval_count": 180,
  "eval_duration": 4196918000
}
```

The value of `response` will be a string containing JSON similar to:

```json
{
  "morning": {
    "color": "blue"
  },
  "noon": {
    "color": "blue-gray"
  },
  "afternoon": {
    "color": "warm gray"
  },
  "evening": {
    "color": "orange"
  }
}
```
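Note that `response` arrives as a string of JSON, not a parsed object, so the client still decodes it. A Python sketch using the example value:

```python
import json

# The `response` string from the JSON-mode example above.
raw = "{\n\"morning\": {\n\"color\": \"blue\"\n},\n\"noon\": {\n\"color\": \"blue-gray\"\n},\n\"afternoon\": {\n\"color\": \"warm gray\"\n},\n\"evening\": {\n\"color\": \"orange\"\n}\n}\n"

sky_colors = json.loads(raw)
sky_colors["evening"]["color"]  # "orange"
```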

#### Request (with images)

To submit images to multimodal models such as `llava` or `bakllava`, provide a list of base64-encoded `images`:

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt":"What is in this picture?",
  "stream": false,
  "images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDS
sxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPq
smVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC"]
}'
```
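The `images` values are plain base64 strings, so any standard library encoder works. A Python sketch (the helper and file path are ours, not part of the API):

```python
import base64

def encode_image(path: str) -> str:
    """Read an image file and return it as a base64 string for `images`."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# The same encoding applied to in-memory bytes:
base64.b64encode(b"\x89PNG").decode("ascii")  # "iVBORw=="
```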

##### Response

```json
{
  "model": "llava",
  "created_at": "2023-11-03T15:36:02.583064Z",
  "response": "A happy cartoon character, which is cute and cheerful.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 2938432250,
  "load_duration": 2559292,
  "prompt_eval_count": 1,
  "prompt_eval_duration": 2195557000,
  "eval_count": 44,
  "eval_duration": 736432000
}
```

#### Request (Raw Mode)

In some cases, you may wish to bypass the templating system and provide a full prompt. In this case, you can use the `raw` parameter to disable templating. Also note that raw mode will not return a context.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "[INST] why is the sky blue? [/INST]",
  "raw": true,
  "stream": false
}'
```

#### Request (Reproducible outputs)

For reproducible outputs, set `temperature` to 0 and `seed` to a number:

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "options": {
    "seed": 123,
    "temperature": 0
  }
}'
```

##### Response

```json
{
  "model": "mistral",
  "created_at": "2023-11-03T15:36:02.583064Z",
  "response": " The sky appears blue because of a phenomenon called Rayleigh scattering.",
  "done": true,
  "total_duration": 8493852375,
  "load_duration": 6589624375,
  "prompt_eval_count": 14,
  "prompt_eval_duration": 119039000,
  "eval_count": 110,
  "eval_duration": 1779061000
}
```

#### Generate request (With options)

If you want to set custom options for the model at runtime rather than in the Modelfile, you can do so with the `options` parameter. This example sets every available option, but you can set any of them individually and omit the ones you do not want to override.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": {
    "num_keep": 5,
    "seed": 42,
    "num_predict": 100,
    "top_k": 20,
    "top_p": 0.9,
    "tfs_z": 0.5,
    "typical_p": 0.7,
    "repeat_last_n": 33,
    "temperature": 0.8,
    "repeat_penalty": 1.2,
    "presence_penalty": 1.5,
    "frequency_penalty": 1.0,
    "mirostat": 1,
    "mirostat_tau": 0.8,
    "mirostat_eta": 0.6,
    "penalize_newline": true,
    "stop": ["\n", "user:"],
    "numa": false,
    "num_ctx": 1024,
    "num_batch": 2,
    "num_gpu": 1,
    "main_gpu": 0,
    "low_vram": false,
    "f16_kv": true,
    "vocab_only": false,
    "use_mmap": true,
    "use_mlock": false,
    "num_thread": 8
  }
}'
```

##### Response

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 4935886791,
  "load_duration": 534986708,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 107345000,
  "eval_count": 237,
  "eval_duration": 4289432000
}
```

#### Load a model

If an empty prompt is provided, the model will be loaded into memory.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3"
}'
```

##### Response

A single JSON object is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-12-18T19:52:07.071755Z",
  "response": "",
  "done": true
}
```

## Generate a chat completion

```shell
POST /api/chat
```

Generate the next message in a chat with a provided model. This is a streaming endpoint, so there will be a series of responses. Streaming can be disabled using `"stream": false`. The final response object will include statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `messages`: the messages of the chat; this can be used to keep a chat memory

The `message` object has the following fields:

- `role`: the role of the message, either `system`, `user` or `assistant`
- `content`: the content of the message
- `images` (optional): a list of images to include in the message (for multimodal models such as `llava`)
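The API itself is stateless: chat memory lives entirely in the `messages` array the client sends. A small Python sketch of that client-side bookkeeping (the helper name is ours; no request is made here):

```python
messages = [{"role": "user", "content": "why is the sky blue?"}]

def remember(messages, assistant_reply, next_user_message):
    """Extend the chat history with the model's reply and the next question."""
    messages.append({"role": "assistant", "content": assistant_reply})
    messages.append({"role": "user", "content": next_user_message})
    return messages

remember(messages, "due to rayleigh scattering.",
         "how is that different than mie scattering?")
# messages now holds three turns, ready to send as the next /api/chat payload
```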

Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

### Examples

#### Chat Request (Streaming)

##### Request

Send a chat message with a streaming response.

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ]
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "message": {
    "role": "assistant",
    "content": "The",
    "images": null
  },
  "done": false
}
```

Final response:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "done": true,
  "total_duration": 4883583458,
  "load_duration": 1334875,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 342546000,
  "eval_count": 282,
  "eval_duration": 4535599000
}
```

#### Chat request (No streaming)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ],
  "stream": false
}'
```

##### Response

```json
{
  "model": "registry.ollama.ai/library/llama3:latest",
  "created_at": "2023-12-12T14:13:43.416799Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How are you today?"
  },
  "done": true,
  "total_duration": 5191566416,
  "load_duration": 2154458,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 383809000,
  "eval_count": 298,
  "eval_duration": 4799921000
}
```

#### Chat request (With History)

Send a chat message with a conversation history. You can use this same approach to start the conversation using multi-shot or chain-of-thought prompting.

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    },
    {
      "role": "assistant",
      "content": "due to rayleigh scattering."
    },
    {
      "role": "user",
      "content": "how is that different than mie scattering?"
    }
  ]
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "message": {
    "role": "assistant",
    "content": "The"
  },
  "done": false
}
```

Final response:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "done": true,
  "total_duration": 8113331500,
  "load_duration": 6396458,
  "prompt_eval_count": 61,
  "prompt_eval_duration": 398801000,
  "eval_count": 468,
  "eval_duration": 7701267000
}
```

#### Chat request (with images)

##### Request

Send a chat message with an image included.

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llava",
  "messages": [
    {
      "role": "user",
      "content": "what is in this image?",
      "images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS
4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrU
wKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC"]
    }
  ]
}'
```

##### Response

```json
{
  "model": "llava",
  "created_at": "2023-12-13T22:42:50.203334Z",
  "message": {
    "role": "assistant",
    "content": " The image features a cute, little pig with an angry facial expression. It's wearing a heart on its shirt and is waving in the air. This scene appears to be part of a drawing or sketching project.",
    "images": null
  },
  "done": true,
  "total_duration": 1668506709,
  "load_duration": 1986209,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 359682000,
  "eval_count": 83,
  "eval_duration": 1303285000
}
```

#### Chat request (Reproducible outputs)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "options": {
    "seed": 101,
    "temperature": 0
  }
}'
```

##### Response

```json
{
  "model": "registry.ollama.ai/library/llama3:latest",
  "created_at": "2023-12-12T14:13:43.416799Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How are you today?"
  },
  "done": true,
  "total_duration": 5191566416,
  "load_duration": 2154458,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 383809000,
  "eval_count": 298,
  "eval_duration": 4799921000
}
```

## Create a Model

```shell
POST /api/create
```

Create a model from a [`Modelfile`](./modelfile.md). It is recommended to set `modelfile` to the content of the Modelfile rather than just setting `path`; this is a requirement for remote create. Remote model creation must also explicitly create any file blobs referenced by fields such as `FROM` and `ADAPTER` using [Create a Blob](#create-a-blob), and set those fields to the paths indicated in the response.

### Parameters

- `name`: name of the model to create
- `modelfile` (optional): contents of the Modelfile
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
- `path` (optional): path to the Modelfile

### Examples

#### Create a new model

Create a new model from a `Modelfile`.

##### Request
```shell
curl http://localhost:11434/api/create -d '{
  "name": "mario",
  "modelfile": "FROM llama3\nSYSTEM You are mario from Super Mario Bros."
}'
```

##### Response

A stream of JSON objects is returned. Notice that the final JSON object shows `"status": "success"`.

```json
{"status":"reading model metadata"}
{"status":"creating system layer"}
{"status":"using already created layer sha256:22f7f8ef5f4c791c1b03d7eb414399294764d7cc82c7e94aa81a1feb80a983a2"}
{"status":"using already created layer sha256:8c17c2ebb0ea011be9981cc3922db8ca8fa61e828c5d3f44cb6ae342bf80460b"}
{"status":"using already created layer sha256:7c23fb36d80141c4ab8cdbb61ee4790102ebd2bf7aeff414453177d4f2110e5d"}
{"status":"using already created layer sha256:2e0493f67d0c8c9c68a8aeacdf6a38a2151cb3c4c1d42accf296e19810527988"}
{"status":"using already created layer sha256:2759286baa875dc22de5394b4a925701b1896a7e3f8e53275c36f75a877a82c9"}
{"status":"writing layer sha256:df30045fe90f0d750db82a058109cecd6d4de9c90a3d75b19c09e5f64580bb42"}
{"status":"writing layer sha256:f18a68eb09bf925bb1b669490407c1b1251c5db98dc4d3d81f3088498ea55690"}
{"status":"writing manifest"}
{"status":"success"}
```

### Check if a Blob Exists

```shell
HEAD /api/blobs/:digest
```

Ensures that the file blob used for a `FROM` or `ADAPTER` field exists on the server. This checks your Ollama server, not Ollama.ai.

#### Query Parameters

- `digest`: the SHA256 digest of the blob

#### Examples

##### Request

```shell
curl -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```

##### Response

Return 200 OK if the blob exists, 404 Not Found if it does not.
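
The digest in the URL can be computed locally before asking the server. A minimal sketch; the file name `model.bin`, its contents, and the final `curl` call are illustrative placeholders:

```shell
# Compute a file's SHA256 digest in the "sha256:<hex>" form the blob
# endpoints expect. The demo file and its contents are placeholders.
printf 'hello' > model.bin
DIGEST="sha256:$(sha256sum model.bin | cut -d ' ' -f 1)"
echo "$DIGEST"

# With a running Ollama server, check whether the blob already exists:
#   curl -I "http://localhost:11434/api/blobs/$DIGEST"
```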

### Create a Blob

```shell
POST /api/blobs/:digest
```

Create a blob from a file on the server. Returns the server file path.
#### Query Parameters

- `digest`: the expected SHA256 digest of the file

#### Examples
##### Request

```shell
curl -T model.bin -X POST http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```

##### Response
Return 201 Created if the blob was successfully created, 400 Bad Request if the digest used is not expected.
## List Local Models

```shell
GET /api/tags
```

List models that are available locally.
### Examples

#### Request

```shell
curl http://localhost:11434/api/tags
```
#### Response
A single JSON object will be returned.

```json
{
  "models": [
    {
      "name": "codellama:13b",
      "modified_at": "2023-11-04T14:56:49.277302595-07:00",
      "size": 7365960935,
      "digest": "9f438cb9cd581fc025612d27f7c1a6669ff83a8bb0ed86c94fcf4c5440555697",
      "details": {
        "format": "gguf",
        "family": "llama",
        "families": null,
        "parameter_size": "13B",
        "quantization_level": "Q4_0"
      }
    },
    {
      "name": "llama3:latest",
      "modified_at": "2023-12-07T09:32:18.757212583-08:00",
      "size": 3825819519,
      "digest": "fe938a131f40e6f6d40083c9f0f430a515233eb2edaa6d72eb85c50d64f2300e",
      "details": {
        "format": "gguf",
        "family": "llama",
        "families": null,
        "parameter_size": "7B",
        "quantization_level": "Q4_0"
      }
    }
  ]
}
```
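
To consume this response in a script, the model names can be pulled out of the JSON. A rough sketch using only `grep` and `cut` on the layout shown above; with a live server you would set `RESPONSE=$(curl -s http://localhost:11434/api/tags)`, and a proper JSON parser such as `jq` is preferable:

```shell
# Sample /api/tags response, abridged from the example above.
RESPONSE='{"models":[{"name":"codellama:13b","size":7365960935},{"name":"llama3:latest","size":3825819519}]}'

# Print one model name per line; relies on the exact key layout shown.
NAMES=$(printf '%s' "$RESPONSE" | grep -o '"name":"[^"]*"' | cut -d '"' -f 4)
echo "$NAMES"
```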

## Show Model Information

```shell
POST /api/show
```

Show information about a model including details, modelfile, template, parameters, license, and system prompt.

### Parameters

- `name`: name of the model to show
### Examples

#### Request
```shell
curl http://localhost:11434/api/show -d '{
  "name": "llama3"
}'
```
#### Response

```json
{
  "modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM llava:latest\n\nFROM /Users/matt/.ollama/models/blobs/sha256:200765e1283640ffbd013184bf496e261032fa75b99498a9613be4e94d63ad52\nTEMPLATE \"\"\"{{ .System }}\nUSER: {{ .Prompt }}\nASSISTANT: \"\"\"\nPARAMETER num_ctx 4096\nPARAMETER stop \"\u003c/s\u003e\"\nPARAMETER stop \"USER:\"\nPARAMETER stop \"ASSISTANT:\"",
  "parameters": "num_ctx                        4096\nstop                           \u003c/s\u003e\nstop                           USER:\nstop                           ASSISTANT:",
  "template": "{{ .System }}\nUSER: {{ .Prompt }}\nASSISTANT: ",
  "details": {
    "format": "gguf",
    "family": "llama",
    "families": ["llama", "clip"],
    "parameter_size": "7B",
    "quantization_level": "Q4_0"
  }
}
```

## Copy a Model

```shell
POST /api/copy
```

Copy a model. Creates a model with another name from an existing model.
### Examples

#### Request
```shell
curl http://localhost:11434/api/copy -d '{
  "source": "llama3",
  "destination": "llama3-backup"
}'
```

#### Response

Returns a 200 OK if successful, or a 404 Not Found if the source model doesn't exist.

## Delete a Model
```shell
DELETE /api/delete
```

Delete a model and its data.
### Parameters
- `name`: model name to delete
### Examples

#### Request
```shell
curl -X DELETE http://localhost:11434/api/delete -d '{
  "name": "llama3:13b"
}'
```

#### Response

Returns a 200 OK if successful, 404 Not Found if the model to be deleted doesn't exist.
## Pull a Model
```shell
POST /api/pull
```

Download a model from the Ollama library. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.
### Parameters
- `name`: name of the model to pull
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pulling from your own library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
### Examples

#### Request
```shell
curl http://localhost:11434/api/pull -d '{
  "name": "llama3"
}'
```

#### Response
If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

The first object is the manifest:

```json
{
  "status": "pulling manifest"
}
```

Then there is a series of downloading responses. Until a download is completed, the `completed` key may not be included. The number of files to be downloaded depends on the number of layers specified in the manifest.

```json
{
  "status": "downloading digestname",
  "digest": "digestname",
  "total": 2142590208,
  "completed": 241970
}
```
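
The `completed` and `total` fields are byte counts, so per-file progress is simply their ratio. A small sketch using the sample values above:

```shell
# Byte counts from the sample downloading response above.
TOTAL=2142590208
COMPLETED=241970

# Integer percentage of this layer downloaded so far.
PCT=$(( COMPLETED * 100 / TOTAL ))
echo "${PCT}% of ${TOTAL} bytes"
```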

After all the files are downloaded, the final responses are:

```json
{
    "status": "verifying sha256 digest"
}
{
    "status": "writing manifest"
}
{
    "status": "removing any unused layers"
}
{
    "status": "success"
}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{
  "status": "success"
}
```

## Push a Model

```shell
POST /api/push
```

Upload a model to a model library. Requires registering for ollama.ai and adding a public key first.

### Parameters

- `name`: name of the model to push in the form of `<namespace>/<model>:<tag>`
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
### Examples

#### Request

```shell
curl http://localhost:11434/api/push -d '{
  "name": "mattw/pygmalion:latest"
}'
```

#### Response
If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

```json
{ "status": "retrieving manifest" }
```

and then:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Then there is a series of uploading responses:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Finally, when the upload is complete:

```json
{"status":"pushing manifest"}
{"status":"success"}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{ "status": "success" }
```

## Generate Embeddings

```shell
POST /api/embeddings
```

Generate embeddings from a model.

### Parameters

- `model`: name of model to generate embeddings from
- `prompt`: text to generate embeddings for

Advanced parameters:

- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

### Examples

#### Request

```shell
curl http://localhost:11434/api/embeddings -d '{
  "model": "all-minilm",
  "prompt": "Here is an article about llamas..."
}'
```

#### Response

```json
{
  "embedding": [
    0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
    0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
  ]
}
```
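
Embeddings are usually compared with cosine similarity. A toy sketch with 3-dimensional vectors; real embeddings from this endpoint have far more dimensions, and `awk` is used only to keep the example self-contained:

```shell
# Cosine similarity of two small example vectors: dot(a,b) / (|a| * |b|).
SIM=$(awk 'BEGIN {
  n = split("1 0 1", a); split("1 1 0", b)
  for (i = 1; i <= n; i++) { dot += a[i]*b[i]; na += a[i]*a[i]; nb += b[i]*b[i] }
  printf "%.3f", dot / (sqrt(na) * sqrt(nb))
}')
echo "$SIM"
```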

## List Running Models

```shell
GET /api/ps
```

List models that are currently loaded into memory.

If a model is loaded completely into system memory, `size_vram` is omitted from the response.

### Examples

#### Request

```shell
curl http://localhost:11434/api/ps
```

#### Response

A single JSON object will be returned.

```json
{
  "models": [
    {
      "name": "mistral:latest",
      "model": "mistral:latest",
      "size": 5137025024,
      "digest": "2ae6f6dd7a3dd734790bbbf58b8909a606e0e7e97e94b7604e0aa7ae4490e6d8",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "llama",
        "families": [
          "llama"
        ],
        "parameter_size": "7.2B",
        "quantization_level": "Q4_0"
      },
      "expires_at": "2024-06-04T14:38:31.83753-07:00",
      "size_vram": 5137025024
    }
  ]
}
```