# API

## Endpoints

- [Generate a completion](#generate-a-completion)
- [Generate a chat completion](#generate-a-chat-completion)
- [Create a Model](#create-a-model)
- [List Local Models](#list-local-models)
- [Show Model Information](#show-model-information)
- [Copy a Model](#copy-a-model)
- [Delete a Model](#delete-a-model)
- [Pull a Model](#pull-a-model)
- [Push a Model](#push-a-model)
- [Generate Embeddings](#generate-embeddings)
- [List Running Models](#list-running-models)

## Conventions

### Model names

Model names follow a `model:tag` format, where `model` can have an optional namespace such as `example/model`. Some examples are `orca-mini:3b-q4_1` and `llama3:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.
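A client that needs a normalized name can apply the default tag itself. A minimal sketch (`split_model_name` is a hypothetical helper, not part of the API):

```python
def split_model_name(name: str) -> tuple[str, str]:
    """Split a "model:tag" string, defaulting the tag to "latest"."""
    model, sep, tag = name.partition(":")
    return model, tag if sep else "latest"

print(split_model_name("orca-mini:3b-q4_1"))  # ('orca-mini', '3b-q4_1')
print(split_model_name("llama3"))             # ('llama3', 'latest')
```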

### Durations

All durations are returned in nanoseconds.
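For display, a client typically converts these to seconds. A minimal sketch using a `total_duration` value like the ones in the sample responses below:

```python
total_duration_ns = 5_043_500_667  # a sample total_duration, in nanoseconds

seconds = total_duration_ns / 1e9  # 1 second = 1e9 nanoseconds
print(f"{seconds:.2f}s")  # 5.04s
```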

### Streaming responses

Certain endpoints stream responses as JSON objects. Streaming can be disabled by providing `{"stream": false}` for these endpoints.
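Each streamed object arrives on its own line (newline-delimited JSON). A minimal sketch of assembling the chunks, shown here against a canned payload rather than a live connection — with a real request you would iterate over the HTTP response body line by line:

```python
import json

# Two streamed chunks followed by the final object, as raw NDJSON lines (example data).
raw_stream = (
    '{"model": "llama3", "response": "The", "done": false}\n'
    '{"model": "llama3", "response": " sky", "done": false}\n'
    '{"model": "llama3", "response": "", "done": true}\n'
)

text = ""
for line in raw_stream.splitlines():
    chunk = json.loads(line)   # each line is a complete JSON object
    text += chunk["response"]
    if chunk["done"]:          # the final object carries done: true
        break

print(text)  # The sky
```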

## Generate a completion

```shell
POST /api/generate
```

Generate a response for a given prompt with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for
- `images`: (optional) a list of base64-encoded images (for multimodal models such as `llava`)

Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system message to use (overrides what is defined in the `Modelfile`)
- `template`: the prompt template to use (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `raw`: if `true` no formatting will be applied to the prompt. You may choose to use the `raw` parameter if you are specifying a full templated prompt in your request to the API
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

#### JSON mode

Enable JSON mode by setting the `format` parameter to `json`. This will structure the response as a valid JSON object. See the JSON mode [example](#request-json-mode) below.

> Note: it's important to instruct the model to use JSON in the `prompt`. Otherwise, the model may generate large amounts of whitespace.
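Even in JSON mode the `response` field is a string, so the client still parses it. A minimal sketch with an example value:

```python
import json

# A `response` value as it might be returned in JSON mode (example data).
response = "{\"morning\": {\"color\": \"blue\"}}"

data = json.loads(response)  # parse the JSON-encoded string into a dict
print(data["morning"]["color"])  # blue
```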

### Examples

#### Generate request (Streaming)

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "response": "The",
  "done": false
}
```

The final response in the stream also includes additional data about the generation:

- `total_duration`: time spent generating the response
- `load_duration`: time spent in nanoseconds loading the model
- `prompt_eval_count`: number of tokens in the prompt
- `prompt_eval_duration`: time spent in nanoseconds evaluating the prompt
- `eval_count`: number of tokens in the response
- `eval_duration`: time in nanoseconds spent generating the response
- `context`: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory
- `response`: empty if the response was streamed; if not streamed, this contains the full response

To calculate how fast the response is generated in tokens per second (token/s), divide `eval_count` by `eval_duration` and multiply by `10^9`.

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 10706818083,
  "load_duration": 6338219291,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 130079000,
  "eval_count": 259,
  "eval_duration": 4232710000
}
```
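Applying that formula to the `eval_count` and `eval_duration` values in the sample above:

```python
eval_count = 259             # tokens in the response
eval_duration = 4_232_710_000  # nanoseconds spent generating it

tokens_per_second = eval_count / eval_duration * 1e9
print(f"{tokens_per_second:.1f} token/s")  # 61.2 token/s
```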

#### Request (No streaming)

##### Request

A response can be received in one reply when streaming is off.

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

##### Response

If `stream` is set to `false`, the response will be a single JSON object:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 5043500667,
  "load_duration": 5025959,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 325953000,
  "eval_count": 290,
  "eval_duration": 4709213000
}
```

#### Request (JSON mode)

> When `format` is set to `json`, the output will always be a well-formed JSON object. It's important to also instruct the model to respond in JSON.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "What color is the sky at different times of the day? Respond using JSON",
  "format": "json",
  "stream": false
}'
```

##### Response

```json
{
  "model": "llama3",
  "created_at": "2023-11-09T21:07:55.186497Z",
  "response": "{\n\"morning\": {\n\"color\": \"blue\"\n},\n\"noon\": {\n\"color\": \"blue-gray\"\n},\n\"afternoon\": {\n\"color\": \"warm gray\"\n},\n\"evening\": {\n\"color\": \"orange\"\n}\n}\n",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 4648158584,
  "load_duration": 4071084,
  "prompt_eval_count": 36,
  "prompt_eval_duration": 439038000,
  "eval_count": 180,
  "eval_duration": 4196918000
}
```

The value of `response` will be a string containing JSON similar to:

```json
{
  "morning": {
    "color": "blue"
  },
  "noon": {
    "color": "blue-gray"
  },
  "afternoon": {
    "color": "warm gray"
  },
  "evening": {
    "color": "orange"
  }
}
```

#### Request (with images)

To submit images to multimodal models such as `llava` or `bakllava`, provide a list of base64-encoded `images`:
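Any base64 encoder works for the `images` entries; for example, with Python's standard library (the bytes below are a stand-in — with a real file you would pass its raw contents, e.g. `open("picture.png", "rb").read()`):

```python
import base64

def encode_image(data: bytes) -> str:
    """Base64-encode raw image bytes for the `images` field."""
    return base64.b64encode(data).decode("ascii")

# Stand-in bytes; substitute the contents of an actual image file.
print(encode_image(b"\x89PNG..."))
```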

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt":"What is in this picture?",
  "stream": false,
  "images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDS
sxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPq
smVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC"]
}'
```

##### Response

```json
{
  "model": "llava",
  "created_at": "2023-11-03T15:36:02.583064Z",
  "response": "A happy cartoon character, which is cute and cheerful.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 2938432250,
  "load_duration": 2559292,
  "prompt_eval_count": 1,
  "prompt_eval_duration": 2195557000,
  "eval_count": 44,
  "eval_duration": 736432000
}
```

#### Request (Raw Mode)

In some cases, you may wish to bypass the templating system and provide a full prompt. In this case, you can use the `raw` parameter to disable templating. Also note that raw mode will not return a context.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "[INST] why is the sky blue? [/INST]",
  "raw": true,
  "stream": false
}'
```

#### Request (Reproducible outputs)

For reproducible outputs, set `seed` to a number:

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "options": {
    "seed": 123
  }
  }
}'
```

##### Response

```json
{
  "model": "mistral",
  "created_at": "2023-11-03T15:36:02.583064Z",
  "response": " The sky appears blue because of a phenomenon called Rayleigh scattering.",
  "done": true,
  "total_duration": 8493852375,
  "load_duration": 6589624375,
  "prompt_eval_count": 14,
  "prompt_eval_duration": 119039000,
  "eval_count": 110,
  "eval_duration": 1779061000
}
```

#### Generate request (With options)

If you want to set custom options for the model at runtime rather than in the Modelfile, you can do so with the `options` parameter. This example sets every available option, but you can set any of them individually and omit the ones you do not want to override.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": {
    "num_keep": 5,
    "seed": 42,
    "num_predict": 100,
    "top_k": 20,
    "top_p": 0.9,
    "tfs_z": 0.5,
    "typical_p": 0.7,
    "repeat_last_n": 33,
    "temperature": 0.8,
    "repeat_penalty": 1.2,
    "presence_penalty": 1.5,
    "frequency_penalty": 1.0,
    "mirostat": 1,
    "mirostat_tau": 0.8,
    "mirostat_eta": 0.6,
    "penalize_newline": true,
    "stop": ["\n", "user:"],
    "numa": false,
    "num_ctx": 1024,
    "num_batch": 2,
    "num_gpu": 1,
    "main_gpu": 0,
    "low_vram": false,
    "f16_kv": true,
    "vocab_only": false,
    "use_mmap": true,
    "use_mlock": false,
    "num_thread": 8
  }
}'
```

##### Response

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 4935886791,
  "load_duration": 534986708,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 107345000,
  "eval_count": 237,
  "eval_duration": 4289432000
}
```

#### Load a model

If an empty prompt is provided, the model will be loaded into memory.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3"
}'
```

##### Response

A single JSON object is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-12-18T19:52:07.071755Z",
  "response": "",
  "done": true
}
```

## Generate a chat completion

```shell
```shell
POST /api/chat
```

Generate the next message in a chat with a provided model. This is a streaming endpoint, so there will be a series of responses. Streaming can be disabled using `"stream": false`. The final response object will include statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `messages`: the messages of the chat; this can be used to keep a chat memory

The `message` object has the following fields:

- `role`: the role of the message, either `system`, `user` or `assistant`
- `content`: the content of the message
- `images` (optional): a list of images to include in the message (for multimodal models such as `llava`)

Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

### Examples

#### Chat Request (Streaming)

##### Request

Send a chat message with a streaming response.

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ]
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "message": {
    "role": "assistant",
    "content": "The",
    "images": null
  },
  },
  "done": false
}
```

Final response:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "done": true,
  "total_duration": 4883583458,
  "load_duration": 1334875,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 342546000,
  "eval_count": 282,
  "eval_duration": 4535599000
}
```

#### Chat request (No streaming)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ],
  "stream": false
}'
```

##### Response

```json
{
  "model": "registry.ollama.ai/library/llama3:latest",
  "created_at": "2023-12-12T14:13:43.416799Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How are you today?"
  },
  "done": true,
  "total_duration": 5191566416,
  "load_duration": 2154458,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 383809000,
  "eval_count": 298,
  "eval_duration": 4799921000
}
```

#### Chat request (With History)

Send a chat message with a conversation history. You can use this same approach to start the conversation using multi-shot or chain-of-thought prompting.

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    },
    {
      "role": "assistant",
      "content": "due to rayleigh scattering."
    },
    {
      "role": "user",
      "content": "how is that different than mie scattering?"
    }
  ]
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "message": {
    "role": "assistant",
    "content": "The"
  },
  "done": false
}
```

Final response:

```json
{
  "model": "llama3",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "done": true,
  "total_duration": 8113331500,
  "load_duration": 6396458,
  "prompt_eval_count": 61,
  "prompt_eval_duration": 398801000,
  "eval_count": 468,
  "eval_duration": 7701267000
}
```
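The history is supplied by the client on each request: append the assistant's reply and the next user turn to `messages` before the next call. A minimal sketch using the turns from the example above:

```python
# Start with the first user turn.
messages = [{"role": "user", "content": "why is the sky blue?"}]

# After each reply, append the assistant message, then the next user turn.
messages.append({"role": "assistant", "content": "due to rayleigh scattering."})
messages.append({"role": "user", "content": "how is that different than mie scattering?"})

print(len(messages))  # 3
```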

#### Chat request (with images)

##### Request

Send a chat message that includes an image.

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llava",
  "messages": [
    {
      "role": "user",
      "content": "what is in this image?",
      "images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS
4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrU
wKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC"]
    }
  ]
}'
```

##### Response
##### Response

```json
{
  "model": "llava",
  "created_at": "2023-12-13T22:42:50.203334Z",
  "message": {
    "role": "assistant",
    "content": " The image features a cute, little pig with an angry facial expression. It's wearing a heart on its shirt and is waving in the air. This scene appears to be part of a drawing or sketching project.",
    "images": null
  },
  "done": true,
  "total_duration": 1668506709,
  "load_duration": 1986209,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 359682000,
  "eval_count": 83,
  "eval_duration": 1303285000
}
```

#### Chat request (Reproducible outputs)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "options": {
    "seed": 101,
    "temperature": 0
  }
}'
```

##### Response

```json
{
  "model": "registry.ollama.ai/library/llama3:latest",
  "created_at": "2023-12-12T14:13:43.416799Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How are you today?"
  },
  "done": true,
  "total_duration": 5191566416,
  "load_duration": 2154458,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 383809000,
  "eval_count": 298,
  "eval_duration": 4799921000
}
```

## Create a Model

```shell
POST /api/create
```

Create a model from a [`Modelfile`](./modelfile.md). It is recommended to set `modelfile` to the content of the Modelfile rather than just set `path`. This is a requirement for remote create. Remote model creation must also create any file blobs referenced by fields such as `FROM` and `ADAPTER` explicitly with the server using [Create a Blob](#create-a-blob), and set the field values to the paths indicated in the response.

### Parameters

- `name`: name of the model to create
- `modelfile` (optional): contents of the Modelfile
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
- `path` (optional): path to the Modelfile

### Examples

#### Create a new model

Create a new model from a `Modelfile`.

##### Request

```shell
curl http://localhost:11434/api/create -d '{
  "name": "mario",
  "modelfile": "FROM llama3\nSYSTEM You are mario from Super Mario Bros."
}'
```

##### Response

A stream of JSON objects. Notice that the final JSON object shows a `"status": "success"`.

```json
{"status":"reading model metadata"}
{"status":"creating system layer"}
{"status":"using already created layer sha256:22f7f8ef5f4c791c1b03d7eb414399294764d7cc82c7e94aa81a1feb80a983a2"}
{"status":"using already created layer sha256:8c17c2ebb0ea011be9981cc3922db8ca8fa61e828c5d3f44cb6ae342bf80460b"}
{"status":"using already created layer sha256:7c23fb36d80141c4ab8cdbb61ee4790102ebd2bf7aeff414453177d4f2110e5d"}
{"status":"using already created layer sha256:2e0493f67d0c8c9c68a8aeacdf6a38a2151cb3c4c1d42accf296e19810527988"}
{"status":"using already created layer sha256:2759286baa875dc22de5394b4a925701b1896a7e3f8e53275c36f75a877a82c9"}
{"status":"writing layer sha256:df30045fe90f0d750db82a058109cecd6d4de9c90a3d75b19c09e5f64580bb42"}
{"status":"writing layer sha256:f18a68eb09bf925bb1b669490407c1b1251c5db98dc4d3d81f3088498ea55690"}
{"status":"writing manifest"}
{"status":"success"}
```
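Because the endpoint streams newline-delimited JSON, a client can watch the `status` field to know when creation has finished. A hedged sketch in Python (the function name and sample lines are illustrative; the stream is abbreviated from the one above):

```python
import json

def create_succeeded(ndjson_lines) -> bool:
    """Return True if the final status object in the stream reports success."""
    status = None
    for line in ndjson_lines:
        if line.strip():
            status = json.loads(line).get("status")
    return status == "success"

stream = [
    '{"status":"reading model metadata"}',
    '{"status":"writing manifest"}',
    '{"status":"success"}',
]
```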

### Check if a Blob Exists

```shell
HEAD /api/blobs/:digest
```

Ensures that the file blob used for a `FROM` or `ADAPTER` field exists on the server. This checks your Ollama server, not ollama.ai.

#### Query Parameters

- `digest`: the SHA256 digest of the blob

#### Examples

##### Request

```shell
curl -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```

##### Response

Returns 200 OK if the blob exists, or 404 Not Found if it does not.

### Create a Blob

```shell
POST /api/blobs/:digest
```

Create a blob from a file on the server. Returns the server file path.

#### Query Parameters

- `digest`: the expected SHA256 digest of the file

#### Examples

##### Request

```shell
curl -T model.bin -X POST http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```

##### Response

Returns 201 Created if the blob was successfully created, or 400 Bad Request if the digest does not match the uploaded file.
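Both blob endpoints address content by its SHA-256 digest, written as `sha256:<hex>`. A minimal sketch of computing it in Python (the helper name is illustrative):

```python
import hashlib

def blob_digest(data: bytes) -> str:
    """SHA-256 digest of the blob contents, in the sha256:<hex> form used in the URL."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# For a real upload you would hash the model file first, e.g.:
# with open("model.bin", "rb") as f:
#     digest = blob_digest(f.read())
```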

## List Local Models

```shell
GET /api/tags
```

List models that are available locally.

### Examples

#### Request

```shell
curl http://localhost:11434/api/tags
```

#### Response

A single JSON object will be returned.

```json
{
  "models": [
    {
      "name": "codellama:13b",
      "modified_at": "2023-11-04T14:56:49.277302595-07:00",
      "size": 7365960935,
      "digest": "9f438cb9cd581fc025612d27f7c1a6669ff83a8bb0ed86c94fcf4c5440555697",
      "details": {
        "format": "gguf",
        "family": "llama",
        "families": null,
        "parameter_size": "13B",
        "quantization_level": "Q4_0"
      }
    },
    {
      "name": "llama3:latest",
      "modified_at": "2023-12-07T09:32:18.757212583-08:00",
      "size": 3825819519,
      "digest": "fe938a131f40e6f6d40083c9f0f430a515233eb2edaa6d72eb85c50d64f2300e",
      "details": {
        "format": "gguf",
        "family": "llama",
        "families": null,
        "parameter_size": "7B",
        "quantization_level": "Q4_0"
      }
    }
  ]
}
```
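The `size` field is a raw byte count. A client that wants human-readable sizes can format it itself; a sketch using binary units (the rounding style is a display choice, not something the API defines):

```python
def human_size(num_bytes: int) -> str:
    """Format a byte count with binary units, e.g. 7365960935 -> '6.9 GiB'."""
    size = float(num_bytes)
    for unit in ["B", "KiB", "MiB", "GiB"]:
        if size < 1024:
            return f"{size:.1f} {unit}"
        size /= 1024
    return f"{size:.1f} TiB"
```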

## Show Model Information

```shell
POST /api/show
```

Show information about a model, including details, modelfile, template, parameters, license, and system prompt.

### Parameters

- `name`: name of the model to show
- `verbose`: (optional) if set to `true`, returns full data for verbose response fields

### Examples

#### Request

```shell
curl http://localhost:11434/api/show -d '{
  "name": "llama3"
}'
```

#### Response

```json
{
  "modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM llava:latest\n\nFROM /Users/matt/.ollama/models/blobs/sha256:200765e1283640ffbd013184bf496e261032fa75b99498a9613be4e94d63ad52\nTEMPLATE \"\"\"{{ .System }}\nUSER: {{ .Prompt }}\nASSISTANT: \"\"\"\nPARAMETER num_ctx 4096\nPARAMETER stop \"\u003c/s\u003e\"\nPARAMETER stop \"USER:\"\nPARAMETER stop \"ASSISTANT:\"",
  "parameters": "num_keep                       24\nstop                           \"<|start_header_id|>\"\nstop                           \"<|end_header_id|>\"\nstop                           \"<|eot_id|>\"",
  "template": "{{ if .System }}<|start_header_id|>system<|end_header_id|>\n\n{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>\n\n{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>\n\n{{ .Response }}<|eot_id|>",
  "details": {
    "parent_model": "",
    "format": "gguf",
    "family": "llama",
    "families": [
      "llama"
    ],
    "parameter_size": "8.0B",
    "quantization_level": "Q4_0"
  },
  "model_info": {
    "general.architecture": "llama",
    "general.file_type": 2,
    "general.parameter_count": 8030261248,
    "general.quantization_version": 2,
    "llama.attention.head_count": 32,
    "llama.attention.head_count_kv": 8,
    "llama.attention.layer_norm_rms_epsilon": 0.00001,
    "llama.block_count": 32,
    "llama.context_length": 8192,
    "llama.embedding_length": 4096,
    "llama.feed_forward_length": 14336,
    "llama.rope.dimension_count": 128,
    "llama.rope.freq_base": 500000,
    "llama.vocab_size": 128256,
    "tokenizer.ggml.bos_token_id": 128000,
    "tokenizer.ggml.eos_token_id": 128009,
    "tokenizer.ggml.merges": [],            // populates if `verbose=true`
    "tokenizer.ggml.model": "gpt2",
    "tokenizer.ggml.pre": "llama-bpe",
    "tokenizer.ggml.token_type": [],        // populates if `verbose=true`
    "tokenizer.ggml.tokens": []             // populates if `verbose=true`
  }
}
```

## Copy a Model

```shell
POST /api/copy
```

Copy a model. Creates a model with another name from an existing model.

### Examples

#### Request

```shell
curl http://localhost:11434/api/copy -d '{
  "source": "llama3",
  "destination": "llama3-backup"
}'
```

#### Response

Returns a 200 OK if successful, or a 404 Not Found if the source model doesn't exist.

## Delete a Model

```shell
DELETE /api/delete
```

Delete a model and its data.

### Parameters

- `name`: model name to delete

### Examples

#### Request

```shell
curl -X DELETE http://localhost:11434/api/delete -d '{
  "name": "llama3:13b"
}'
```

#### Response

Returns a 200 OK if successful, 404 Not Found if the model to be deleted doesn't exist.

## Pull a Model

```shell
POST /api/pull
```

Download a model from the ollama library. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.

### Parameters

- `name`: name of the model to pull
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pulling from your own library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects

### Examples

#### Request

```shell
curl http://localhost:11434/api/pull -d '{
  "name": "llama3"
}'
```

#### Response

If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

The first object is the manifest:

```json
{
  "status": "pulling manifest"
}
```

Then there is a series of downloading responses. Until any of the downloads is completed, the `completed` key may not be included. The number of files to be downloaded depends on the number of layers specified in the manifest.

```json
{
  "status": "downloading digestname",
  "digest": "digestname",
  "total": 2142590208,
  "completed": 241970
}
```

After all the files are downloaded, the final responses are:

```json
{
  "status": "verifying sha256 digest"
}
{
  "status": "writing manifest"
}
{
  "status": "removing any unused layers"
}
{
  "status": "success"
}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{
  "status": "success"
}
```

## Push a Model

```shell
POST /api/push
```

Upload a model to a model library. Requires registering for ollama.ai and adding a public key first.

### Parameters

- `name`: name of the model to push in the form of `<namespace>/<model>:<tag>`
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects

### Examples

#### Request

```shell
curl http://localhost:11434/api/push -d '{
  "name": "mattw/pygmalion:latest"
}'
```

#### Response

If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

```json
{ "status": "retrieving manifest" }
```

and then:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Then there is a series of uploading responses:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Finally, when the upload is complete:

```json
{"status":"pushing manifest"}
{"status":"success"}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{ "status": "success" }
```

## Generate Embeddings

```shell
POST /api/embeddings
```

Generate embeddings from a model.

### Parameters

- `model`: name of model to generate embeddings from
- `prompt`: text to generate embeddings for

Advanced parameters:

- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

### Examples

#### Request

```shell
curl http://localhost:11434/api/embeddings -d '{
  "model": "all-minilm",
  "prompt": "Here is an article about llamas..."
}'
```

#### Response

```json
{
  "embedding": [
    0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
    0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
  ]
}
```
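A common next step is comparing two embeddings with cosine similarity. A minimal dependency-free sketch (in practice a library such as NumPy is typical):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```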

## List Running Models

```shell
GET /api/ps
```

List models that are currently loaded into memory.

### Examples

#### Request

```shell
curl http://localhost:11434/api/ps
```

#### Response

A single JSON object will be returned.

```json
{
  "models": [
    {
      "name": "mistral:latest",
      "model": "mistral:latest",
      "size": 5137025024,
      "digest": "2ae6f6dd7a3dd734790bbbf58b8909a606e0e7e97e94b7604e0aa7ae4490e6d8",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "llama",
        "families": [
          "llama"
        ],
        "parameter_size": "7.2B",
        "quantization_level": "Q4_0"
      },
      "expires_at": "2024-06-04T14:38:31.83753-07:00",
      "size_vram": 5137025024
    }
  ]
}
```
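Comparing `size_vram` with `size` shows whether a model is fully resident in GPU memory; a ratio below 1.0 suggests part of the model is offloaded to system memory. A small sketch (the helper name is illustrative):

```python
def vram_fraction(model: dict) -> float:
    """Fraction of the loaded model held in VRAM; 1.0 means fully on the GPU."""
    return model["size_vram"] / model["size"]

# For the response above, mistral:latest has size == size_vram,
# so it is fully GPU-resident.
```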