# API

## Endpoints

- [Generate a completion](#generate-a-completion)
- [Generate a chat completion](#generate-a-chat-completion)
- [Create a Model](#create-a-model)
- [List Local Models](#list-local-models)
- [Show Model Information](#show-model-information)
- [Copy a Model](#copy-a-model)
- [Delete a Model](#delete-a-model)
- [Pull a Model](#pull-a-model)
- [Push a Model](#push-a-model)
- [Generate Embeddings](#generate-embeddings)
- [List Running Models](#list-running-models)

## Conventions

### Model names

Model names follow a `model:tag` format, where `model` can have an optional namespace such as `example/model`. Some examples are `orca-mini:3b-q4_1` and `llama3:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.
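
Client code sometimes needs to split a name into these parts. A minimal sketch of the convention in Python (the helper is illustrative, not part of any Ollama library):

```python
def parse_model_name(name: str) -> tuple[str, str]:
    """Split a `model:tag` name, defaulting the tag to `latest`."""
    # A namespace like `example/model` may contain no colon, so split
    # on the last colon and ignore colons inside a path-like prefix.
    base, sep, tag = name.rpartition(":")
    if not sep or "/" in tag:
        return name, "latest"
    return base, tag
```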

### Durations

All durations are returned in nanoseconds.
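
For display, a client typically converts these to seconds. A tiny sketch:

```python
NS_PER_S = 1_000_000_000

def ns_to_seconds(duration_ns: int) -> float:
    """Convert a nanosecond duration from an API response to seconds."""
    return duration_ns / NS_PER_S

# e.g. a total_duration of 5043500667 is roughly 5.04 seconds
```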

### Streaming responses

Certain endpoints stream responses as JSON objects. Streaming can be disabled by providing `{"stream": false}` for these endpoints.
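
Each streamed line is a standalone JSON object (newline-delimited JSON), so a client can parse line by line until it sees `"done": true`. A minimal sketch, assuming you already have an iterable of response lines:

```python
import json

def read_stream(lines):
    """Yield parsed objects from a newline-delimited JSON stream."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        yield obj
        if obj.get("done"):
            break
```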

## Generate a completion

```shell
POST /api/generate
```

Generate a response for a given prompt with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for
- `suffix`: the text after the model response
- `images`: (optional) a list of base64-encoded images (for multimodal models such as `llava`)

Advanced parameters (optional):

- `format`: the format to return a response in. Format can be `json` or a JSON schema
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system message (overrides what is defined in the `Modelfile`)
- `template`: the prompt template to use (overrides what is defined in the `Modelfile`)
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `raw`: if `true` no formatting will be applied to the prompt. You may choose to use the `raw` parameter if you are specifying a full templated prompt in your request to the API
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)
- `context` (deprecated): the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory

#### Structured outputs

Structured outputs are supported by providing a JSON schema in the `format` parameter. The model will generate a response that matches the schema. See the [structured outputs](#request-structured-outputs) example below.
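
For example, the schema can be built as an ordinary dictionary and sent as the `format` value, and the returned `response` string parsed back into an object. A sketch in Python; the field names are illustrative:

```python
import json

# A JSON schema to send as the `format` value (illustrative fields).
format_schema = {
    "type": "object",
    "properties": {
        "age": {"type": "integer"},
        "available": {"type": "boolean"},
    },
    "required": ["age", "available"],
}

# The API returns `response` as a string; parse it to recover the object.
response_text = '{"age": 22, "available": true}'
data = json.loads(response_text)
```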

#### JSON mode

Enable JSON mode by setting the `format` parameter to `json`. This will structure the response as a valid JSON object. See the JSON mode [example](#request-json-mode) below.

> [!IMPORTANT]
> It's important to instruct the model to use JSON in the `prompt`. Otherwise, the model may generate large amounts of whitespace.

### Examples

#### Generate request (Streaming)

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?"
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3.2",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "response": "The",
  "done": false
}
```

The final response in the stream also includes additional data about the generation:

- `total_duration`: time spent generating the response
- `load_duration`: time spent in nanoseconds loading the model
- `prompt_eval_count`: number of tokens in the prompt
- `prompt_eval_duration`: time spent in nanoseconds evaluating the prompt
- `eval_count`: number of tokens in the response
- `eval_duration`: time in nanoseconds spent generating the response
- `context`: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory
- `response`: empty if the response was streamed; if not streamed, this will contain the full response

To calculate how fast the response is generated in tokens per second (token/s), divide `eval_count` by `eval_duration` and multiply by `10^9`.
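
Plugging in the values from a sample final response:

```python
# Values taken from a sample final response.
eval_count = 259            # tokens generated
eval_duration = 4232710000  # nanoseconds spent generating them

tokens_per_second = eval_count / eval_duration * 10**9
# about 61 tokens/s for this response
```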

```json
{
  "model": "llama3.2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 10706818083,
  "load_duration": 6338219291,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 130079000,
  "eval_count": 259,
  "eval_duration": 4232710000
}
```

#### Request (No streaming)

##### Request

A response can be received in one reply when streaming is off.

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

##### Response

If `stream` is set to `false`, the response will be a single JSON object:

```json
{
  "model": "llama3.2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 5043500667,
  "load_duration": 5025959,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 325953000,
  "eval_count": 290,
  "eval_duration": 4709213000
}
```

#### Request (with suffix)

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "codellama:code",
  "prompt": "def compute_gcd(a, b):",
  "suffix": "    return result",
  "options": {
    "temperature": 0
  },
  "stream": false
}'
```

##### Response

```json
{
  "model": "codellama:code",
  "created_at": "2024-07-22T20:47:51.147561Z",
  "response": "\n  if a == 0:\n    return b\n  else:\n    return compute_gcd(b % a, a)\n\ndef compute_lcm(a, b):\n  result = (a * b) / compute_gcd(a, b)\n",
  "done": true,
  "done_reason": "stop",
  "context": [...],
  "total_duration": 1162761250,
  "load_duration": 6683708,
  "prompt_eval_count": 17,
  "prompt_eval_duration": 201222000,
  "eval_count": 63,
  "eval_duration": 953997000
}
```

#### Request (Structured outputs)

##### Request

```shell
curl -X POST http://localhost:11434/api/generate -H "Content-Type: application/json" -d '{
  "model": "llama3.1:8b",
  "prompt": "Ollama is 22 years old and is busy saving the world. Respond using JSON",
  "stream": false,
  "format": {
    "type": "object",
    "properties": {
      "age": {
        "type": "integer"
      },
      "available": {
        "type": "boolean"
      }
    },
    "required": [
      "age",
      "available"
    ]
  }
}'
```

##### Response

```json
{
  "model": "llama3.1:8b",
  "created_at": "2024-12-06T00:48:09.983619Z",
  "response": "{\n  \"age\": 22,\n  \"available\": true\n}",
  "done": true,
  "done_reason": "stop",
  "context": [1, 2, 3],
  "total_duration": 1075509083,
  "load_duration": 567678166,
  "prompt_eval_count": 28,
  "prompt_eval_duration": 236000000,
  "eval_count": 16,
  "eval_duration": 269000000
}
```

#### Request (JSON mode)

> [!IMPORTANT]
> When `format` is set to `json`, the output will always be a well-formed JSON object. It's important to also instruct the model to respond in JSON.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "What color is the sky at different times of the day? Respond using JSON",
  "format": "json",
  "stream": false
}'
```

##### Response

```json
{
  "model": "llama3.2",
  "created_at": "2023-11-09T21:07:55.186497Z",
  "response": "{\n\"morning\": {\n\"color\": \"blue\"\n},\n\"noon\": {\n\"color\": \"blue-gray\"\n},\n\"afternoon\": {\n\"color\": \"warm gray\"\n},\n\"evening\": {\n\"color\": \"orange\"\n}\n}\n",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 4648158584,
  "load_duration": 4071084,
  "prompt_eval_count": 36,
  "prompt_eval_duration": 439038000,
  "eval_count": 180,
  "eval_duration": 4196918000
}
```

The value of `response` will be a string containing JSON similar to:

```json
{
  "morning": {
    "color": "blue"
  },
  "noon": {
    "color": "blue-gray"
  },
  "afternoon": {
    "color": "warm gray"
  },
  "evening": {
    "color": "orange"
  }
}
```

#### Request (with images)

To submit images to multimodal models such as `llava` or `bakllava`, provide a list of base64-encoded `images`:

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt":"What is in this picture?",
  "stream": false,
  "images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDS
sxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPq
smVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC"]
}'
```

##### Response

```json
{
  "model": "llava",
  "created_at": "2023-11-03T15:36:02.583064Z",
  "response": "A happy cartoon character, which is cute and cheerful.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 2938432250,
  "load_duration": 2559292,
  "prompt_eval_count": 1,
  "prompt_eval_duration": 2195557000,
  "eval_count": 44,
  "eval_duration": 736432000
}
```
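
The base64 payload in these requests can be produced from an image file. A sketch (the function name and path handling are our own, not part of the API):

```python
import base64

def encode_image(path: str) -> str:
    """Read an image file and return its base64 encoding as a string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")
```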

#### Request (Raw mode)

In some cases, you may wish to bypass the templating system and provide a full prompt. In this case, you can use the `raw` parameter to disable templating. Also note that raw mode will not return a context.
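
In raw mode, the client is responsible for the full template. A sketch of wrapping a message in Mistral-style instruction markers, matching the request below (the helper is illustrative):

```python
def mistral_prompt(user_message: str) -> str:
    """Wrap a user message in Mistral's [INST] instruction markers."""
    return f"[INST] {user_message} [/INST]"
```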

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "[INST] why is the sky blue? [/INST]",
  "raw": true,
  "stream": false
}'
```

#### Request (Reproducible outputs)

For reproducible outputs, set `seed` to a number:

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "options": {
    "seed": 123
  }
}'
```

##### Response

```json
{
  "model": "mistral",
  "created_at": "2023-11-03T15:36:02.583064Z",
  "response": " The sky appears blue because of a phenomenon called Rayleigh scattering.",
  "done": true,
  "total_duration": 8493852375,
  "load_duration": 6589624375,
  "prompt_eval_count": 14,
  "prompt_eval_duration": 119039000,
  "eval_count": 110,
  "eval_duration": 1779061000
}
```

#### Generate request (With options)

If you want to set custom options for the model at runtime rather than in the Modelfile, you can do so with the `options` parameter. This example sets every available option, but you can set any of them individually and omit the ones you do not want to override.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": {
    "num_keep": 5,
    "seed": 42,
    "num_predict": 100,
    "top_k": 20,
    "top_p": 0.9,
    "min_p": 0.0,
    "typical_p": 0.7,
    "repeat_last_n": 33,
    "temperature": 0.8,
    "repeat_penalty": 1.2,
    "presence_penalty": 1.5,
    "frequency_penalty": 1.0,
    "mirostat": 1,
    "mirostat_tau": 0.8,
    "mirostat_eta": 0.6,
    "penalize_newline": true,
    "stop": ["\n", "user:"],
    "numa": false,
    "num_ctx": 1024,
    "num_batch": 2,
    "num_gpu": 1,
    "main_gpu": 0,
    "low_vram": false,
    "vocab_only": false,
    "use_mmap": true,
    "use_mlock": false,
    "num_thread": 8
  }
}'
```

##### Response

```json
{
  "model": "llama3.2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 4935886791,
  "load_duration": 534986708,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 107345000,
  "eval_count": 237,
  "eval_duration": 4289432000
}
```

#### Load a model

If an empty prompt is provided, the model will be loaded into memory.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2"
}'
```

##### Response

A single JSON object is returned:

```json
{
  "model": "llama3.2",
  "created_at": "2023-12-18T19:52:07.071755Z",
  "response": "",
  "done": true
}
```

#### Unload a model

If an empty prompt is provided and the `keep_alive` parameter is set to `0`, a model will be unloaded from memory.

##### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "keep_alive": 0
}'
```

##### Response

A single JSON object is returned:

```json
{
  "model": "llama3.2",
  "created_at": "2024-09-12T03:54:03.516566Z",
  "response": "",
  "done": true,
  "done_reason": "unload"
}
```

## Generate a chat completion

```shell
POST /api/chat
```

Generate the next message in a chat with a provided model. This is a streaming endpoint, so there will be a series of responses. Streaming can be disabled using `"stream": false`. The final response object will include statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `messages`: the messages of the chat; this can be used to keep a chat memory
- `tools`: tools for the model to use if supported. Requires `stream` to be set to `false`

The `message` object has the following fields:

- `role`: the role of the message, either `system`, `user`, `assistant`, or `tool`
- `content`: the content of the message
- `images` (optional): a list of images to include in the message (for multimodal models such as `llava`)
- `tool_calls` (optional): a list of tools the model wants to use

Advanced parameters (optional):

- `format`: the format to return a response in. Format can be `json` or a JSON schema.
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

### Structured outputs

Structured outputs are supported by providing a JSON schema in the `format` parameter. The model will generate a response that matches the schema. See the [Chat request (Structured outputs)](#chat-request-structured-outputs) example below.

### Examples

#### Chat request (Streaming)

##### Request

Send a chat message with a streaming response.

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ]
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3.2",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "message": {
    "role": "assistant",
    "content": "The",
    "images": null
  },
  "done": false
}
```

Final response:

```json
{
  "model": "llama3.2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "done": true,
  "total_duration": 4883583458,
  "load_duration": 1334875,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 342546000,
  "eval_count": 282,
  "eval_duration": 4535599000
}
```

#### Chat request (No streaming)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ],
  "stream": false
}'
```

##### Response

```json
{
  "model": "llama3.2",
  "created_at": "2023-12-12T14:13:43.416799Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How are you today?"
  },
  "done": true,
  "total_duration": 5191566416,
  "load_duration": 2154458,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 383809000,
  "eval_count": 298,
  "eval_duration": 4799921000
}
```

#### Chat request (Structured outputs)

##### Request

```shell
curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Ollama is 22 years old and busy saving the world. Return a JSON object with the age and availability."}],
  "stream": false,
  "format": {
    "type": "object",
    "properties": {
      "age": {
        "type": "integer"
      },
      "available": {
        "type": "boolean"
      }
    },
    "required": [
      "age",
      "available"
    ]
  },
  "options": {
    "temperature": 0
  }
}'
```

##### Response

```json
{
  "model": "llama3.1",
  "created_at": "2024-12-06T00:46:58.265747Z",
  "message": { "role": "assistant", "content": "{\"age\": 22, \"available\": false}" },
  "done_reason": "stop",
  "done": true,
  "total_duration": 2254970291,
  "load_duration": 574751416,
  "prompt_eval_count": 34,
  "prompt_eval_duration": 1502000000,
  "eval_count": 12,
  "eval_duration": 175000000
}
```

#### Chat request (With History)

Send a chat message with a conversation history. You can use this same approach to start the conversation using multi-shot or chain-of-thought prompting.
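
The server does not retain the conversation between calls, so the client keeps the history: append each assistant reply and the next user turn, then resend the whole list. A sketch (the helper name is our own):

```python
def add_turn(messages, role, content):
    """Append one chat turn to the client-side history."""
    messages.append({"role": role, "content": content})
    return messages

history = []
add_turn(history, "user", "why is the sky blue?")
add_turn(history, "assistant", "due to rayleigh scattering.")
add_turn(history, "user", "how is that different than mie scattering?")
# `history` is now the `messages` array for the next /api/chat request
```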

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    },
    {
      "role": "assistant",
      "content": "due to rayleigh scattering."
    },
    {
      "role": "user",
      "content": "how is that different than mie scattering?"
    }
  ]
}'
```

##### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama3.2",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "message": {
    "role": "assistant",
    "content": "The"
  },
  "done": false
}
```

Final response:

```json
{
  "model": "llama3.2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "done": true,
  "total_duration": 8113331500,
  "load_duration": 6396458,
  "prompt_eval_count": 61,
  "prompt_eval_duration": 398801000,
  "eval_count": 468,
  "eval_duration": 7701267000
}
```

#### Chat request (with images)

##### Request

Send a chat message with images. The images should be provided as an array, with the individual images encoded in Base64.

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llava",
  "messages": [
    {
      "role": "user",
      "content": "what is in this image?",
      "images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS
4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrU
wKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC"]
    }
  ]
}'
```
```

##### Response

```json
{
  "model": "llava",
  "created_at": "2023-12-13T22:42:50.203334Z",
  "message": {
    "role": "assistant",
    "content": " The image features a cute, little pig with an angry facial expression. It's wearing a heart on its shirt and is waving in the air. This scene appears to be part of a drawing or sketching project.",
    "images": null
  },
  "done": true,
  "total_duration": 1668506709,
  "load_duration": 1986209,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 359682000,
  "eval_count": 83,
  "eval_duration": 1303285000
}
```

#### Chat request (Reproducible outputs)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "options": {
    "seed": 101,
    "temperature": 0
  }
}'
```

##### Response

```json
{
  "model": "llama3.2",
  "created_at": "2023-12-12T14:13:43.416799Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How are you today?"
  },
  "done": true,
  "total_duration": 5191566416,
  "load_duration": 2154458,
  "prompt_eval_count": 26,
  "prompt_eval_duration": 383809000,
  "eval_count": 298,
  "eval_duration": 4799921000
}
```

#### Chat request (with tools)

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather today in Paris?"
    }
  ],
  "stream": false,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The location to get the weather for, e.g. San Francisco, CA"
            },
            "format": {
              "type": "string",
              "description": "The format to return the weather in, e.g. 'celsius' or 'fahrenheit'",
              "enum": ["celsius", "fahrenheit"]
            }
          },
          "required": ["location", "format"]
        }
      }
    }
  ]
}'
```

##### Response

```json
{
  "model": "llama3.2",
  "created_at": "2024-07-22T20:33:28.123648Z",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "function": {
          "name": "get_current_weather",
          "arguments": {
            "format": "celsius",
            "location": "Paris, FR"
          }
        }
      }
    ]
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 885095291,
  "load_duration": 3753500,
  "prompt_eval_count": 122,
  "prompt_eval_duration": 328493000,
  "eval_count": 33,
  "eval_duration": 552222000
}
```
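
When the model returns `tool_calls`, the client is expected to run the tool itself and send the result back so the model can compose a final answer. A minimal sketch of that follow-up request (the `tool` message content here is a made-up result, not output from a real weather service):

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "stream": false,
  "messages": [
    { "role": "user", "content": "What is the weather today in Paris?" },
    { "role": "tool", "content": "22 degrees celsius" }
  ]
}'
```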

#### Load a model

If the messages array is empty, the model will be loaded into memory.

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": []
}'
```

##### Response
```json
{
  "model": "llama3.2",
  "created_at": "2024-09-12T21:17:29.110811Z",
  "message": {
    "role": "assistant",
    "content": ""
  },
  "done_reason": "load",
  "done": true
}
```

#### Unload a model

If the messages array is empty and the `keep_alive` parameter is set to `0`, a model will be unloaded from memory.

##### Request

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [],
  "keep_alive": 0
}'
```

##### Response

A single JSON object is returned:

```json
{
  "model": "llama3.2",
  "created_at": "2024-09-12T21:33:17.547535Z",
  "message": {
    "role": "assistant",
    "content": ""
  },
  "done_reason": "unload",
  "done": true
}
```
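
`keep_alive` is not limited to `0`: on any generate or chat request it also accepts a duration string such as `"10m"`, or `-1` to keep the model loaded indefinitely (behavior in recent Ollama versions; check your server version). For example, to load a model and keep it resident:

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [],
  "keep_alive": -1
}'
```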

## Create a Model

```shell
POST /api/create
```

Create a model from a [`Modelfile`](./modelfile.md). It is recommended to set `modelfile` to the content of the Modelfile rather than just setting `path`. This is a requirement for remote create. Remote model creation must also explicitly create any file blobs referenced by fields such as `FROM` and `ADAPTER` using [Create a Blob](#create-a-blob), setting those field values to the path returned in the response.

### Parameters

- `model`: name of the model to create
- `modelfile` (optional): contents of the Modelfile
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
- `path` (optional): path to the Modelfile
- `quantize` (optional): quantize a non-quantized (e.g. float16) model

#### Quantization types

| Type | Recommended |
| --- | :-: |
| q2_K | |
| q3_K_L | |
| q3_K_M | |
| q3_K_S | |
| q4_0 | |
| q4_1 | |
| q4_K_M | * |
| q4_K_S | |
| q5_0 | |
| q5_1 | |
| q5_K_M | |
| q5_K_S | |
| q6_K | |
| q8_0 | * |

### Examples

#### Create a new model

Create a new model from a `Modelfile`.

##### Request

```shell
curl http://localhost:11434/api/create -d '{
  "model": "mario",
  "modelfile": "FROM llama3\nSYSTEM You are mario from Super Mario Bros."
}'
```

##### Response

A stream of JSON objects is returned:

```json
{"status":"reading model metadata"}
{"status":"creating system layer"}
{"status":"using already created layer sha256:22f7f8ef5f4c791c1b03d7eb414399294764d7cc82c7e94aa81a1feb80a983a2"}
{"status":"using already created layer sha256:8c17c2ebb0ea011be9981cc3922db8ca8fa61e828c5d3f44cb6ae342bf80460b"}
{"status":"using already created layer sha256:7c23fb36d80141c4ab8cdbb61ee4790102ebd2bf7aeff414453177d4f2110e5d"}
{"status":"using already created layer sha256:2e0493f67d0c8c9c68a8aeacdf6a38a2151cb3c4c1d42accf296e19810527988"}
{"status":"using already created layer sha256:2759286baa875dc22de5394b4a925701b1896a7e3f8e53275c36f75a877a82c9"}
{"status":"writing layer sha256:df30045fe90f0d750db82a058109cecd6d4de9c90a3d75b19c09e5f64580bb42"}
{"status":"writing layer sha256:f18a68eb09bf925bb1b669490407c1b1251c5db98dc4d3d81f3088498ea55690"}
{"status":"writing manifest"}
{"status":"success"}
```

#### Quantize a model

Quantize a non-quantized model.

##### Request

```shell
curl http://localhost:11434/api/create -d '{
  "model": "llama3.1:quantized",
  "modelfile": "FROM llama3.1:8b-instruct-fp16",
  "quantize": "q4_K_M"
}'
```

##### Response

A stream of JSON objects is returned:

```json
{"status":"quantizing F16 model to Q4_K_M"}
{"status":"creating new layer sha256:667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29"}
{"status":"using existing layer sha256:11ce4ee3e170f6adebac9a991c22e22ab3f8530e154ee669954c4bc73061c258"}
{"status":"using existing layer sha256:0ba8f0e314b4264dfd19df045cde9d4c394a52474bf92ed6a3de22a4ca31a177"}
{"status":"using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb"}
{"status":"creating new layer sha256:455f34728c9b5dd3376378bfb809ee166c145b0b4c1f1a6feca069055066ef9a"}
{"status":"writing manifest"}
{"status":"success"}
```


### Check if a Blob Exists

```shell
HEAD /api/blobs/:digest
```

Ensures that the file blob used for a `FROM` or `ADAPTER` field exists on the server. This checks your Ollama server, not ollama.com.
#### Query Parameters

- `digest`: the SHA256 digest of the blob

#### Examples

##### Request

```shell
curl -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```

##### Response

Returns 200 OK if the blob exists, or 404 Not Found if it does not.

### Create a Blob

```shell
POST /api/blobs/:digest
```

Create a blob from a file on the server. Returns the server file path.
#### Query Parameters

- `digest`: the expected SHA256 digest of the file

#### Examples
##### Request

```shell
curl -T model.bin -X POST http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```

##### Response
Returns 201 Created if the blob was successfully created, or 400 Bad Request if the digest used is not expected.
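
The digest in the URL must match the actual SHA256 of the uploaded file, so it is usually computed locally first. A sketch using coreutils (`sha256sum` on Linux; on macOS, `shasum -a 256` produces the same output format):

```shell
# Compute the blob digest locally, then upload the file under that digest
DIGEST=$(sha256sum model.bin | cut -d ' ' -f 1)
curl -T model.bin -X POST "http://localhost:11434/api/blobs/sha256:$DIGEST"
```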
## List Local Models
```shell
GET /api/tags
```

List models that are available locally.
### Examples

#### Request
```shell
curl http://localhost:11434/api/tags
```
#### Response
A single JSON object will be returned.

```json
{
  "models": [
    {
      "name": "codellama:13b",
      "modified_at": "2023-11-04T14:56:49.277302595-07:00",
      "size": 7365960935,
      "digest": "9f438cb9cd581fc025612d27f7c1a6669ff83a8bb0ed86c94fcf4c5440555697",
      "details": {
        "format": "gguf",
        "family": "llama",
        "families": null,
        "parameter_size": "13B",
        "quantization_level": "Q4_0"
      }
    },
    {
      "name": "llama3:latest",
      "modified_at": "2023-12-07T09:32:18.757212583-08:00",
      "size": 3825819519,
      "digest": "fe938a131f40e6f6d40083c9f0f430a515233eb2edaa6d72eb85c50d64f2300e",
      "details": {
        "format": "gguf",
        "family": "llama",
        "families": null,
        "parameter_size": "7B",
        "quantization_level": "Q4_0"
      }
    }
  ]
}
```
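
To pull just the model names out of the response (assuming `jq` is installed):

```shell
curl -s http://localhost:11434/api/tags | jq -r '.models[].name'
```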

## Show Model Information

```shell
POST /api/show
```

Show information about a model including details, modelfile, template, parameters, license, system prompt.

### Parameters

- `model`: name of the model to show
- `verbose`: (optional) if set to `true`, returns full data for verbose response fields
### Examples

#### Request
```shell
curl http://localhost:11434/api/show -d '{
  "model": "llama3.2"
}'
```
#### Response

```json
{
  "modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM llava:latest\n\nFROM /Users/matt/.ollama/models/blobs/sha256:200765e1283640ffbd013184bf496e261032fa75b99498a9613be4e94d63ad52\nTEMPLATE \"\"\"{{ .System }}\nUSER: {{ .Prompt }}\nASSISTANT: \"\"\"\nPARAMETER num_ctx 4096\nPARAMETER stop \"\u003c/s\u003e\"\nPARAMETER stop \"USER:\"\nPARAMETER stop \"ASSISTANT:\"",
  "parameters": "num_keep                       24\nstop                           \"<|start_header_id|>\"\nstop                           \"<|end_header_id|>\"\nstop                           \"<|eot_id|>\"",
  "template": "{{ if .System }}<|start_header_id|>system<|end_header_id|>\n\n{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>\n\n{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>\n\n{{ .Response }}<|eot_id|>",
  "details": {
    "parent_model": "",
    "format": "gguf",
    "family": "llama",
    "families": [
      "llama"
    ],
    "parameter_size": "8.0B",
    "quantization_level": "Q4_0"
  },
  "model_info": {
    "general.architecture": "llama",
    "general.file_type": 2,
    "general.parameter_count": 8030261248,
    "general.quantization_version": 2,
    "llama.attention.head_count": 32,
    "llama.attention.head_count_kv": 8,
    "llama.attention.layer_norm_rms_epsilon": 0.00001,
    "llama.block_count": 32,
    "llama.context_length": 8192,
    "llama.embedding_length": 4096,
    "llama.feed_forward_length": 14336,
    "llama.rope.dimension_count": 128,
    "llama.rope.freq_base": 500000,
    "llama.vocab_size": 128256,
    "tokenizer.ggml.bos_token_id": 128000,
    "tokenizer.ggml.eos_token_id": 128009,
    "tokenizer.ggml.merges": [],            // populates if `verbose=true`
    "tokenizer.ggml.model": "gpt2",
    "tokenizer.ggml.pre": "llama-bpe",
    "tokenizer.ggml.token_type": [],        // populates if `verbose=true`
    "tokenizer.ggml.tokens": []             // populates if `verbose=true`
  }
}
```
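
Individual fields can be extracted with a JSON processor; for example, reading the model's context length (assuming `jq` is installed; the exact keys under `model_info` vary with the model architecture):

```shell
curl -s http://localhost:11434/api/show -d '{"model": "llama3.2"}' \
  | jq '.model_info["llama.context_length"]'
```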

## Copy a Model

```shell
POST /api/copy
```
Copy a model. Creates a model with another name from an existing model.
### Examples

#### Request
```shell
curl http://localhost:11434/api/copy -d '{
  "source": "llama3.2",
  "destination": "llama3-backup"
}'
```

#### Response

Returns a 200 OK if successful, or a 404 Not Found if the source model doesn't exist.
## Delete a Model
```shell
DELETE /api/delete
```

Delete a model and its data.
### Parameters
- `model`: model name to delete
### Examples

#### Request
```shell
curl -X DELETE http://localhost:11434/api/delete -d '{
  "model": "llama3:13b"
}'
```

#### Response
Returns a 200 OK if successful, or a 404 Not Found if the model to be deleted doesn't exist.
## Pull a Model
```shell
POST /api/pull
```

Download a model from the ollama library. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.
### Parameters
- `model`: name of the model to pull
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pulling from your own library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
### Examples

#### Request
```shell
curl http://localhost:11434/api/pull -d '{
  "model": "llama3.2"
}'
```

#### Response
If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

The first object is the manifest:

```json
{
  "status": "pulling manifest"
}
```

Then there is a series of downloading responses. Until a download has started, the `completed` key may not be included. The number of files to be downloaded depends on the number of layers specified in the manifest.

```json
{
  "status": "downloading digestname",
  "digest": "digestname",
  "total": 2142590208,
  "completed": 241970
}
```

After all the files are downloaded, the final responses are:

```json
{
    "status": "verifying sha256 digest"
}
{
    "status": "writing manifest"
}
{
    "status": "removing any unused layers"
}
{
    "status": "success"
}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{
  "status": "success"
}
```
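
Since each streamed object carries `total` and `completed`, download progress can be computed client-side. A sketch that prints a percentage per update (assumes `jq` is installed):

```shell
curl -s http://localhost:11434/api/pull -d '{"model": "llama3.2"}' \
  | jq -r 'select(.total != null and .completed != null)
           | "\(.completed * 100 / .total | floor)% of \(.digest)"'
```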
## Push a Model

```shell
POST /api/push
```

Upload a model to a model library. Requires registering for ollama.ai and adding a public key first.

### Parameters

- `model`: name of the model to push in the form of `<namespace>/<model>:<tag>`
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
### Examples

#### Request

```shell
curl http://localhost:11434/api/push -d '{
  "model": "mattw/pygmalion:latest"
}'
```

#### Response
If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

```json
{ "status": "retrieving manifest" }
```

and then:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Then there is a series of uploading responses:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Finally, when the upload is complete:

```json
{"status":"pushing manifest"}
{"status":"success"}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{ "status": "success" }
```

## Generate Embeddings

```shell
POST /api/embed
```

Generate embeddings from a model.

### Parameters

- `model`: name of model to generate embeddings from
- `input`: text or list of text to generate embeddings for
Advanced parameters:

- `truncate`: truncates the end of each input to fit within context length. Returns error if `false` and context length is exceeded. Defaults to `true`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)
### Examples

#### Request
```shell
curl http://localhost:11434/api/embed -d '{
  "model": "all-minilm",
  "input": "Why is the sky blue?"
}'
```

#### Response

```json
{
  "model": "all-minilm",
  "embeddings": [[
    0.010071029, -0.0017594862, 0.05007221, 0.04692972, 0.054916814,
    0.008599704, 0.105441414, -0.025878139, 0.12958129, 0.031952348
  ]],
  "total_duration": 14143917,
  "load_duration": 1019500,
  "prompt_eval_count": 8
}
```

#### Request (Multiple input)

```shell
curl http://localhost:11434/api/embed -d '{
  "model": "all-minilm",
  "input": ["Why is the sky blue?", "Why is the grass green?"]
}'
```

#### Response

```json
{
  "model": "all-minilm",
  "embeddings": [[
    0.010071029, -0.0017594862, 0.05007221, 0.04692972, 0.054916814,
    0.008599704, 0.105441414, -0.025878139, 0.12958129, 0.031952348
  ],[
    -0.0098027075, 0.06042469, 0.025257962, -0.006364387, 0.07272725,
    0.017194884, 0.09032035, -0.051705178, 0.09951512, 0.09072481
  ]]
}
```
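
A quick sanity check on the result shape, i.e. how many embeddings came back and their dimensionality (assuming `jq` is installed):

```shell
curl -s http://localhost:11434/api/embed -d '{
  "model": "all-minilm",
  "input": ["Why is the sky blue?", "Why is the grass green?"]
}' | jq '{count: (.embeddings | length), dim: (.embeddings[0] | length)}'
```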

## List Running Models

```shell
GET /api/ps
```

List models that are currently loaded into memory.

### Examples

#### Request
```shell
curl http://localhost:11434/api/ps
```

#### Response

A single JSON object will be returned.

```json
{
  "models": [
    {
      "name": "mistral:latest",
      "model": "mistral:latest",
      "size": 5137025024,
      "digest": "2ae6f6dd7a3dd734790bbbf58b8909a606e0e7e97e94b7604e0aa7ae4490e6d8",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "llama",
        "families": [
          "llama"
        ],
        "parameter_size": "7.2B",
        "quantization_level": "Q4_0"
      },
      "expires_at": "2024-06-04T14:38:31.83753-07:00",
      "size_vram": 5137025024
    }
  ]
}
```
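
To summarize what is loaded and how much of each model is resident in VRAM (assuming `jq` is installed):

```shell
curl -s http://localhost:11434/api/ps \
  | jq -r '.models[] | "\(.name): \(.size_vram) bytes in VRAM (expires \(.expires_at))"'
```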

## Generate Embedding

> Note: this endpoint has been superseded by `/api/embed`

```shell
POST /api/embeddings
```

Generate embeddings from a model.

### Parameters

- `model`: name of model to generate embeddings from
- `prompt`: text to generate embeddings for

Advanced parameters:

- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)

### Examples

#### Request

```shell
curl http://localhost:11434/api/embeddings -d '{
  "model": "all-minilm",
  "prompt": "Here is an article about llamas..."
}'
```

#### Response

```json
{
  "embedding": [
    0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
    0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
  ]
}
```