# API

## Endpoints

- [Generate a completion](#generate-a-completion)
- [Create a Model](#create-a-model)
- [List Local Models](#list-local-models)
- [Show Model Information](#show-model-information)
- [Copy a Model](#copy-a-model)
- [Delete a Model](#delete-a-model)
- [Pull a Model](#pull-a-model)
- [Push a Model](#push-a-model)
- [Generate Embeddings](#generate-embeddings)

## Conventions

### Model names

Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.

### Durations

All durations are returned in nanoseconds.

### Streaming responses

Certain endpoints stream responses as JSON objects delineated with the newline (`\n`) character.
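
For example, each streamed line is itself a complete JSON object, so the stream can be processed line by line. A minimal sketch using the [generate endpoint](#generate-a-completion) below, assuming `jq` is installed:

```shell
# print each `response` fragment on its own line as it arrives
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}' | jq -r '.response'
```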

## Generate a completion

```shell
POST /api/generate
```

Generate a response for a given prompt with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for

Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system prompt to use (overrides what is defined in the `Modelfile`)
- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `raw`: if `true` no formatting will be applied to the prompt and no context will be returned. You may choose to use the `raw` parameter if you are specifying a full templated prompt in your request to the API, and are managing history yourself.

### JSON mode

Enable JSON mode by setting the `format` parameter to `json`. This will structure the response as valid JSON. See the JSON mode [example](#request-json-mode) below.

> Note: it's important to instruct the model to use JSON in the `prompt`. Otherwise, the model may generate large amounts of whitespace.

### Examples

#### Request

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```

#### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama2",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "response": "The",
  "done": false
}
```

The final response in the stream also includes additional data about the generation:

- `total_duration`: time spent generating the response
- `load_duration`: time spent in nanoseconds loading the model
- `sample_count`: number of samples generated
- `sample_duration`: time spent generating samples
- `prompt_eval_count`: number of tokens in the prompt
- `prompt_eval_duration`: time spent in nanoseconds evaluating the prompt
- `eval_count`: number of tokens in the response
- `eval_duration`: time in nanoseconds spent generating the response
- `context`: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory
- `response`: empty if the response was streamed; if not streamed, this will contain the full response

To calculate how fast the response is generated in tokens per second (token/s), divide `eval_count` by `eval_duration` and multiply by 10^9, since durations are in nanoseconds.
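
For example, the final response below works out to 113 / 1325948000 * 10^9 ≈ 85 token/s. A quick way to compute this from the shell, assuming `jq` is installed:

```shell
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}' | jq '.eval_count / .eval_duration * 1e9'
```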

```json
{
  "model": "llama2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "",
  "context": [1, 2, 3],
  "done": true,
  "total_duration": 5589157167,
  "load_duration": 3013701500,
  "sample_count": 114,
  "sample_duration": 81442000,
  "prompt_eval_count": 46,
  "prompt_eval_duration": 1160282000,
  "eval_count": 113,
  "eval_duration": 1325948000
}
```
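
The returned `context` can be sent with a follow-up request to continue the conversation. A minimal sketch, reusing the placeholder `[1, 2, 3]` from the example above (a real request would pass the full array returned by the previous response):

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Can you explain that in more detail?",
  "context": [1, 2, 3]
}'
```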

#### Request (No streaming)

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

#### Response

If `stream` is set to `false`, the response will be a single JSON object:

```json
{
  "model": "llama2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "context": [1, 2, 3],
  "done": true,
  "total_duration": 5589157167,
  "load_duration": 3013701500,
  "sample_count": 114,
  "sample_duration": 81442000,
  "prompt_eval_count": 46,
  "prompt_eval_duration": 1160282000,
  "eval_count": 13,
  "eval_duration": 1325948000
}
```

#### Request (Raw mode)

In some cases you may wish to bypass the templating system and provide a full prompt. In this case, you can use the `raw` parameter to disable formatting and context.

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "[INST] why is the sky blue? [/INST]",
  "raw": true,
  "stream": false
}'
```

#### Response

```json
{
  "model": "mistral",
  "created_at": "2023-11-03T15:36:02.583064Z",
  "response": " The sky appears blue because of a phenomenon called Rayleigh scattering.",
  "done": true,
  "total_duration": 14648695333,
  "load_duration": 3302671417,
  "prompt_eval_count": 14,
  "prompt_eval_duration": 286243000,
  "eval_count": 129,
  "eval_duration": 10931424000
}
```

#### Request (JSON mode)

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "What color is the sky at different times of the day? Respond using JSON",
  "format": "json",
  "stream": false
}'
```

#### Response

```json
{
  "model": "llama2",
  "created_at": "2023-11-09T21:07:55.186497Z",
  "response": "{\n\"morning\": {\n\"color\": \"blue\"\n},\n\"noon\": {\n\"color\": \"blue-gray\"\n},\n\"afternoon\": {\n\"color\": \"warm gray\"\n},\n\"evening\": {\n\"color\": \"orange\"\n}\n}\n",
  "done": true,
  "total_duration": 4661289125,
  "load_duration": 1714434500,
  "prompt_eval_count": 36,
  "prompt_eval_duration": 264132000,
  "eval_count": 75,
  "eval_duration": 2112149000
}
```

The value of `response` will be a string containing JSON similar to:

```json
{
  "morning": {
    "color": "blue"
  },
  "noon": {
    "color": "blue-gray"
  },
  "afternoon": {
    "color": "warm gray"
  },
  "evening": {
    "color": "orange"
  }
}
```

#### Request (With options)

If you want to set custom options for the model at runtime rather than in the Modelfile, you can do so with the `options` parameter. This example sets every available option, but you can set any of them individually and omit the ones you do not want to override.

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": {
    "num_keep": 5,
    "seed": 42,
    "num_predict": 100,
    "top_k": 20,
    "top_p": 0.9,
    "tfs_z": 0.5,
    "typical_p": 0.7,
    "repeat_last_n": 33,
    "temperature": 0.8,
    "repeat_penalty": 1.2,
    "presence_penalty": 1.5,
    "frequency_penalty": 1.0,
    "mirostat": 1,
    "mirostat_tau": 0.8,
    "mirostat_eta": 0.6,
    "penalize_newline": true,
    "stop": ["\n", "user:"],
    "numa": false,
    "num_ctx": 4,
    "num_batch": 2,
    "num_gqa": 1,
    "num_gpu": 1,
    "main_gpu": 0,
    "low_vram": false,
    "f16_kv": true,
    "logits_all": false,
    "vocab_only": false,
    "use_mmap": true,
    "use_mlock": false,
    "embedding_only": false,
    "rope_frequency_base": 1.1,
    "rope_frequency_scale": 0.8,
    "num_thread": 8
    }
}'
```

#### Response

```json
{
  "model": "llama2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "context": [1, 2, 3],
  "done": true,
  "total_duration": 5589157167,
  "load_duration": 3013701500,
  "sample_count": 114,
  "sample_duration": 81442000,
  "prompt_eval_count": 46,
  "prompt_eval_duration": 1160282000,
  "eval_count": 13,
  "eval_duration": 1325948000
}
```

## Create a Model

```shell
POST /api/create
```

Create a model from a [`Modelfile`](./modelfile.md). It is recommended to set `modelfile` to the content of the Modelfile rather than just setting `path`. This is a requirement for remote create. Remote model creation should also explicitly create any file blobs referenced by fields such as `FROM` and `ADAPTER` using [Create a Blob](#create-a-blob), setting those fields to the path indicated in the response.

### Parameters

- `name`: name of the model to create
- `modelfile`: (optional) contents of the Modelfile
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
- `path`: (optional) path to the Modelfile

### Examples

#### Request

```shell
curl http://localhost:11434/api/create -d '{
  "name": "mario",
  "modelfile": "FROM llama2\nSYSTEM You are mario from Super Mario Bros."
}'
```

#### Response

A stream of JSON objects. When finished, `status` is `success`.

```json
{
  "status": "parsing modelfile"
}

### Check if a Blob Exists

```shell
HEAD /api/blobs/:digest
```

Check if a blob is known to the server.

#### Query Parameters

- `digest`: the SHA256 digest of the blob

#### Examples

##### Request

```shell
curl -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```

##### Response

Return 200 OK if the blob exists, 404 Not Found if it does not.

### Create a Blob

```shell
POST /api/blobs/:digest
```

Create a blob from a file. Returns the server file path.

#### Query Parameters

- `digest`: the expected SHA256 digest of the file

#### Examples

##### Request

```shell
curl -T model.bin -X POST http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```
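
If you do not already know the digest, it can be computed locally before uploading. A sketch assuming `sha256sum` is available (on macOS, `shasum -a 256` is the usual equivalent):

```shell
# compute the file's SHA256 digest, then upload the file under that digest
DIGEST=$(sha256sum model.bin | cut -d ' ' -f 1)
curl -T model.bin -X POST "http://localhost:11434/api/blobs/sha256:$DIGEST"
```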

##### Response

Return 201 Created if the blob was successfully created.

## List Local Models

```shell
GET /api/tags
```

List models that are available locally.

### Examples

#### Request

```shell
curl http://localhost:11434/api/tags
```

#### Response

A single JSON object will be returned.

```json
{
  "models": [
    {
      "name": "llama2",
      "modified_at": "2023-08-02T17:02:23.713454393-07:00",
      "size": 3791730596
    },
    {
      "name": "llama2:13b",
      "modified_at": "2023-08-08T12:08:38.093596297-07:00",
      "size": 7323310500
    }
  ]
}
```

## Show Model Information

```shell
POST /api/show
```

Show details about a model including modelfile, template, parameters, license, and system prompt.

### Parameters

- `name`: name of the model to show

### Examples

#### Request

```shell
curl http://localhost:11434/api/show -d '{
  "name": "llama2"
}'
```

#### Response

```json
{
  "license": "<contents of license block>",
  "modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM llama2:latest\n\nFROM /Users/username/.ollama/models/blobs/sha256:8daa9615cce30c259a9555b1cc250d461d1bc69980a274b44d7eda0be78076d8\nTEMPLATE \"\"\"[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] \"\"\"\nSYSTEM \"\"\"\"\"\"\nPARAMETER stop [INST]\nPARAMETER stop [/INST]\nPARAMETER stop <<SYS>>\nPARAMETER stop <</SYS>>\n",
  "parameters": "stop                           [INST]\nstop                           [/INST]\nstop                           <<SYS>>\nstop                           <</SYS>>",
  "template": "[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] "
}
```

## Copy a Model

```shell
POST /api/copy
```

Copy a model. Creates a model with another name from an existing model.

### Examples

#### Request

```shell
curl http://localhost:11434/api/copy -d '{
  "source": "llama2",
  "destination": "llama2-backup"
}'
```

#### Response

The only response is a 200 OK if successful.

## Delete a Model

```shell
DELETE /api/delete
```

Delete a model and its data.

### Parameters

- `name`: model name to delete

### Examples

#### Request

```shell
curl -X DELETE http://localhost:11434/api/delete -d '{
  "name": "llama2:13b"
}'
```

#### Response

If successful, the only response is a 200 OK.

## Pull a Model

```shell
POST /api/pull
```

Download a model from the ollama library. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.

### Parameters

- `name`: name of the model to pull
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pulling from your own library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects

### Examples

#### Request

```shell
curl http://localhost:11434/api/pull -d '{
  "name": "llama2"
}'
```

#### Response

If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

The first object is the manifest:

```json
{
  "status": "pulling manifest"
}
```

Then there is a series of downloading responses. Until a download is completed, the `completed` key may not be included. The number of files to be downloaded depends on the number of layers specified in the manifest.

```json
{
  "status": "downloading digestname",
  "digest": "digestname",
  "total": 2142590208,
  "completed": 241970
}
```
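
Each downloading object carries enough information to report progress. A rough progress printer, assuming `jq` is installed:

```shell
curl -s http://localhost:11434/api/pull -d '{"name": "llama2"}' \
  | jq -r 'select(.total != null) | "\(.status): \((.completed // 0) * 100 / .total | floor)%"'
```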

After all the files are downloaded, the final responses are:

```json
{
    "status": "verifying sha256 digest"
}
{
    "status": "writing manifest"
}
{
    "status": "removing any unused layers"
}
{
    "status": "success"
}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{
  "status": "success"
}
```

## Push a Model

```shell
POST /api/push
```

Upload a model to a model library. Requires registering for ollama.ai and adding a public key first.

### Parameters

- `name`: name of the model to push in the form of `<namespace>/<model>:<tag>`
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects

### Examples

#### Request

```shell
curl http://localhost:11434/api/push -d '{
  "name": "mattw/pygmalion:latest"
}'
```

#### Response

If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

```json
{ "status": "retrieving manifest" }
```

and then:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Then there is a series of uploading responses:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Finally, when the upload is complete:

```json
{"status":"pushing manifest"}
{"status":"success"}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{ "status": "success" }
```

## Generate Embeddings

```shell
POST /api/embeddings
```

Generate embeddings from a model.

### Parameters

- `model`: name of model to generate embeddings from
- `prompt`: text to generate embeddings for

Advanced parameters:

- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`

### Examples

#### Request

```shell
curl http://localhost:11434/api/embeddings -d '{
  "model": "llama2",
  "prompt": "Here is an article about llamas..."
}'
```

#### Response

```json
{
  "embedding": [
    0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
    0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
  ]
}
```
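
The length of the `embedding` array depends on the model. A quick way to check a model's embedding dimensionality, assuming `jq` is installed:

```shell
curl -s http://localhost:11434/api/embeddings -d '{
  "model": "llama2",
  "prompt": "Here is an article about llamas..."
}' | jq '.embedding | length'
```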