# API

## Endpoints

- [Generate a completion](#generate-a-completion)
- [Create a Model](#create-a-model)
- [List Local Models](#list-local-models)
- [Show Model Information](#show-model-information)
- [Copy a Model](#copy-a-model)
- [Delete a Model](#delete-a-model)
- [Pull a Model](#pull-a-model)
- [Push a Model](#push-a-model)
- [Generate Embeddings](#generate-embeddings)

## Conventions

### Model names

Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.

### Durations

All durations are returned in nanoseconds.
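For example, a `total_duration` of `5589157167` is roughly 5.6 seconds.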

### Streaming responses

Certain endpoints stream responses as JSON objects delimited by the newline (`\n`) character.
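
For example, assuming the `jq` tool is available, the `response` fragments of a streamed generation can be reassembled as they arrive (a minimal sketch, not part of the API itself):

```shell
# Each line of the stream is one complete JSON object; `jq -j` prints the
# "response" fragment of each object without adding newlines, so the full
# completion is printed as it streams.
curl -s -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}' | jq -j '.response'
```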

## Generate a completion

```shell
POST /api/generate
```

Generate a response for a given prompt with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.

### Parameters

- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for

Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: the system prompt to use (overrides what is defined in the `Modelfile`)
- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory (see the sketch after this list)
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `raw`: if `true` no formatting will be applied to the prompt and no context will be returned. You may choose to use the `raw` parameter if you are specifying a full templated prompt in your request to the API, and are managing history yourself.
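
For instance, to keep a short conversational memory, a follow-up request can pass the `context` array from the previous final response (the `[1, 2, 3]` value below is a placeholder, matching the response examples in this document; use the actual array returned to you):

```shell
# The "context" field carries the encoded conversation state from the last
# final (done: true) response.
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "What was my previous question?",
  "context": [1, 2, 3]
}'
```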

### JSON mode

Enable JSON mode by setting the `format` parameter to `json` and instructing the model to use JSON in the `prompt`. This will structure the response as a valid JSON object. See the JSON mode [example](#request-json-mode) below.

### Examples

#### Request

```shell
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```

#### Response

A stream of JSON objects is returned:

```json
{
  "model": "llama2",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "response": "The",
  "done": false
}
```

The final response in the stream also includes additional data about the generation:

- `total_duration`: time spent generating the response
- `load_duration`: time spent in nanoseconds loading the model
- `sample_count`: number of samples generated
- `sample_duration`: time spent generating samples
- `prompt_eval_count`: number of tokens in the prompt
- `prompt_eval_duration`: time spent in nanoseconds evaluating the prompt
- `eval_count`: number of tokens in the response
- `eval_duration`: time in nanoseconds spent generating the response
- `context`: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory
- `response`: empty if the response was streamed; if not streamed, this will contain the full response

To calculate how fast the response is generated in tokens per second (token/s), divide `eval_count` by `eval_duration` and multiply by `10^9` (durations are in nanoseconds).
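For the final response below, that is `113 / 1325948000 * 10^9 ≈ 85` tokens per second.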

```json
{
  "model": "llama2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "",
  "context": [1, 2, 3],
  "done": true,
  "total_duration": 5589157167,
  "load_duration": 3013701500,
  "sample_count": 114,
  "sample_duration": 81442000,
  "prompt_eval_count": 46,
  "prompt_eval_duration": 1160282000,
  "eval_count": 113,
  "eval_duration": 1325948000
}
```

#### Request (No streaming)

```shell
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

#### Response

If `stream` is set to `false`, the response will be a single JSON object:

```json
{
  "model": "llama2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "context": [1, 2, 3],
  "done": true,
  "total_duration": 5589157167,
  "load_duration": 3013701500,
  "sample_count": 114,
  "sample_duration": 81442000,
  "prompt_eval_count": 46,
  "prompt_eval_duration": 1160282000,
  "eval_count": 13,
  "eval_duration": 1325948000
}
```

#### Request (Raw mode)

In some cases you may wish to bypass the templating system and provide a full prompt. In this case, you can use the `raw` parameter to disable formatting and context.

```shell
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "[INST] why is the sky blue? [/INST]",
  "raw": true,
  "stream": false
}'
```

#### Response

```json
{
  "model": "mistral",
  "created_at": "2023-11-03T15:36:02.583064Z",
  "response": " The sky appears blue because of a phenomenon called Rayleigh scattering.",
  "done": true,
  "total_duration": 14648695333,
  "load_duration": 3302671417,
  "prompt_eval_count": 14,
  "prompt_eval_duration": 286243000,
  "eval_count": 129,
  "eval_duration": 10931424000
}
```

#### Request (JSON mode)

```shell
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "What color is the sky at different times of the day? Respond using JSON",
  "format": "json",
  "stream": false
}'
```

#### Response

```json
{
  "model": "llama2",
  "created_at": "2023-11-09T21:07:55.186497Z",
  "response": "{\n\"morning\": {\n\"color\": \"blue\"\n},\n\"noon\": {\n\"color\": \"blue-gray\"\n},\n\"afternoon\": {\n\"color\": \"warm gray\"\n},\n\"evening\": {\n\"color\": \"orange\"\n}\n}\n",
  "done": true,
  "total_duration": 4661289125,
  "load_duration": 1714434500,
  "prompt_eval_count": 36,
  "prompt_eval_duration": 264132000,
  "eval_count": 75,
  "eval_duration": 2112149000
}
```

The value of `response` will be a string containing JSON similar to:

```json
{
  "morning": {
    "color": "blue"
  },
  "noon": {
    "color": "blue-gray"
  },
  "afternoon": {
    "color": "warm gray"
  },
  "evening": {
    "color": "orange"
  }
}
```

#### Request (With options)

If you want to set custom options for the model at runtime rather than in the Modelfile, you can do so with the `options` parameter. This example sets every available option, but you can set any of them individually and omit the ones you do not want to override.

```shell
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": {
    "num_keep": 5,
    "seed": 42,
    "num_predict": 100,
    "top_k": 20,
    "top_p": 0.9,
    "tfs_z": 0.5,
    "typical_p": 0.7,
    "repeat_last_n": 33,
    "temperature": 0.8,
    "repeat_penalty": 1.2,
    "presence_penalty": 1.5,
    "frequency_penalty": 1.0,
    "mirostat": 1,
    "mirostat_tau": 0.8,
    "mirostat_eta": 0.6,
    "penalize_newline": true,
    "stop": ["\n", "user:"],
    "numa": false,
    "num_ctx": 4,
    "num_batch": 2,
    "num_gqa": 1,
    "num_gpu": 1,
    "main_gpu": 0,
    "low_vram": false,
    "f16_kv": true,
    "logits_all": false,
    "vocab_only": false,
    "use_mmap": true,
    "use_mlock": false,
    "embedding_only": false,
    "rope_frequency_base": 1.1,
    "rope_frequency_scale": 0.8,
    "num_thread": 8
  }
}'
```

#### Response

```json
{
  "model": "llama2",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "response": "The sky is blue because it is the color of the sky.",
  "context": [1, 2, 3],
  "done": true,
  "total_duration": 5589157167,
  "load_duration": 3013701500,
  "sample_count": 114,
  "sample_duration": 81442000,
  "prompt_eval_count": 46,
  "prompt_eval_duration": 1160282000,
  "eval_count": 13,
  "eval_duration": 1325948000
}
```

## Create a Model

```shell
POST /api/create
```

Create a model from a [`Modelfile`](./modelfile.md). It is recommended to set `modelfile` to the content of the Modelfile rather than just set `path`. This is a requirement for remote create. Remote model creation must also create any file blobs used by fields such as `FROM` and `ADAPTER` explicitly with the server using [Create a Blob](#create-a-blob), and set those fields to the path indicated in the response.
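
For remote create, the flow looks roughly like this (a sketch; the digest is the one from the blob examples below, and the `FROM` value is a placeholder for the server file path returned by the blob upload):

```shell
# 1. Register the weights file as a blob with the server (see Create a Blob).
curl -T model.bin -X POST http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2

# 2. Create the model, setting FROM to the server file path indicated in the
#    blob response (placeholder shown here).
curl -X POST http://localhost:11434/api/create -d '{
  "name": "mymodel",
  "modelfile": "FROM <server file path from the blob response>"
}'
```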

### Parameters

- `name`: name of the model to create
- `path`: path to the Modelfile (deprecated: please use `modelfile` instead)
- `modelfile`: contents of the Modelfile
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects

### Examples

#### Request

```shell
curl -X POST http://localhost:11434/api/create -d '{
  "name": "mario",
  "path": "~/Modelfile",
  "modelfile": "FROM llama2"
}'
```

#### Response

A stream of JSON objects is returned. When finished, `status` is `success`.

```json
{
  "status": "parsing modelfile"
}
```

### Check if a Blob Exists

```shell
HEAD /api/blobs/:digest
```

Check if a blob is known to the server.

#### Query Parameters

- `digest`: the SHA256 digest of the blob

#### Examples

##### Request

```shell
curl -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```

##### Response

Returns 200 OK if the blob exists, or 404 Not Found if it does not.
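
For scripting, the status code alone can be captured with curl's write-out option (a minimal sketch):

```shell
# Prints 200 if the blob exists, or 404 if it does not.
curl -s -o /dev/null -w "%{http_code}" -I http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```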

### Create a Blob

```shell
POST /api/blobs/:digest
```

Create a blob from a file. Returns the server file path.

#### Query Parameters

- `digest`: the expected SHA256 digest of the file
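
The digest can be computed locally before uploading, for example with `sha256sum` (or `shasum -a 256` on macOS); the file name matches the request example below:

```shell
# Compute the SHA256 digest to use as the :digest path parameter.
sha256sum model.bin
# 29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2  model.bin
```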

#### Examples

##### Request

```shell
curl -T model.bin -X POST http://localhost:11434/api/blobs/sha256:29fdb92e57cf0827ded04ae6461b5931d01fa595843f55d36f5b275a52087dd2
```

##### Response

Returns 201 Created if the blob was successfully created.

## List Local Models

```shell
GET /api/tags
```

List models that are available locally.

### Examples

#### Request

```shell
curl http://localhost:11434/api/tags
```

#### Response

A single JSON object will be returned.

```json
{
  "models": [
    {
      "name": "llama2",
      "modified_at": "2023-08-02T17:02:23.713454393-07:00",
      "size": 3791730596
    },
    {
      "name": "llama2:13b",
      "modified_at": "2023-08-08T12:08:38.093596297-07:00",
      "size": 7323310500
    }
  ]
}
```

## Show Model Information

```shell
POST /api/show
```

Show details about a model including modelfile, template, parameters, license, and system prompt.

### Parameters

- `name`: name of the model to show

### Examples

#### Request

```shell
curl http://localhost:11434/api/show -d '{
  "name": "llama2"
}'
```

#### Response

```json
{
  "license": "<contents of license block>",
  "modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM llama2:latest\n\nFROM /Users/username/.ollama/models/blobs/sha256:8daa9615cce30c259a9555b1cc250d461d1bc69980a274b44d7eda0be78076d8\nTEMPLATE \"\"\"[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] \"\"\"\nSYSTEM \"\"\"\"\"\"\nPARAMETER stop [INST]\nPARAMETER stop [/INST]\nPARAMETER stop <<SYS>>\nPARAMETER stop <</SYS>>\n",
  "parameters": "stop                           [INST]\nstop                           [/INST]\nstop                           <<SYS>>\nstop                           <</SYS>>",
  "template": "[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] "
}
```

## Copy a Model

```shell
POST /api/copy
```

Copy a model. Creates a model with another name from an existing model.

### Examples

#### Request

```shell
curl http://localhost:11434/api/copy -d '{
  "source": "llama2",
  "destination": "llama2-backup"
}'
```

#### Response

The only response is a 200 OK if successful.

## Delete a Model

```shell
DELETE /api/delete
```

Delete a model and its data.

### Parameters

- `name`: model name to delete

### Examples

#### Request

```shell
curl -X DELETE http://localhost:11434/api/delete -d '{
  "name": "llama2:13b"
}'
```

#### Response

If successful, the only response is a 200 OK.

## Pull a Model

```shell
POST /api/pull
```

Download a model from the ollama library. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.

### Parameters

- `name`: name of the model to pull
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pulling from your own library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects

### Examples

#### Request

```shell
curl -X POST http://localhost:11434/api/pull -d '{
  "name": "llama2"
}'
```

#### Response

If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

The first object is the manifest:

```json
{
  "status": "pulling manifest"
}
```

Then there is a series of downloading responses. Until a download is completed, the `completed` key may not be included. The number of files to be downloaded depends on the number of layers specified in the manifest.

```json
{
  "status": "downloading digestname",
  "digest": "digestname",
  "total": 2142590208,
  "completed": 241970
}
```

After all the files are downloaded, the final responses are:

```json
{
    "status": "verifying sha256 digest"
}
{
    "status": "writing manifest"
}
{
    "status": "removing any unused layers"
}
{
    "status": "success"
}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{
  "status": "success"
}
```

## Push a Model

```shell
POST /api/push
```

Upload a model to a model library. Requires registering for ollama.ai and adding a public key first.

### Parameters

- `name`: name of the model to push in the form of `<namespace>/<model>:<tag>`
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects

### Examples

#### Request

```shell
curl -X POST http://localhost:11434/api/push -d '{
  "name": "mattw/pygmalion:latest"
}'
```

#### Response

If `stream` is not specified, or set to `true`, a stream of JSON objects is returned:

```json
{ "status": "retrieving manifest" }
```

and then:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Then there is a series of uploading responses:

```json
{
  "status": "starting upload",
  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
  "total": 1928429856
}
```

Finally, when the upload is complete:

```json
{"status":"pushing manifest"}
{"status":"success"}
```

If `stream` is set to `false`, then the response is a single JSON object:

```json
{ "status": "success" }
```

## Generate Embeddings

```shell
POST /api/embeddings
```

Generate embeddings from a model.

### Parameters

- `model`: name of model to generate embeddings from
- `prompt`: text to generate embeddings for

Advanced parameters:

- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`

### Examples

#### Request

```shell
curl -X POST http://localhost:11434/api/embeddings -d '{
  "model": "llama2",
  "prompt": "Here is an article about llamas..."
}'
```

#### Response

```json
{
  "embedding": [
    0.5670403838157654, 0.009260174818336964, 0.23178744316101074, -0.2916173040866852, -0.8924556970596313,
    0.8785552978515625, -0.34576427936553955, 0.5742510557174683, -0.04222835972905159, -0.137906014919281
  ]
}
```