Model names follow a `model:tag` format. Some examples are `orca:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, defaults to `latest`. The tag is used to identify a specific version.
### Durations
All durations are returned in nanoseconds.
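For example, a reported duration of `5589157167` is about 5.59 seconds.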
## Generate a completion
```
POST /api/generate
```
### Description
Generate a response for a given prompt with a provided model. This is the main endpoint you will use when working with Ollama. It is a streaming endpoint, so the reply will be a series of responses. The final response object includes the context along with the statistics and additional data usually seen in verbose output.
### Parameters

- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for

Advanced parameters:

- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system prompt to use (overrides what is defined in the `Modelfile`)
- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)

The only required parameter is `model`. If no `prompt` is provided, the model will generate a response to an empty prompt. If no `options` are provided, the model will use the default options from the Modelfile of the parent model. For example, a request body that sets `options`:

```JSON
{
  "model": "site/namespace/model:tag",
  "prompt": "You are a software engineer working on building docs for Ollama.",
  "options": {
    "temperature": 0.7
  }
}
```

### Request
```shell
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2:7b",
  "prompt": "Why is the sky blue?"
}'
```
### Response
The response is a stream of JSON objects with the following fields:
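Each streamed object carries a fragment of the generated text; the final object, where `done` is `true`, carries the context and the duration statistics. A sketch of what the stream can look like (field names follow the public Ollama API; the values here are illustrative):

```JSON
{
  "model": "llama2:7b",
  "created_at": "2023-08-04T08:52:19.385406455-07:00",
  "response": "The",
  "done": false
}
```

```JSON
{
  "model": "llama2:7b",
  "created_at": "2023-08-04T19:22:45.499127Z",
  "done": true,
  "context": [1, 2, 3],
  "total_duration": 5589157167,
  "load_duration": 3013701500,
  "prompt_eval_count": 46,
  "prompt_eval_duration": 1160282000,
  "eval_count": 113,
  "eval_duration": 1325948000
}
```

All durations are in nanoseconds, per the [Durations](#durations) note above.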
## Copy a Model

```
POST /api/copy
```

### Description

**Copy** will copy a model from one name to another, creating a new model from an existing one. It is often used as the first step to renaming a model.
### Request
The **Copy** endpoint takes a JSON object with the following fields:
```JSON
{
"source": "modelname",
"destination": "newmodelname"
}
```
### Response

No response is returned other than a 200 status code.

### Example

```shell
curl -X POST http://localhost:11434/api/copy -d '{
  "source": "llama2:7b",
  "destination": "llama2-backup"
}'
```
## Delete a Model
```
DELETE /api/delete
```
### Description
**Delete** will delete a model and its data from the local machine. This is useful for cleaning up models that are no longer needed.
### Request
The **Delete** endpoint takes a JSON object with a single key/value pair for the model name. For example:
```JSON
{
"model": "modelname"
}
```
### Response
No response is returned other than a 200 status code.
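### Example

A minimal sketch of a delete request, using the request shape above (the model name is illustrative):

```shell
curl -X DELETE http://localhost:11434/api/delete -d '{
  "model": "llama2:7b"
}'
```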
## Pull a Model
```
POST /api/pull
```
### Description

**Pull** downloads a model from a model registry. This is useful for getting a model from the Ollama registry and, in the future, from alternate registries. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.

### Parameters
- `name`: name of the model to pull

### Request

```shell
curl -X POST http://localhost:11434/api/pull -d '{
  "name": "llama2:7b"
}'
```
### Response
The response is a stream of JSON objects with the following format:
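Each object reports the current status of the pull. As an illustrative sketch (field names follow the public Ollama API; the digest and sizes are made up):

```JSON
{
  "status": "downloading digestname",
  "digest": "digestname",
  "total": 2142590208,
  "completed": 241970
}
```

Once the pull finishes, a final object with `"status": "success"` is sent.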