@@ -928,14 +928,25 @@ A single JSON object is returned:
POST /api/create
```
Create a model from:

* another model;
* a safetensors directory; or
* a GGUF file.

If you are creating a model from a safetensors directory or from a GGUF file, you must [create a blob](#create-a-blob) for each of the files and then use the file name and SHA256 digest associated with each blob in the `files` field.
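The blob step above can be sketched as follows. This is a sketch, not an official recipe: `model.gguf` and the model name `my-gguf-model` are placeholders, and a local Ollama server is assumed to be listening on the default port.

```shell
# Placeholder file name -- substitute your own GGUF file.
GGUF=model.gguf

# Compute the SHA256 digest that identifies the blob.
DIGEST=$(sha256sum "$GGUF" | cut -d ' ' -f 1)

# Push the file to the server as a blob...
curl -X POST -T "$GGUF" "http://localhost:11434/api/blobs/sha256:$DIGEST"

# ...then reference it by file name and digest in the `files` field.
curl http://localhost:11434/api/create -d "{
  \"model\": \"my-gguf-model\",
  \"files\": {\"$GGUF\": \"sha256:$DIGEST\"}
}"
```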
### Parameters
- `model`: name of the model to create
- `from`: (optional) name of an existing model to create the new model from
- `files`: (optional) a dictionary of file names to SHA256 digests of blobs to create the model from
- `adapters`: (optional) a dictionary of file names to SHA256 digests of blobs for LoRA adapters
- `template`: (optional) the prompt template for the model
- `license`: (optional) a string or list of strings containing the license or licenses for the model
- `system`: (optional) a string containing the system prompt for the model
- `parameters`: (optional) a dictionary of parameters for the model (see [Modelfile](./modelfile.md#valid-parameters-and-values) for a list of parameters)
- `messages`: (optional) a list of message objects used to create a conversation
- `stream`: (optional) if `false` the response will be returned as a single response object, rather than a stream of objects
- `quantize`: (optional) quantize a non-quantized (e.g. float16) model
#### Quantization types
...
@@ -961,14 +972,15 @@ Create a model from a [`Modelfile`](./modelfile.md). It is recommended to set `m
#### Create a new model
Create a new model from an existing model.
##### Request
```shell
curl http://localhost:11434/api/create -d '{
  "model": "mario",
  "from": "llama3.2",
  "system": "You are Mario from Super Mario Bros."
}'
```
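Because `stream` defaults to streaming, the same request can be made non-streaming by setting `"stream": false`, in which case the server returns a single JSON object instead of a stream of status objects. A sketch of that variant (assumes a local Ollama server with `llama3.2` available):

```shell
# "stream": false collapses the progress stream into one response object.
curl http://localhost:11434/api/create -d '{
  "model": "mario",
  "from": "llama3.2",
  "system": "You are Mario from Super Mario Bros.",
  "stream": false
}'
```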
...
@@ -999,7 +1011,7 @@ Quantize a non-quantized model.
```shell
curl http://localhost:11434/api/create -d '{
  "model": "llama3.1:quantized",
  "from": "llama3.1:8b-instruct-fp16",
  "quantize": "q4_K_M"
}'
```
...
@@ -1019,52 +1031,112 @@ A stream of JSON objects is returned:
{"status":"success"}
```
#### Create a model from GGUF
Create a model from a GGUF file. The `files` parameter should be filled out with the file name and SHA256 digest of the GGUF file you wish to use. Use [/api/blobs/:digest](#push-a-blob) to push the GGUF file to the server before calling this API.
#### Create a model from a Safetensors directory

The `files` parameter should be a dictionary mapping the file names of the safetensors model to the SHA256 digest of each file. Use [/api/blobs/:digest](#push-a-blob) to first push each of the files to the server before calling this API. Files will remain in the cache until the Ollama server is restarted.
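After every file has been pushed as a blob, the create request references all of them at once. A sketch under stated assumptions: the file names, digests, and the model name `my-safetensors-model` below are placeholders, and a local Ollama server is assumed.

```shell
# Each entry maps a file name in the safetensors directory to the
# sha256 digest of the blob already pushed via /api/blobs/:digest.
curl http://localhost:11434/api/create -d '{
  "model": "my-safetensors-model",
  "files": {
    "config.json": "sha256:<digest-of-config>",
    "tokenizer.json": "sha256:<digest-of-tokenizer>",
    "model.safetensors": "sha256:<digest-of-weights>"
  }
}'
```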