- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system message (overrides what is defined in the `Modelfile`)
- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
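Taken together, a request body for `/api/generate` that exercises these parameters might look like the following sketch. This builds the JSON only; the model name `llama2` and the endpoint shown in the comment are placeholders, not part of the specification above.

```python
import json

# Hypothetical request body for POST /api/generate, combining the
# parameters described above. "llama2" is a placeholder model name.
payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "format": "json",                       # only accepted value is "json"
    "options": {"temperature": 0.8},        # Modelfile parameters
    "system": "You are a concise assistant.",  # overrides the Modelfile
    "stream": False,                        # return one object, not a stream
}

body = json.dumps(payload)
# Sending it requires a running server (not done here), e.g. with urllib:
#   POST http://localhost:11434/api/generate with body as the request data
print(body)
```

Omitting `stream` (or setting it to `true`) would instead yield a stream of JSON objects, one per generated token batch.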
...
```
POST /api/show
```
Show details about a model including modelfile, template, parameters, license, and system message.
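As a sketch, the request body for this endpoint only needs to identify the model. The field name `name` and the model `llama2` below are assumptions for illustration:

```python
import json

# Hypothetical request body for POST /api/show.
# "llama2" is a placeholder model name.
show_request = json.dumps({"name": "llama2"})
print(show_request)
```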
| top_k | Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40) | int | top_k 40 |
| top_p | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9) | float | top_p 0.9 |
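In a `Modelfile`, these parameters are set with `PARAMETER` lines, for example (values here match the table's defaults):

```
PARAMETER top_k 40
PARAMETER top_p 0.9
```

The same values can also be supplied per request through the API's `options` field, which overrides what the `Modelfile` defines.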
### TEMPLATE
`TEMPLATE` is the full prompt template to be passed into the model. It may optionally include a system message and a user's prompt. This is used to create a full custom prompt, and the syntax may be model specific. You can usually find the template for a given model in the readme for that model.
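As a minimal sketch, a template that places the system message before the user's prompt might look like the following; the `{{ .System }}` and `{{ .Prompt }}` variables are filled in from the request or the `Modelfile`, while the `USER:`/`ASSISTANT:` markers are an assumed, model-specific convention:

```
TEMPLATE """{{ .System }}

USER: {{ .Prompt }}
ASSISTANT: """
```

Check the target model's readme for the exact markers it was trained with before adapting this.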