@@ -45,7 +45,7 @@ Generate a response for a given prompt with a provided model. This is a streaming endpoint, so there will be a series of responses.
Advanced parameters (optional):

-- `format`: the format to return a response in. Currently the only accepted value is `json`
+- `format`: the format to return a response in. Format can be `json` or a JSON schema
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system message (overrides what is defined in the `Modelfile`)
- `template`: the prompt template to use (overrides what is defined in the `Modelfile`)
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)
- `context` (deprecated): the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory
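
To make these concrete, a minimal `/api/generate` call setting several of the parameters above might look like the sketch below. The model name, prompt, and values are placeholder assumptions, not part of the original docs.

```shell
# Sketch only: assumes an Ollama server on the default port and a pulled
# llama3.2 model; substitute any local model name.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "system": "You are a concise assistant.",
  "options": { "temperature": 0.7 },
  "keep_alive": "10m",
  "stream": false
}'
```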
#### Structured outputs

Structured outputs are supported by providing a JSON schema in the `format` parameter. The model will generate a response that matches the schema. See the [structured outputs](#request-structured-outputs) example below.
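
As a rough illustration, a request along these lines would constrain the response to a small schema. The schema, model name, and prompt here are illustrative assumptions, not taken from this section.

```shell
# Sketch only: schema, model name, and prompt are placeholders.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Ollama is 22 years old and is busy saving the world. Respond using JSON",
  "stream": false,
  "format": {
    "type": "object",
    "properties": {
      "age": { "type": "integer" },
      "available": { "type": "boolean" }
    },
    "required": ["age", "available"]
  }
}'
```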
#### JSON mode

Enable JSON mode by setting the `format` parameter to `json`. This will structure the response as a valid JSON object. See the JSON mode [example](#request-json-mode) below.
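
A minimal sketch follows (model name and prompt are placeholders). When using JSON mode, it is important to also instruct the model in the prompt to respond with JSON; otherwise it may generate large amounts of whitespace.

```shell
# Sketch only: model name and prompt are placeholders. The prompt itself
# asks for JSON, which helps the model avoid emitting only whitespace.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "What color is the sky at different times of the day? Respond using JSON",
  "format": "json",
  "stream": false
}'
```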
@@ -456,11 +506,15 @@ The `message` object has the following fields:
Advanced parameters (optional):

-- `format`: the format to return a response in. Currently the only accepted value is `json`
+- `format`: the format to return a response in. Format can be `json` or a JSON schema.
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)
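
Analogously, a minimal `/api/chat` request exercising a few of these parameters might look like the following sketch (model name and values are placeholder assumptions):

```shell
# Sketch only: assumes a local server and a pulled llama3.2 model.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ],
  "options": { "temperature": 0 },
  "stream": false,
  "keep_alive": "5m"
}'
```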
#### Structured outputs

Structured outputs are supported by providing a JSON schema in the `format` parameter. The model will generate a response that matches the schema. See the [Chat request (Structured outputs)](#chat-request-structured-outputs) example below.
Send a chat message with a conversation history. You can use this same approach to start the conversation using multi-shot or chain-of-thought prompting.
{"role":"user","content":"I have two friends. The first is Ollama 22 years old busy saving the world, and the second is Alonso 23 years old and wants to hang out. Return a list of friends in JSON format"}