This doc describes the sampling parameters of the SGLang Runtime. They are accepted by the `/generate` endpoint, the low-level endpoint of the runtime.
If you want a high-level endpoint that can automatically handle chat templates, consider using the [OpenAI Compatible API](../backend/openai_api_completions.ipynb).
The `/generate` endpoint accepts the following arguments in JSON format; they correspond to the fields of the `GenerateReqInput` dataclass, excerpted below. For detailed usage, see the [native API doc](https://docs.sglang.ai/backend/native_api.html).
```python
from dataclasses import dataclass
from typing import List, Optional, Union

@dataclass
class GenerateReqInput:
    # The input prompt. It can be a single prompt or a batch of prompts.
    text: Optional[Union[List[str], str]] = None
    # The token ids for text; one can specify either text or input_ids.
    input_ids: Optional[Union[List[List[int]], List[int]]] = None
    # ... (remaining fields omitted; the arguments are described below)
```
* `text`: The input prompt. Can be a single prompt or a batch of prompts.
* `input_ids`: Alternative to `text`. Specify the input as token IDs instead of text.
* `sampling_params`: The sampling parameters as described in the sections below.
* `return_logprob`: Whether to return log probabilities for tokens.
* `logprob_start_len`: If returning log probabilities, specifies the start position in the prompt. Default is `-1`, which returns logprobs only for output tokens.
* `top_logprobs_num`: If returning log probabilities, specifies the number of top logprobs to return at each position.
* `stream`: Whether to stream the output.
* `lora_path`: Path to LoRA weights.
* `custom_logit_processor`: Custom logit processor for advanced sampling control. For usage, see below.
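For example, a minimal request that also asks for log probabilities of the output tokens might look like the following sketch (the prompt and the local server address `http://localhost:30000` are illustrative, matching the examples below):

```python
import requests

# Sketch of a /generate request; return_logprob and top_logprobs_num are
# top-level arguments, while sampling settings go into sampling_params.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {"temperature": 0, "max_new_tokens": 8},
        "return_logprob": True,   # return logprobs for the generated tokens
        "top_logprobs_num": 3,    # also return the top-3 alternatives per position
    },
)
print(response.json())
```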
"text":"<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
"<|im_start|>user\n<image>\nDescribe this image in a very short sentence.<|im_end|>\n"
"<|im_start|>assistant\n",
"image_data":"example_image.png",
"sampling_params":{
"temperature":0,
"max_new_tokens":32,
},
},
)
print(response.json())
```
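For instance, to pass the image as a base64-encoded string instead of a file name or URL, one could do something like this sketch (the file name and prompt are illustrative):

```python
import base64
import requests

# Sketch: encode a local image as base64 and pass it via image_data.
with open("example_image.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "<|im_start|>user\n<image>\nDescribe this image in a very short sentence.<|im_end|>\n"
        "<|im_start|>assistant\n",
        "image_data": image_b64,
        "sampling_params": {"temperature": 0, "max_new_tokens": 32},
    },
)
print(response.json())
```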
The following sections describe the parameters accepted inside `sampling_params`.

### Core parameters
* `max_new_tokens`: The maximum output length, measured in tokens.
* `stop`: One or multiple [stop words](https://developer.nvidia.com/blog/how-to-get-better-outputs-from-your-large-language-model/#let_the_model_know_when_to_stop). Generation will stop if one of these words is sampled.
* `stop_token_ids`: Provide stop words in the form of token ids. Generation will stop if one of these token ids is sampled.
* `temperature`: [Temperature](https://developer.nvidia.com/blog/how-to-get-better-outputs-from-your-large-language-model/#predictability_vs_creativity) when sampling the next token. `temperature = 0` corresponds to greedy sampling; a higher temperature leads to more diversity.
* `top_p`: [Top-p](https://developer.nvidia.com/blog/how-to-get-better-outputs-from-your-large-language-model/#predictability_vs_creativity) selects tokens from the smallest sorted set whose cumulative probability exceeds `top_p`. When `top_p = 1`, this reduces to unrestricted sampling from all tokens.
* `top_k`: [Top-k](https://developer.nvidia.com/blog/how-to-get-better-outputs-from-your-large-language-model/#predictability_vs_creativity) randomly selects from the `k` highest-probability tokens.
* `min_p`: [Min-p](https://github.com/huggingface/transformers/issues/27670) samples from tokens with a probability larger than `min_p * highest_token_probability`.
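As an illustration, the core parameters are set inside `sampling_params`; the values in this sketch are arbitrary and only show the shape of a request:

```python
import requests

# Sketch with arbitrary values; tune them for your use case.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "List three colors:",
        "sampling_params": {
            "max_new_tokens": 32,
            "temperature": 0.7,
            "top_p": 0.9,
            "top_k": 40,
            "min_p": 0.05,
            "stop": ["\n\n"],  # stop at the first blank line
        },
    },
)
print(response.json())
```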
### Penalizers
To use penalizers, you need to launch the server with `--disable-overlap`. Please note that this might degrade performance.

* `frequency_penalty`: Penalizes tokens based on their frequency in the generation so far. Must be between `-2` and `2`, where negative values encourage repetition of tokens and positive values encourage sampling of new tokens. The penalty grows linearly with each appearance of a token.
* `presence_penalty`: Penalizes tokens if they have appeared in the generation so far. Must be between `-2` and `2`, where negative values encourage repetition of tokens and positive values encourage sampling of new tokens. The penalty is constant once a token has occurred.
* `repetition_penalty`: Penalizes tokens if they appeared in the prompt or the generation so far. Must be between `0` and `2`, where values smaller than `1` encourage repetition of tokens and values larger than `1` encourage sampling of new tokens. The penalty scales multiplicatively.
* `min_new_tokens`: Forces the model to generate at least `min_new_tokens` tokens before a stop word or the EOS token can be sampled. Note that this might lead to unintended behavior, for example, if the distribution is highly skewed towards these tokens.
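A request that discourages repetition while enforcing a minimum output length could be sketched as follows (values are illustrative, and the server must have been launched with `--disable-overlap`):

```python
import requests

# Sketch: penalizers go inside sampling_params like any other parameter.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Write a short poem about the sea.",
        "sampling_params": {
            "max_new_tokens": 64,
            "frequency_penalty": 0.5,  # discourage frequently repeated tokens
            "presence_penalty": 0.3,   # discourage tokens that already appeared
            "min_new_tokens": 16,      # generate at least 16 tokens
        },
    },
)
print(response.json())
```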
### Structured Outputs (JSON, Regex, EBNF)
You can specify a JSON schema, a regular expression, or an [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) grammar to constrain the model output, via the `json_schema`, `regex`, and `ebnf` parameters of `sampling_params` respectively. The model output is guaranteed to follow the given constraints, and only one of these constraint parameters can be specified per request. Please refer to our dedicated guide on [constrained decoding](https://docs.sglang.ai/backend/structured_outputs.html#Native-API-and-SGLang-Runtime-(SRT)) for detailed usage of these parameters.

SGLang supports two grammar backends:

- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and regular expression constraints.
- [XGrammar](https://github.com/mlc-ai/xgrammar): Supports JSON schema, regular expression, and EBNF constraints.
  - XGrammar currently uses the [GGML BNF format](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md).

Initialize the XGrammar backend using the `--grammar-backend xgrammar` flag when launching the server. The examples below show a JSON schema, a regular expression, and an EBNF constraint:
```python
import json
import requests

json_schema = json.dumps({
    "type": "object",
    "properties": {
        "name": {"type": "string", "pattern": "^[\\w]+$"},
        "population": {"type": "integer"},
    },
    "required": ["name", "population"],
})

# JSON (works with both Outlines and XGrammar)
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Here is the information of the capital of France in the JSON format.\n",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 64,
            "json_schema": json_schema,
        },
    },
)
print(response.json())

# Regular expression (Outlines backend only)
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Paris is the capital of",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 64,
            "regex": "(France|England)",
        },
    },
)
print(response.json())

# EBNF (XGrammar backend only)
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Write a greeting.",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 64,
            "ebnf": 'root ::= "Hello" | "Hi" | "Hey"',
        },
    },
)
print(response.json())
```

### Other options
* `n`: The number of output sequences to generate per request.
* `spaces_between_special_tokens`: Whether or not to add spaces between special tokens during detokenization.
* `no_stop_trim`: Don't trim stop words or the EOS token from the generated text.
* `ignore_eos`: Don't stop generation when the EOS token is sampled.
* `skip_special_tokens`: Remove special tokens during decoding.
* `custom_params`: Used when employing a `CustomLogitProcessor`. For usage, see below.
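These options sit alongside the other fields of `sampling_params`. As a quick sketch, the request below keeps the stop word in the returned text (values are illustrative):

```python
import requests

# Sketch: keep the stop word in the output instead of trimming it.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Count from one to five:",
        "sampling_params": {
            "max_new_tokens": 32,
            "stop": ["five"],
            "no_stop_trim": True,         # keep the stop word in the returned text
            "skip_special_tokens": True,  # strip special tokens during decoding
        },
    },
)
print(response.json())
```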
### Custom Logit Processor
Launch the server with the `--enable-custom-logit-processor` flag.
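A rough sketch of how this fits together is shown below. It assumes the `CustomLogitProcessor` base class from `sglang.srt.sampling.custom_logit_processor`, whose `__call__(logits, custom_param_list)` hook edits the logits and whose `to_str()` method serializes the processor so it can be sent with the request; check the SGLang source for the exact interface.

```python
import requests
from sglang.srt.sampling.custom_logit_processor import CustomLogitProcessor


class DeterministicLogitProcessor(CustomLogitProcessor):
    """Sketch of a processor that forces sampling of a given token id."""

    def __call__(self, logits, custom_param_list):
        # One custom_params dict is passed per request in the batch.
        assert logits.shape[0] == len(custom_param_list)
        for i, param_dict in enumerate(custom_param_list):
            # Mask all tokens, then give the requested token the highest logit.
            logits[i, :] = -float("inf")
            logits[i, param_dict["token_id"]] = 0.0
        return logits


response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 32,
            # custom_params is forwarded to the processor's custom_param_list.
            "custom_params": {"token_id": 5},
        },
        # Pass the serialized processor along with the request.
        "custom_logit_processor": DeterministicLogitProcessor().to_str(),
    },
)
print(response.json())
```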