# Sampling Parameters in SGLang Runtime
This doc describes the sampling parameters of the SGLang Runtime, its low-level generation endpoint.
If you want a high-level endpoint that can automatically handle chat templates, consider using the [OpenAI Compatible API](../backend/openai_api_completions.ipynb).

The `/generate` endpoint accepts the following arguments in JSON format.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Union

@dataclass
class GenerateReqInput:
    # The input prompt. It can be a single prompt or a batch of prompts.
    text: Optional[Union[List[str], str]] = None
    # The token ids for text; one can specify either text or input_ids.
    input_ids: Optional[Union[List[List[int]], List[int]]] = None
    # The embeddings for input_ids; one can specify either text, input_ids, or input_embeds.
    input_embeds: Optional[Union[List[List[List[float]]], List[List[float]]]] = None
    # The image input. It can be a file name, a URL, or a base64 encoded string.
    # See also python/sglang/srt/utils.py:load_image.
    image_data: Optional[Union[List[str], str]] = None
    # The sampling_params. See descriptions below.
    sampling_params: Optional[Union[List[Dict], Dict]] = None
    # The request id.
    rid: Optional[Union[List[str], str]] = None
    # Whether to return logprobs.
    return_logprob: Optional[Union[List[bool], bool]] = None
    # If return logprobs, the start location in the prompt for returning logprobs.
    # By default, this value is "-1", which means it will only return logprobs for output tokens.
    logprob_start_len: Optional[Union[List[int], int]] = None
    # If return logprobs, the number of top logprobs to return at each position.
    top_logprobs_num: Optional[Union[List[int], int]] = None
    # Whether to detokenize tokens in text in the returned logprobs.
    return_text_in_logprobs: bool = False
    # Whether to stream output.
    stream: bool = False
```
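As a sketch of how the logprob-related fields above fit together, the following builds a `/generate` payload that requests top-5 logprobs for both prompt and output tokens. The payload values are illustrative; the server URL in the comment is the one used in the examples below.

```python
import json

# Hypothetical payload exercising the logprob fields of GenerateReqInput.
payload = {
    "text": "The capital of France is",
    "sampling_params": {"temperature": 0, "max_new_tokens": 8},
    "return_logprob": True,           # ask for logprobs at all
    "logprob_start_len": 0,           # 0 = include prompt tokens (-1 = output tokens only)
    "top_logprobs_num": 5,            # top-5 alternatives at each position
    "return_text_in_logprobs": True,  # detokenize the returned token ids
}
print(json.dumps(payload, indent=2))
# Send with: requests.post("http://localhost:30000/generate", json=payload)
```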

The `sampling_params` follows this format:

```python
# The maximum number of output tokens
max_new_tokens: int = 128,

# Stop when hitting any of the strings in this list.
stop: Optional[Union[str, List[str]]] = None,
# Stop when hitting any of the token_ids in this list. Could be useful when mixed with
# `min_new_tokens`.
stop_token_ids: Optional[List[int]] = [],

# Sampling temperature
temperature: float = 1.0,
# Top-p sampling
top_p: float = 1.0,
# Top-k sampling
top_k: int = -1,
# Min-p sampling
min_p: float = 0.0,

# Whether to ignore EOS token.
ignore_eos: bool = False,
# Whether to skip the special tokens during detokenization.
skip_special_tokens: bool = True,
# Whether to add spaces between special tokens during detokenization.
spaces_between_special_tokens: bool = True,
# Do parallel sampling and return `n` outputs.
n: int = 1,

## Structured Outputs
# Only one of the three constraints below can be set at a time:

# Constrains the output to follow a given regular expression.
regex: Optional[str] = None,
# Constrains the output to follow a given JSON schema.
json_schema: Optional[str] = None,
# Constrains the output to follow a given EBNF grammar.
ebnf: Optional[str] = None,

## Penalties. See the [Performance Implications on Penalties] section below for more information.

# Float that penalizes new tokens based on their frequency in the generated text so far.
# Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to
# repeat tokens. Must be -2 <= value <= 2. Setting to 0 (default) disables this penalty.
frequency_penalty: float = 0.0,
# Float that penalizes new tokens based on whether they appear in the generated text so far.
# Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to
# repeat tokens. Must be -2 <= value <= 2. Setting to 0 (default) disables this penalty.
presence_penalty: float = 0.0,
# Float that penalizes new tokens based on whether they appear in the prompt and the generated
# text so far. Values > 1 encourage the model to use new tokens, while values < 1 encourage the
# model to repeat tokens. Must be 0 <= value <= 2. Setting to 1 (default) disables this penalty.
repetition_penalty: float = 1.0,
# Guides inference to generate at least this number of tokens by penalizing the logits of the
# tokenizer's EOS token and `stop_token_ids` to -inf, until the output token count reaches the
# given length.
# Note that any of the `stop` strings can still be generated before reaching `min_new_tokens`,
# as it is difficult to infer the correct token IDs from the given `stop` strings.
# Must be 0 <= value < max_new_tokens. Setting to 0 (default) disables this constraint.
min_new_tokens: int = 0,
```
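To make the penalty definitions above concrete, here is a minimal reference implementation of the frequency, presence, and repetition adjustments on per-token logits. This is a sketch following the standard definitions the comments describe; `apply_penalties` is an illustrative helper, not SGLang's actual batched kernel.

```python
from collections import Counter

def apply_penalties(logits, output_ids,
                    frequency_penalty=0.0, presence_penalty=0.0,
                    repetition_penalty=1.0):
    # logits: {token_id: logit}; output_ids: tokens generated so far.
    counts = Counter(output_ids)
    out = dict(logits)
    for tok, count in counts.items():
        if tok not in out:
            continue
        out[tok] -= frequency_penalty * count  # grows with every repeat
        out[tok] -= presence_penalty           # applied once if tok appeared
        # repetition_penalty (> 1 discourages repeats): divide positive
        # logits, multiply negative ones.
        if out[tok] > 0:
            out[tok] /= repetition_penalty
        else:
            out[tok] *= repetition_penalty
    return out

logits = {1: 2.0, 2: -1.0, 3: 0.5}
adjusted = apply_penalties(logits, [1, 1, 2],
                           frequency_penalty=0.5, presence_penalty=0.2)
print(adjusted)  # token 1: 2.0 - 0.5*2 - 0.2 = 0.8; token 3 untouched
```

For `repetition_penalty`, prompt tokens would be counted alongside generated ones, per the comment above.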

## Examples

### Normal
Launch a server
```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000
```

Send a request
```python
import requests

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 32,
        },
    },
)
print(response.json())
```
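Since `text` accepts a batch of prompts (see `GenerateReqInput` above), the same endpoint can serve several prompts in one request. A sketch of such a payload, with prompts chosen for illustration:

```python
# `text` may be a list; the response is then a list with one entry per prompt.
payload = {
    "text": ["The capital of France is", "The capital of Japan is"],
    # One dict applies to every prompt; a list of dicts would instead set
    # per-prompt sampling parameters.
    "sampling_params": {"temperature": 0, "max_new_tokens": 16},
}
# Send with: requests.post("http://localhost:30000/generate", json=payload)
print(len(payload["text"]))  # number of prompts in the batch
```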

### Streaming
Send a request and stream the output
```python
import requests, json

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 32,
        },
        "stream": True,
    },
    stream=True,
)

prev = 0
for chunk in response.iter_lines(decode_unicode=False):
    chunk = chunk.decode("utf-8")
    if chunk and chunk.startswith("data:"):
        if chunk == "data: [DONE]":
            break
        data = json.loads(chunk[5:].strip("\n"))
        output = data["text"].strip()
        print(output[prev:], end="", flush=True)
        prev = len(output)
print("")
```

### Multi-modal

Launch a server
```
python3 -m sglang.launch_server --model-path lmms-lab/llava-onevision-qwen2-7b-ov --chat-template chatml-llava
```

Download an image
```
curl -o example_image.png -L https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true
```

Send a request
```python
import requests

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
                "<|im_start|>user\n<image>\nDescribe this image in a very short sentence.<|im_end|>\n"
                "<|im_start|>assistant\n",
        "image_data": "example_image.png",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 32,
        },
    },
)
print(response.json())
```

The `image_data` can be a file name, a URL, or a base64 encoded string. See also `python/sglang/srt/utils.py:load_image`.
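For the base64 option, a small helper (hypothetical, shown for illustration) that turns raw image bytes into the string form `image_data` accepts:

```python
import base64

def to_base64(image_bytes: bytes) -> str:
    # The server decodes this string back into the original image bytes.
    return base64.b64encode(image_bytes).decode("utf-8")

# The PNG magic bytes stand in for a real image file here.
encoded = to_base64(b"\x89PNG\r\n\x1a\n")
print(encoded)  # a PNG's base64 form always starts with "iVBOR"
```

In real usage, read the bytes with `open("example_image.png", "rb").read()` and pass the result as `"image_data": encoded` in the request above.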
Streaming is supported in the same manner as [above](#streaming).

### Structured Outputs (JSON, Regex, EBNF)
You can specify a JSON schema, Regular Expression or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints.

SGLang supports two grammar backends:

- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and Regular Expression constraints.
- [XGrammar](https://github.com/mlc-ai/xgrammar): Supports JSON schema and EBNF constraints.
  - XGrammar currently uses the [GGML BNF format](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md)

> 🔔 Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified at a time.

Select the grammar backend with the `--grammar-backend` flag (e.g. `--grammar-backend xgrammar`):
```bash
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
--port 30000 --host 0.0.0.0 --grammar-backend [xgrammar|outlines] # xgrammar or outlines (default: outlines)
```

```python
import json
import requests

json_schema = json.dumps({
    "type": "object",
    "properties": {
        "name": {"type": "string", "pattern": "^[\\w]+$"},
        "population": {"type": "integer"},
    },
    "required": ["name", "population"],
})

# JSON (works with both Outlines and XGrammar)
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Here is the information of the capital of France in the JSON format.\n",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 64,
            "json_schema": json_schema,
        },
    },
)
print(response.json())

# Regular expression (Outlines backend only)
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Paris is the capital of",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 64,
            "regex": "(France|England)",
        },
    },
)
print(response.json())

# EBNF (XGrammar backend only)
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Write a greeting.",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 64,
            "ebnf": 'root ::= "Hello" | "Hi" | "Hey"',
        },
    },
)
print(response.json())
```