## Sampling Parameters of SGLang Runtime
This doc describes the sampling parameters of the SGLang Runtime.

The `/generate` endpoint accepts the following arguments in JSON format.

```python
@dataclass
class GenerateReqInput:
    # The input prompt
    text: Union[List[str], str]
    # The image input
    image_data: Optional[Union[List[str], str]] = None
    # The sampling_params
    sampling_params: Union[List[Dict], Dict] = None
    # The request id
    rid: Optional[Union[List[str], str]] = None
    # Whether to return logprobs
    return_logprob: Optional[Union[List[bool], bool]] = None
    # The start location of the prompt for return_logprob
    logprob_start_len: Optional[Union[List[int], int]] = None
    # The number of top logprobs to return
    top_logprobs_num: Optional[Union[List[int], int]] = None
    # Whether to detokenize tokens in logprobs
    return_text_in_logprobs: bool = False
    # Whether to stream output
    stream: bool = False
```

The `sampling_params` field follows this format:

```python
class SamplingParams:
    def __init__(
        self,
        max_new_tokens: int = 16,
        stop: Optional[Union[str, List[str]]] = None,
        temperature: float = 1.0,
        top_p: float = 1.0,
        top_k: int = -1,
        frequency_penalty: float = 0.0,
        presence_penalty: float = 0.0,
        ignore_eos: bool = False,
        skip_special_tokens: bool = True,
        dtype: Optional[str] = None,
        regex: Optional[str] = None,
    ) -> None:
```
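
For instance, a request that decodes greedily, stops at the first newline, and constrains the output with a regular expression could pass a dictionary like the sketch below. The values are illustrative, not defaults:

```python
# An illustrative sampling_params dictionary combining several options.
sampling_params = {
    "temperature": 0,            # greedy decoding
    "max_new_tokens": 64,        # upper bound on generated tokens
    "stop": ["\n"],              # stop at the first newline
    "regex": r"(Paris|London|Berlin)",  # constrain output to this pattern
}
```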

## Examples

### Normal
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```

```python
import requests

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 32,
        },
    },
)
print(response.json())
```
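
Since `text` also accepts a list of strings, a batch of prompts can be sent in one request; the server should return one result per prompt. A minimal sketch, assuming the same server as above:

```python
import requests

# `text` accepts a list, so several prompts go out in a single request.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": ["The capital of France is", "The capital of Japan is"],
        "sampling_params": {"temperature": 0, "max_new_tokens": 16},
    },
)
print(response.json())
```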

### Streaming

```python
import requests, json

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 256,
        },
        "stream": True,
    },
    stream=True,
)

prev = 0
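# Each event line has the form `data: {...}` and carries the full text
# generated so far, so only the new suffix is printed on each iteration.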
for chunk in response.iter_lines(decode_unicode=False):
    chunk = chunk.decode("utf-8")
    if chunk and chunk.startswith("data:"):
        if chunk == "data: [DONE]":
            break
        data = json.loads(chunk[5:].strip("\n"))
        output = data["text"].strip()
        print(output[prev:], end="", flush=True)
        prev = len(output)
print("")
```

### Multi-modal

See [test_httpserver_llava.py](../test/srt/test_httpserver_llava.py).
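
For reference, an image request looks roughly like the sketch below. The prompt template and image placeholder token are model-specific, and `example_image.png` is a hypothetical local path; see the test above for the canonical version.

```python
import requests

# A rough sketch of an image request against a LLaVA-style server.
# The exact prompt format depends on the model being served, and
# "example_image.png" is a placeholder path, not a real file.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Describe this image in a few sentences.",
        "image_data": "example_image.png",
        "sampling_params": {"temperature": 0, "max_new_tokens": 64},
    },
)
print(response.json())
```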