<div align="center">
<img src="https://raw.githubusercontent.com/sgl-project/sglang/main/assets/logo.png" alt="logo" width="400"></img>
</div>

[![PyPI](https://img.shields.io/pypi/v/sglang)](https://pypi.org/project/sglang)
![PyPI - Downloads](https://img.shields.io/pypi/dm/sglang)
[![license](https://img.shields.io/github/license/sgl-project/sglang.svg)](https://github.com/sgl-project/sglang/tree/main/LICENSE)
[![issue resolution](https://img.shields.io/github/issues-closed-raw/sgl-project/sglang)](https://github.com/sgl-project/sglang/issues)
[![open issues](https://img.shields.io/github/issues-raw/sgl-project/sglang)](https://github.com/sgl-project/sglang/issues)

--------------------------------------------------------------------------------

| [**Documentation**](https://sglang.readthedocs.io) | [**Blog**](https://lmsys.org/blog/2024-07-25-sglang-llama3/) | [**Paper**](https://arxiv.org/abs/2312.07104) | [**Slack**](https://join.slack.com/t/sgl-fru7574/shared_invite/zt-2ngly9muu-t37XiH87qvD~6rVBTkTEHw) |

SGLang is a fast serving framework for large language models and vision language models.
It makes your interaction with models faster and more controllable by co-designing the backend runtime and frontend language.

The core features include:
- **Fast Backend Runtime**: Efficient serving with RadixAttention for prefix caching, jump-forward constrained decoding, continuous batching, token attention (paged attention), tensor parallelism, flashinfer kernels, and quantization (AWQ/FP8/GPTQ/Marlin).
- **Flexible Frontend Language**: Enables easy programming of LLM applications with chained generation calls, advanced prompting, control flow, multiple modalities, parallelism, and external interactions.

## News
- [2024/07] 🔥 Faster Llama3 Serving with SGLang Runtime (vs. TensorRT-LLM, vLLM) ([blog](https://lmsys.org/blog/2024-07-25-sglang-llama3/)).
- [2024/04] SGLang is used by the official **LLaVA-NeXT (video)** release ([blog](https://llava-vl.github.io/blog/2024-04-30-llava-next-video/)).
- [2024/02] SGLang enables **3x faster JSON decoding** with compressed finite state machine ([blog](https://lmsys.org/blog/2024-02-05-compressed-fsm/)).

<details>
<summary>More</summary>

- [2024/01] SGLang provides up to **5x faster inference** with RadixAttention ([blog](https://lmsys.org/blog/2024-01-17-sglang/)).
- [2024/01] SGLang powers the serving of the official **LLaVA v1.6** release demo ([usage](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#demo)).

</details>

## Contents
- [Install](#install)
- [Backend: SGLang Runtime (SRT)](#backend-sglang-runtime-srt)
- [Frontend: Structured Generation Language (SGLang)](#frontend-structured-generation-language-sglang)
- [Benchmark And Performance](#benchmark-and-performance)
- [Roadmap](#roadmap)
- [Citation And Acknowledgment](#citation-and-acknowledgment)

## Install

### Method 1: With pip
```
pip install --upgrade pip
pip install "sglang[all]"

# Install FlashInfer CUDA kernels
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.3/
```

### Method 2: From source
```
git clone https://github.com/sgl-project/sglang.git
cd sglang

pip install --upgrade pip
pip install -e "python[all]"

# Install FlashInfer CUDA kernels
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.3/
```

### Method 3: Using docker
The docker images are available on Docker Hub as [lmsysorg/sglang](https://hub.docker.com/r/lmsysorg/sglang/tags), built from [Dockerfile](docker).
Replace `<secret>` below with your Hugging Face Hub [token](https://huggingface.co/docs/hub/en/security-tokens).

```bash
docker run --gpus all \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --host 0.0.0.0 --port 30000
```

### Common Notes
- If you cannot install FlashInfer, check out its [installation](https://docs.flashinfer.ai/installation.html#) page. If you still cannot install it, you can use the slower Triton kernels by adding `--disable-flashinfer` when launching the server.
- If you only need to use the OpenAI backend, you can avoid installing other dependencies by using `pip install "sglang[openai]"`.

## Backend: SGLang Runtime (SRT)
The SGLang Runtime (SRT) is an efficient serving engine.

### Quick Start
Launch a server
```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000
```

Send a request
```
curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Once upon a time,",
    "sampling_params": {
      "max_new_tokens": 16,
      "temperature": 0
    }
  }'
```
Learn more about the argument format [here](docs/en/sampling_params.md).
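Equivalently, the same request can be sent from Python; a minimal sketch using the `requests` library with the same payload as the `curl` example above:

```python
import requests

# Query the native /generate endpoint shown above.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Once upon a time,",
        "sampling_params": {"max_new_tokens": 16, "temperature": 0},
    },
)
print(response.json())
```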

### OpenAI Compatible API
In addition, the server supports OpenAI-compatible APIs.

```python
import openai
client = openai.Client(
    base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")

# Text completion
response = client.completions.create(
    model="default",
    prompt="The capital of France is",
    temperature=0,
    max_tokens=32,
)
print(response)

# Chat completion
response = client.chat.completions.create(
    model="default",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant"},
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    temperature=0,
    max_tokens=64,
)
print(response)
```

It supports streaming, vision, and most features of the Chat/Completions/Models endpoints specified by the [OpenAI API Reference](https://platform.openai.com/docs/api-reference/).
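For example, streaming responses work through the standard OpenAI client; a minimal sketch reusing the `client` created above (the prompt is illustrative):

```python
stream = client.chat.completions.create(
    model="default",
    messages=[{"role": "user", "content": "Write a one-sentence story."}],
    temperature=0,
    max_tokens=64,
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental piece of the reply.
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
```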

### Additional Server Arguments
- Add `--tp 2` to enable tensor parallelism. If you see the error `peer access is not supported between these two devices`, add the `--enable-p2p-check` option.
```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000 --tp 2
```
- Add `--dp 2` to enable data parallelism. It can also be used together with `--tp`. Data parallelism is better for throughput if there is enough memory.
```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000 --dp 2 --tp 2
```
- If you see out-of-memory errors during serving, try reducing the memory usage of the KV cache pool by setting a smaller `--mem-fraction-static`. The default value is `0.9`.
```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000 --mem-fraction-static 0.7
```
- See [hyperparameter_tuning.md](docs/en/hyperparameter_tuning.md) on tuning hyperparameters for better performance.
- Add `--nnodes 2` to run tensor parallelism on multiple nodes. If you have two nodes with two GPUs on each node and want to run TP=4, let `sgl-dev-0` be the hostname of the first node and `50000` be an available port.
```
# Node 0
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --tp 4 --nccl-init-addr sgl-dev-0:50000 --nnodes 2 --node-rank 0

# Node 1
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --tp 4 --nccl-init-addr sgl-dev-0:50000 --nnodes 2 --node-rank 1
```
- If the model does not have a template in the Hugging Face tokenizer, you can specify a [custom chat template](docs/en/custom_chat_template.md).
- To enable fp8 quantization, you can add `--quantization fp8` on an fp16 checkpoint or directly load an fp8 checkpoint without specifying any arguments (see the example after this list).
- To enable experimental torch.compile support, you can add `--enable-torch-compile`. It accelerates small models on small batch sizes.
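For example, a minimal launch command for serving an fp16 checkpoint with fp8 quantization:

```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000 --quantization fp8
```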

### Run Llama 3.1 405B

```bash
## Run 405B (fp8) on a single node
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct-FP8 --tp 8

## Run 405B (fp16) on two nodes
# Replace `172.16.4.52:20000` with the IP address and port of your own first node. CUDA Graph is temporarily disabled via `--disable-cuda-graph`.

# on the first node
GLOO_SOCKET_IFNAME=eth0 python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct --tp 16 --nccl-init-addr 172.16.4.52:20000 --nnodes 2 --node-rank 0 --disable-cuda-graph --mem-frac 0.75

# on the second node
GLOO_SOCKET_IFNAME=eth0 python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct --tp 16 --nccl-init-addr 172.16.4.52:20000 --nnodes 2 --node-rank 1 --disable-cuda-graph --mem-frac 0.75
```

### Supported Models

- Llama / Llama 2 / Llama 3 / Llama 3.1
- Mistral / Mixtral
- Gemma / Gemma 2
- Qwen / Qwen 2 / Qwen 2 MoE
- DeepSeek / DeepSeek 2
- LLaVA 1.5 / 1.6
  - `python -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
  - `python -m sglang.launch_server --model-path liuhaotian/llava-v1.6-vicuna-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
  - `python -m sglang.launch_server --model-path liuhaotian/llava-v1.6-34b --tokenizer-path liuhaotian/llava-v1.6-34b-tokenizer --port 30000`
- LLaVA-NeXT-Video
  - see [examples/usage/llava_video](examples/usage/llava_video)
- Yi-VL
  - see [srt_example_yi_vl.py](examples/quick_start/srt_example_yi_vl.py).
- StableLM
- Command-R
- DBRX
- Grok
- ChatGLM
- InternLM 2
- Mistral NeMo

Instructions for supporting a new model are [here](https://github.com/sgl-project/sglang/blob/main/docs/en/model_support.md).

### Benchmark Performance

- Benchmark a single static batch by running the following command without launching a server. The arguments are the same as those for `launch_server.py`. Note that this is not a dynamic batching server, so it may run out of memory at a batch size that a real server could handle, because a real server truncates the prefill into several batches/chunks while this unit test does not.
  ```
  python -m sglang.bench_latency --model-path meta-llama/Meta-Llama-3-8B-Instruct --batch 32 --input-len 256 --output-len 32
  ```
- Benchmark online serving. Launch a server first and run the following command.
  ```
  python3 -m sglang.bench_serving --backend sglang --num-prompts 10
  ```

## Frontend: Structured Generation Language (SGLang)
The frontend language can be used with local models or API models.

### Quick Start
The example below shows how to use SGLang to answer a multi-turn question.

#### Using Local Models
First, launch a server with
```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000
```

Then, connect to the server and answer a multi-turn question.

```python
from sglang import function, system, user, assistant, gen, set_default_backend, RuntimeEndpoint

@function
def multi_turn_question(s, question_1, question_2):
    s += system("You are a helpful assistant.")
    s += user(question_1)
    s += assistant(gen("answer_1", max_tokens=256))
    s += user(question_2)
    s += assistant(gen("answer_2", max_tokens=256))

set_default_backend(RuntimeEndpoint("http://localhost:30000"))

state = multi_turn_question.run(
    question_1="What is the capital of the United States?",
    question_2="List two local attractions.",
)

for m in state.messages():
    print(m["role"], ":", m["content"])

print(state["answer_1"])
```

#### Using OpenAI Models
Set the OpenAI API Key
```
export OPENAI_API_KEY=sk-******
```

Then, answer a multi-turn question.
```python
from sglang import function, system, user, assistant, gen, set_default_backend, OpenAI

@function
def multi_turn_question(s, question_1, question_2):
    s += system("You are a helpful assistant.")
    s += user(question_1)
    s += assistant(gen("answer_1", max_tokens=256))
    s += user(question_2)
    s += assistant(gen("answer_2", max_tokens=256))

set_default_backend(OpenAI("gpt-3.5-turbo"))

state = multi_turn_question.run(
    question_1="What is the capital of the United States?",
    question_2="List two local attractions.",
)

for m in state.messages():
    print(m["role"], ":", m["content"])

print(state["answer_1"])
```

#### More Examples

Anthropic and VertexAI (Gemini) models are also supported.
You can find more examples at [examples/quick_start](examples/quick_start).

### Language Features
To begin with, import sglang.
```python
import sglang as sgl
```

`sglang` provides some simple primitives such as `gen`, `select`, `fork`, `image`.
You can implement your prompt flow in a function decorated by `sgl.function`.
You can then invoke the function with `run` or `run_batch`.
The system will manage the state, chat template, parallelism and batching for you.

The complete code for the examples below can be found at [readme_examples.py](examples/usage/readme_examples.py).

#### Control Flow
You can use any Python code within the function body, including control flow, nested function calls, and external libraries.

```python
@sgl.function
def tool_use(s, question):
    s += "To answer this question: " + question + ". "
    s += "I need to use a " + sgl.gen("tool", choices=["calculator", "search engine"]) + ". "
    if s["tool"] == "calculator":
        s += "The math expression is" + sgl.gen("expression")
    elif s["tool"] == "search engine":
        s += "The key word to search is" + sgl.gen("word")
```
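A decorated function is invoked like the Quick Start examples; a brief usage sketch, assuming a backend has already been set with `set_default_backend` (the question is illustrative):

```python
state = tool_use.run(question="What is 5 + 7?")
print(state["tool"])  # either "calculator" or "search engine"
```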

#### Parallelism
Use `fork` to launch parallel prompts.
Because `sgl.gen` is non-blocking, the for loop below issues two generation calls in parallel.

```python
@sgl.function
def tip_suggestion(s):
    s += (
        "Here are two tips for staying healthy: "
        "1. Balanced Diet. 2. Regular Exercise.\n\n"
    )

    forks = s.fork(2)
    for i, f in enumerate(forks):
        f += f"Now, expand tip {i+1} into a paragraph:\n"
        f += sgl.gen("detailed_tip", max_tokens=256, stop="\n\n")

    s += "Tip 1:" + forks[0]["detailed_tip"] + "\n"
    s += "Tip 2:" + forks[1]["detailed_tip"] + "\n"
    s += "In summary" + sgl.gen("summary")
```

#### Multi-Modality
Use `sgl.image` to pass an image as input.

```python
@sgl.function
def image_qa(s, image_file, question):
    s += sgl.user(sgl.image(image_file) + question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=256))
```
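After launching a vision language model server (e.g., one of the LLaVA commands above), the function runs like any other; a minimal usage sketch with an illustrative image path:

```python
state = image_qa.run(image_file="./example.png", question="What is in this image?")
print(state["answer"])
```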

See also [srt_example_llava.py](examples/quick_start/srt_example_llava.py).

#### Constrained Decoding
Use `regex` to specify a regular expression as a decoding constraint.
This is only supported for local models.

```python
@sgl.function
def regular_expression_gen(s):
    s += "Q: What is the IP address of the Google DNS servers?\n"
    s += "A: " + sgl.gen(
        "answer",
        temperature=0,
        regex=r"((25[0-5]|2[0-4]\d|[01]?\d\d?).){3}(25[0-5]|2[0-4]\d|[01]?\d\d?)",
    )
```

#### JSON Decoding
Use `regex` to specify a JSON schema with a regular expression.

```python
character_regex = (
    r"""\{\n"""
    + r"""    "name": "[\w\d\s]{1,16}",\n"""
    + r"""    "house": "(Gryffindor|Slytherin|Ravenclaw|Hufflepuff)",\n"""
    + r"""    "blood status": "(Pure-blood|Half-blood|Muggle-born)",\n"""
    + r"""    "occupation": "(student|teacher|auror|ministry of magic|death eater|order of the phoenix)",\n"""
    + r"""    "wand": \{\n"""
    + r"""        "wood": "[\w\d\s]{1,16}",\n"""
    + r"""        "core": "[\w\d\s]{1,16}",\n"""
    + r"""        "length": [0-9]{1,2}\.[0-9]{0,2}\n"""
    + r"""    \},\n"""
    + r"""    "alive": "(Alive|Deceased)",\n"""
    + r"""    "patronus": "[\w\d\s]{1,16}",\n"""
    + r"""    "bogart": "[\w\d\s]{1,16}"\n"""
    + r"""\}"""
)

@sgl.function
def character_gen(s, name):
    s += name + " is a character in Harry Potter. Please fill in the following information about this character.\n"
    s += sgl.gen("json_output", max_tokens=256, regex=character_regex)
```
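A brief usage sketch (the character name is illustrative):

```python
state = character_gen.run(name="Harry Potter")
print(state["json_output"])  # a JSON string matching character_regex
```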

See also [json_decode.py](examples/usage/json_decode.py) for an additional example on specifying formats with Pydantic models.

#### Batching
Use `run_batch` to run a batch of requests with continuous batching.

```python
@sgl.function
def text_qa(s, question):
    s += "Q: " + question + "\n"
    s += "A:" + sgl.gen("answer", stop="\n")

states = text_qa.run_batch(
    [
        {"question": "What is the capital of the United Kingdom?"},
        {"question": "What is the capital of France?"},
        {"question": "What is the capital of Japan?"},
    ],
    progress_bar=True
)
```
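Each element of `states` can then be inspected like the result of a single `run` call:

```python
for state in states:
    print(state["answer"])
```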

#### Streaming
Add `stream=True` to enable streaming.

```python
@sgl.function
def text_qa(s, question):
    s += "Q: " + question + "\n"
    s += "A:" + sgl.gen("answer", stop="\n")

state = text_qa.run(
    question="What is the capital of France?",
    temperature=0.1,
    stream=True
)

for out in state.text_iter():
    print(out, end="", flush=True)
```

#### Tips and Implementation Details
- The `choices` argument in `sgl.gen` is implemented by computing the [token-length normalized log probabilities](https://blog.eleuther.ai/multiple-choice-normalization/) of all choices and selecting the one with the highest probability (see the sketch after this list).
- The `regex` argument in `sgl.gen` is implemented through autoregressive decoding with logit bias masking, according to the constraints set by the regex. It is compatible with `temperature=0` and `temperature != 0`.
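As a conceptual illustration of the `choices` selection rule above, here is a standalone sketch of length-normalized scoring; this is not SGLang's internal code:

```python
def select_choice(token_logprobs_per_choice):
    """Pick the choice with the highest average (length-normalized) logprob."""
    scores = [sum(lp) / len(lp) for lp in token_logprobs_per_choice]
    return max(range(len(scores)), key=scores.__getitem__)

# A 4-token choice with average logprob -0.5 beats a 1-token choice with
# average -0.9, even though the raw sum (-2.0 vs. -0.9) favors the shorter one.
print(select_choice([[-0.5, -0.5, -0.5, -0.5], [-0.9]]))  # -> 0
```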


## Benchmark And Performance
![8b_throughput](https://lmsys.org/images/blog/sglang_llama3/8b_throughput.svg)
![70b_fp8_throughput](https://lmsys.org/images/blog/sglang_llama3/70b_fp8_throughput.svg)

Learn more at this [blog](https://lmsys.org/blog/2024-07-25-sglang-llama3/).

## Roadmap
[Development Roadmap (2024 Q3)](https://github.com/sgl-project/sglang/issues/634)

## Citation And Acknowledgment
Please cite our paper, [SGLang: Efficient Execution of Structured Language Model Programs](https://arxiv.org/abs/2312.07104), if you find the project useful.
We also learned from the design and reused code from the following projects: [Guidance](https://github.com/guidance-ai/guidance), [vLLM](https://github.com/vllm-project/vllm), [LightLLM](https://github.com/ModelTC/lightllm), [FlashInfer](https://github.com/flashinfer-ai/flashinfer), [Outlines](https://github.com/outlines-dev/outlines), and [LMQL](https://github.com/eth-sri/lmql).