{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Offline Engine API\n",
    "\n",
    "SGLang provides a direct inference engine that works without an HTTP server, which is useful when the additional HTTP layer adds unnecessary complexity or overhead. There are two general use cases:\n",
    "\n",
    "- Offline Batch Inference\n",
    "- Custom Server on Top of the Engine\n",
    "\n",
    "This document focuses on offline batch inference and demonstrates four different inference modes:\n",
    "\n",
    "- Non-streaming synchronous generation\n",
    "- Streaming synchronous generation\n",
    "- Non-streaming asynchronous generation\n",
    "- Streaming asynchronous generation\n",
    "\n",
    "**To launch the offline engine in your own script, an `if __name__ == \"__main__\":` guard is necessary, because SGLang uses the \"spawn\" method to create subprocesses. For more details, please refer to [launch_engine](https://github.com/sgl-project/sglang/blob/main/examples/runtime/engine/launch_engine.py).**\n",
    "\n",
    "Additionally, you can easily build a custom server on top of the SGLang offline engine. A detailed example of a custom server in a Python script can be found in [custom_server](https://github.com/sgl-project/sglang/blob/main/examples/runtime/engine/custom_server.py)."
   ]
  },
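  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, a standalone script that launches the engine might look like the following minimal sketch (the model path and prompt are illustrative, reusing the APIs shown below):\n",
    "\n",
    "```python\n",
    "import sglang as sgl\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    # The __main__ guard is required because subprocesses are created via \"spawn\"\n",
    "    llm = sgl.Engine(model_path=\"meta-llama/Meta-Llama-3.1-8B-Instruct\")\n",
    "    outputs = llm.generate([\"Hello, my name is\"], {\"temperature\": 0.8})\n",
    "    print(outputs[0][\"text\"])\n",
    "    llm.shutdown()\n",
    "```"
   ]
  },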
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Offline Batch Inference\n",
    "\n",
    "The SGLang offline engine supports batch inference with efficient scheduling."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# launch the offline engine\n",
    "from sglang.utils import stream_and_merge, async_stream_and_merge\n",
    "import sglang as sgl\n",
    "import asyncio\n",
    "from sglang.test.test_utils import is_in_ci\n",
    "\n",
    "if is_in_ci():\n",
    "    import patch\n",
    "\n",
    "llm = sgl.Engine(model_path=\"meta-llama/Meta-Llama-3.1-8B-Instruct\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Non-streaming Synchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Hello, my name is\",\n",
    "    \"The president of the United States is\",\n",
    "    \"The capital of France is\",\n",
    "    \"The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
    "\n",
    "outputs = llm.generate(prompts, sampling_params)\n",
    "for prompt, output in zip(prompts, outputs):\n",
    "    print(\"===============================\")\n",
    "    print(f\"Prompt: {prompt}\\nGenerated text: {output['text']}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Streaming Synchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
    "    \"Provide a concise factual statement about France’s capital city. The capital of France is\",\n",
    "    \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\n",
    "    \"temperature\": 0.2,\n",
    "    \"top_p\": 0.9,\n",
    "}\n",
    "\n",
    "print(\"\\n=== Testing synchronous streaming generation with overlap removal ===\\n\")\n",
    "\n",
    "for prompt in prompts:\n",
    "    print(f\"Prompt: {prompt}\")\n",
    "    merged_output = stream_and_merge(llm, prompt, sampling_params)\n",
    "    print(\"Generated text:\", merged_output)\n",
    "    print()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Non-streaming Asynchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
    "    \"Provide a concise factual statement about France’s capital city. The capital of France is\",\n",
    "    \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
    "\n",
    "print(\"\\n=== Testing asynchronous batch generation ===\")\n",
    "\n",
    "\n",
    "async def main():\n",
    "    outputs = await llm.async_generate(prompts, sampling_params)\n",
    "\n",
    "    for prompt, output in zip(prompts, outputs):\n",
    "        print(f\"\\nPrompt: {prompt}\")\n",
    "        print(f\"Generated text: {output['text']}\")\n",
    "\n",
    "\n",
    "asyncio.run(main())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Streaming Asynchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
    "    \"Provide a concise factual statement about France’s capital city. The capital of France is\",\n",
    "    \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
    "\n",
    "print(\"\\n=== Testing asynchronous streaming generation (no repeats) ===\")\n",
    "\n",
    "\n",
    "async def main():\n",
    "    for prompt in prompts:\n",
    "        print(f\"\\nPrompt: {prompt}\")\n",
    "        print(\"Generated text: \", end=\"\", flush=True)\n",
    "\n",
    "        # async_stream_and_merge wraps async_generate and strips overlapping text between consecutive streamed chunks\n",
    "        async for cleaned_chunk in async_stream_and_merge(llm, prompt, sampling_params):\n",
    "            print(cleaned_chunk, end=\"\", flush=True)\n",
    "\n",
    "        print()  # New line after each prompt\n",
    "\n",
    "\n",
    "asyncio.run(main())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm.shutdown()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Return Hidden States"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = sgl.Engine(\n",
    "    model_path=\"meta-llama/Meta-Llama-3.1-8B-Instruct\", return_hidden_states=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Hello, my name is\",\n",
    "    \"The president of the United States is\",\n",
    "    \"The capital of France is\",\n",
    "    \"The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95, \"max_new_tokens\": 10}\n",
    "\n",
    "outputs = llm.generate(prompts, sampling_params=sampling_params)\n",
    "for prompt, output in zip(prompts, outputs):\n",
    "    print(\"===============================\")\n",
    "    print(\n",
    "        f\"Prompt: {prompt}\\nGenerated text: {output['text']}\\nPrompt_Tokens: {output['meta_info']['prompt_tokens']}\\tCompletion_tokens: {output['meta_info']['completion_tokens']}\\nHidden states: {[i.shape for i in output['meta_info']['hidden_states']]}\"\n",
    "    )\n",
    "    print()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm.shutdown()"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}