{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Offline Engine API\n",
    "\n",
    "SGLang provides a direct inference engine without the need for an HTTP server, especially for use cases where additional HTTP server adds unnecessary complexity or overhead. Here are two general use cases:\n",
    "\n",
    "- Offline Batch Inference\n",
    "- Custom Server on Top of the Engine\n",
    "\n",
    "This document focuses on the offline batch inference, demonstrating four different inference modes:\n",
    "\n",
    "- Non-streaming synchronous generation\n",
    "- Streaming synchronous generation\n",
    "- Non-streaming asynchronous generation\n",
    "- Streaming asynchronous generation\n",
    "\n",
    "Additionally, you can easily build a custom server on top of the SGLang offline engine. A detailed example working in a python script can be found in [custom_server](https://github.com/sgl-project/sglang/blob/main/examples/runtime/engine/custom_server.py)."
   ]
  },
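  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sketch of the custom-server use case (the linked `custom_server.py` is the authoritative example), the engine can be wrapped with any ASGI framework. The FastAPI app, route name, and request shape below are illustrative assumptions:\n",
    "\n",
    "```python\n",
    "# A minimal sketch of a custom server on top of sgl.Engine.\n",
    "# The FastAPI app, route name, and request shape are illustrative assumptions.\n",
    "import fastapi\n",
    "\n",
    "import sglang as sgl\n",
    "\n",
    "app = fastapi.FastAPI()\n",
    "llm = sgl.Engine(model_path=\"meta-llama/Meta-Llama-3.1-8B-Instruct\")\n",
    "\n",
    "\n",
    "@app.post(\"/generate\")\n",
    "async def generate(request: dict):\n",
    "    output = await llm.async_generate(request[\"prompt\"], request.get(\"sampling_params\"))\n",
    "    return {\"text\": output[\"text\"]}\n",
    "```"
   ]
  },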
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Offline Batch Inference\n",
    "\n",
    "SGLang offline engine supports batch inference with efficient scheduling."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# launch the offline engine\n",
40
    "from sglang.utils import stream_and_merge, async_stream_and_merge\n",
Chayenne's avatar
Chayenne committed
41
42
    "import sglang as sgl\n",
    "import asyncio\n",
43
44
45
46
47
    "from sglang.test.test_utils import is_in_ci\n",
    "\n",
    "if is_in_ci():\n",
    "    import patch\n",
    "\n",
    "\n",
    "llm = sgl.Engine(model_path=\"meta-llama/Meta-Llama-3.1-8B-Instruct\")"
   ]
  },
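  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`sgl.Engine` accepts the same arguments as the SGLang server launcher. As a sketch, two commonly used knobs (argument names follow SGLang's server arguments; the values are illustrative):\n",
    "\n",
    "```python\n",
    "# Illustrative only -- argument names follow SGLang's server arguments.\n",
    "llm = sgl.Engine(\n",
    "    model_path=\"meta-llama/Meta-Llama-3.1-8B-Instruct\",\n",
    "    tp_size=2,                # shard the model across 2 GPUs\n",
    "    mem_fraction_static=0.8,  # fraction of GPU memory for weights and KV cache\n",
    ")\n",
    "```"
   ]
  },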
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Non-streaming Synchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Hello, my name is\",\n",
    "    \"The president of the United States is\",\n",
    "    \"The capital of France is\",\n",
    "    \"The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
    "\n",
    "outputs = llm.generate(prompts, sampling_params)\n",
    "for prompt, output in zip(prompts, outputs):\n",
    "    print(\"===============================\")\n",
    "    print(f\"Prompt: {prompt}\\nGenerated text: {output['text']}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Streaming Synchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
    "    \"Provide a concise factual statement about France’s capital city. The capital of France is\",\n",
    "    \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\n",
    "    \"temperature\": 0.2,\n",
    "    \"top_p\": 0.9,\n",
    "}\n",
    "\n",
    "print(\"\\n=== Testing synchronous streaming generation with overlap removal ===\\n\")\n",
    "\n",
    "for prompt in prompts:\n",
    "    print(f\"Prompt: {prompt}\")\n",
    "    merged_output = stream_and_merge(llm, prompt, sampling_params)\n",
    "    print(\"Generated text:\", merged_output)\n",
    "    print()"
   ]
  },
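  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`stream_and_merge` is a convenience wrapper: it iterates over `llm.generate(prompt, sampling_params, stream=True)` and removes the text overlap between consecutive chunks. A rough sketch of the raw streaming loop (assuming each chunk is a dict with a `text` field, like the non-streaming output):\n",
    "\n",
    "```python\n",
    "# Raw streaming loop (sketch). Consecutive chunks may contain\n",
    "# overlapping text, which is what stream_and_merge cleans up.\n",
    "for chunk in llm.generate(prompts[0], sampling_params, stream=True):\n",
    "    print(chunk[\"text\"], end=\"\", flush=True)\n",
    "```"
   ]
  },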
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Non-streaming Asynchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
    "    \"Provide a concise factual statement about France’s capital city. The capital of France is\",\n",
    "    \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
    "\n",
    "print(\"\\n=== Testing asynchronous batch generation ===\")\n",
    "\n",
    "\n",
    "async def main():\n",
    "    outputs = await llm.async_generate(prompts, sampling_params)\n",
    "\n",
    "    for prompt, output in zip(prompts, outputs):\n",
    "        print(f\"\\nPrompt: {prompt}\")\n",
    "        print(f\"Generated text: {output['text']}\")\n",
    "\n",
    "\n",
    "asyncio.run(main())"
   ]
  },
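  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`async_generate` also accepts a single prompt, so requests can be issued as independent tasks and gathered concurrently instead of one batched call. A sketch (assuming the single-prompt call returns one output dict):\n",
    "\n",
    "```python\n",
    "# Sketch: per-prompt tasks gathered concurrently instead of one batched call.\n",
    "async def gather_all():\n",
    "    tasks = [llm.async_generate(p, sampling_params) for p in prompts]\n",
    "    outputs = await asyncio.gather(*tasks)\n",
    "    for prompt, output in zip(prompts, outputs):\n",
    "        print(f\"Prompt: {prompt}\\nGenerated text: {output['text']}\")\n",
    "\n",
    "\n",
    "asyncio.run(gather_all())\n",
    "```"
   ]
  },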
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Streaming Asynchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
    "    \"Provide a concise factual statement about France’s capital city. The capital of France is\",\n",
    "    \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
    "\n",
    "print(\"\\n=== Testing asynchronous streaming generation (no repeats) ===\")\n",
    "\n",
    "\n",
    "async def main():\n",
    "    for prompt in prompts:\n",
    "        print(f\"\\nPrompt: {prompt}\")\n",
    "        print(\"Generated text: \", end=\"\", flush=True)\n",
    "\n",
    "        # Replace direct calls to async_generate with our custom overlap-aware version\n",
    "        async for cleaned_chunk in async_stream_and_merge(llm, prompt, sampling_params):\n",
    "            print(cleaned_chunk, end=\"\", flush=True)\n",
    "\n",
    "        print()  # New line after each prompt\n",
    "\n",
    "\n",
    "asyncio.run(main())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm.shutdown()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Return Hidden States"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = sgl.Engine(\n",
    "    model_path=\"meta-llama/Meta-Llama-3.1-8B-Instruct\", return_hidden_states=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Hello, my name is\",\n",
    "    \"The president of the United States is\",\n",
    "    \"The capital of France is\",\n",
    "    \"The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95, \"max_new_tokens\": 10}\n",
    "\n",
    "outputs = llm.generate(prompts, sampling_params=sampling_params)\n",
    "for prompt, output in zip(prompts, outputs):\n",
    "    print(\"===============================\")\n",
    "    print(\n",
    "        f\"Prompt: {prompt}\\nGenerated text: {output['text']}\\nPrompt_Tokens: {output['meta_info']['prompt_tokens']}\\tCompletion_tokens: {output['meta_info']['completion_tokens']}\\nHidden states: {[i.shape for i in output['meta_info']['hidden_states']]}\"\n",
    "    )\n",
    "    print()"
   ]
  },
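  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The hidden states come back under `meta_info` as a list of tensors, one per forward pass. A sketch for reading the final generated token's hidden vector (assuming the per-step tensors are 2-D, as the shapes printed above suggest):\n",
    "\n",
    "```python\n",
    "# Sketch: pull the final generated token's hidden vector from one output.\n",
    "hs = outputs[0][\"meta_info\"][\"hidden_states\"]  # list of tensors, one per forward pass\n",
    "last_token_hidden = hs[-1][-1]  # last step, last row\n",
    "print(last_token_hidden.shape)\n",
    "```"
   ]
  },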
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm.shutdown()"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}