{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Offline Engine API\n",
    "\n",
    "SGLang provides a direct inference engine without an HTTP server, for use cases where an additional HTTP server would add unnecessary complexity or overhead. Here are two general use cases:\n",
    "\n",
    "- Offline Batch Inference\n",
    "- Custom Server on Top of the Engine\n",
    "\n",
    "This document focuses on offline batch inference and demonstrates four inference modes:\n",
    "\n",
    "- Non-streaming synchronous generation\n",
    "- Streaming synchronous generation\n",
    "- Non-streaming asynchronous generation\n",
    "- Streaming asynchronous generation\n",
    "\n",
    "Additionally, you can easily build a custom server on top of the SGLang offline engine. A detailed example of a custom server in a Python script can be found in [custom_server](https://github.com/sgl-project/sglang/blob/main/examples/runtime/engine/custom_server.py).\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Advanced Usage\n",
    "\n",
    "The engine supports [VLM inference](https://github.com/sgl-project/sglang/blob/main/examples/runtime/engine/offline_batch_inference_vlm.py) as well as [extracting hidden states](https://github.com/sgl-project/sglang/blob/main/examples/runtime/hidden_states).\n",
    "\n",
    "Please see [the examples](https://github.com/sgl-project/sglang/tree/main/examples/runtime/engine) for further use cases."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Offline Batch Inference\n",
    "\n",
    "The SGLang offline engine supports batch inference with efficient scheduling."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# launch the offline engine\n",
    "import asyncio\n",
    "\n",
    "import sglang as sgl\n",
    "\n",
    "from sglang.test.test_utils import is_in_ci\n",
    "from sglang.utils import async_stream_and_merge, stream_and_merge\n",
    "\n",
    "if is_in_ci():\n",
    "    import patch\n",
    "\n",
    "\n",
    "llm = sgl.Engine(model_path=\"meta-llama/Meta-Llama-3.1-8B-Instruct\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Non-streaming Synchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Hello, my name is\",\n",
    "    \"The president of the United States is\",\n",
    "    \"The capital of France is\",\n",
    "    \"The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
    "\n",
    "outputs = llm.generate(prompts, sampling_params)\n",
    "for prompt, output in zip(prompts, outputs):\n",
    "    print(\"===============================\")\n",
    "    print(f\"Prompt: {prompt}\\nGenerated text: {output['text']}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Streaming Synchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
    "    \"Provide a concise factual statement about France’s capital city. The capital of France is\",\n",
    "    \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\n",
    "    \"temperature\": 0.2,\n",
    "    \"top_p\": 0.9,\n",
    "}\n",
    "\n",
    "print(\"\\n=== Testing synchronous streaming generation with overlap removal ===\\n\")\n",
    "\n",
    "for prompt in prompts:\n",
    "    print(f\"Prompt: {prompt}\")\n",
    "    merged_output = stream_and_merge(llm, prompt, sampling_params)\n",
    "    print(\"Generated text:\", merged_output)\n",
    "    print()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Non-streaming Asynchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
    "    \"Provide a concise factual statement about France’s capital city. The capital of France is\",\n",
    "    \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
    "\n",
    "print(\"\\n=== Testing asynchronous batch generation ===\")\n",
    "\n",
    "\n",
    "async def main():\n",
    "    outputs = await llm.async_generate(prompts, sampling_params)\n",
    "\n",
    "    for prompt, output in zip(prompts, outputs):\n",
    "        print(f\"\\nPrompt: {prompt}\")\n",
    "        print(f\"Generated text: {output['text']}\")\n",
    "\n",
    "\n",
    "asyncio.run(main())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Streaming Asynchronous Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    \"Write a short, neutral self-introduction for a fictional character. Hello, my name is\",\n",
    "    \"Provide a concise factual statement about France’s capital city. The capital of France is\",\n",
    "    \"Explain possible future trends in artificial intelligence. The future of AI is\",\n",
    "]\n",
    "\n",
    "sampling_params = {\"temperature\": 0.8, \"top_p\": 0.95}\n",
    "\n",
    "print(\"\\n=== Testing asynchronous streaming generation (no repeats) ===\")\n",
    "\n",
    "\n",
    "async def main():\n",
    "    for prompt in prompts:\n",
    "        print(f\"\\nPrompt: {prompt}\")\n",
    "        print(\"Generated text: \", end=\"\", flush=True)\n",
    "\n",
    "        # async_stream_and_merge streams tokens and strips overlapping text between chunks\n",
    "        async for cleaned_chunk in async_stream_and_merge(llm, prompt, sampling_params):\n",
    "            print(cleaned_chunk, end=\"\", flush=True)\n",
    "\n",
    "        print()  # New line after each prompt\n",
    "\n",
    "\n",
    "asyncio.run(main())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm.shutdown()"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}