{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Native APIs\n",
    "\n",
    "Apart from the OpenAI-compatible APIs, the SGLang Runtime also provides its native server APIs. This tutorial covers the following APIs:\n",
    "\n",
    "- `/generate` (text generation model)\n",
    "- `/get_server_args`\n",
    "- `/get_model_info`\n",
    "- `/health`\n",
    "- `/health_generate`\n",
    "- `/flush_cache`\n",
    "- `/get_memory_pool_size`\n",
    "- `/update_weights`\n",
    "- `/encode` (embedding model)\n",
    "- `/classify` (reward model)\n",
    "\n",
    "We mainly use `requests` to test these APIs in the following examples. You can also use `curl`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Launch A Server"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:08.536886Z",
     "iopub.status.busy": "2024-11-05T05:08:08.536763Z",
     "iopub.status.idle": "2024-11-05T05:08:34.725831Z",
     "shell.execute_reply": "2024-11-05T05:08:34.725316Z"
    }
   },
   "outputs": [],
   "source": [
    "from sglang.utils import (\n",
    "    execute_shell_command,\n",
    "    wait_for_server,\n",
    "    terminate_process,\n",
    "    print_highlight,\n",
    ")\n",
    "\n",
    "import requests\n",
    "\n",
    "server_process = execute_shell_command(\n",
    "    \"\"\"\n",
    "python3 -m sglang.launch_server --model-path meta-llama/Llama-3.2-1B-Instruct --port=30010\n",
    "\"\"\"\n",
    ")\n",
    "\n",
    "wait_for_server(\"http://localhost:30010\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Generate (text generation model)\n",
    "Generate completions. This is similar to the `/v1/completions` in OpenAI API. Detailed parameters can be found in the [sampling parameters](../references/sampling_params.md)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:34.727530Z",
     "iopub.status.busy": "2024-11-05T05:08:34.727333Z",
     "iopub.status.idle": "2024-11-05T05:08:35.359784Z",
     "shell.execute_reply": "2024-11-05T05:08:35.359090Z"
    }
   },
   "outputs": [],
   "source": [
    "url = \"http://localhost:30010/generate\"\n",
    "data = {\"text\": \"What is the capital of France?\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "print_highlight(response.json())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Get Server Args\n",
    "Get the arguments of a server."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:35.362286Z",
     "iopub.status.busy": "2024-11-05T05:08:35.362140Z",
     "iopub.status.idle": "2024-11-05T05:08:35.368711Z",
     "shell.execute_reply": "2024-11-05T05:08:35.368220Z"
    }
   },
   "outputs": [],
   "source": [
    "url = \"http://localhost:30010/get_server_args\"\n",
    "\n",
    "response = requests.get(url)\n",
    "print_highlight(response.json())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Get Model Info\n",
    "\n",
    "Get the information of the model.\n",
    "\n",
    "- `model_path`: The path/name of the model.\n",
    "- `is_generation`: Whether the model is a generation model or an embedding model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:35.371313Z",
     "iopub.status.busy": "2024-11-05T05:08:35.370877Z",
     "iopub.status.idle": "2024-11-05T05:08:35.376712Z",
     "shell.execute_reply": "2024-11-05T05:08:35.376230Z"
    }
   },
   "outputs": [],
   "source": [
    "url = \"http://localhost:30010/get_model_info\"\n",
    "\n",
    "response = requests.get(url)\n",
    "response_json = response.json()\n",
    "print_highlight(response_json)\n",
    "assert response_json[\"model_path\"] == \"meta-llama/Llama-3.2-1B-Instruct\"\n",
    "assert response_json[\"is_generation\"] is True\n",
    "assert response_json.keys() == {\"model_path\", \"is_generation\"}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Health Check\n",
    "- `/health`: Check the health of the server.\n",
    "- `/health_generate`: Check the health of the server by generating one token."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:35.378982Z",
     "iopub.status.busy": "2024-11-05T05:08:35.378597Z",
     "iopub.status.idle": "2024-11-05T05:08:35.391820Z",
     "shell.execute_reply": "2024-11-05T05:08:35.391336Z"
    }
   },
   "outputs": [],
   "source": [
    "url = \"http://localhost:30010/health_generate\"\n",
    "\n",
    "response = requests.get(url)\n",
    "print_highlight(response.text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:35.393748Z",
     "iopub.status.busy": "2024-11-05T05:08:35.393606Z",
     "iopub.status.idle": "2024-11-05T05:08:35.398645Z",
     "shell.execute_reply": "2024-11-05T05:08:35.398145Z"
    }
   },
   "outputs": [],
   "source": [
    "url = \"http://localhost:30010/health\"\n",
    "\n",
    "response = requests.get(url)\n",
    "print_highlight(response.text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Flush Cache\n",
    "\n",
    "Flush the radix cache. It will be automatically triggered when the model weights are updated by the `/update_weights` API."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:35.400683Z",
     "iopub.status.busy": "2024-11-05T05:08:35.400419Z",
     "iopub.status.idle": "2024-11-05T05:08:35.406146Z",
     "shell.execute_reply": "2024-11-05T05:08:35.405661Z"
    }
   },
   "outputs": [],
   "source": [
    "# flush cache\n",
    "\n",
    "url = \"http://localhost:30010/flush_cache\"\n",
    "\n",
    "response = requests.post(url)\n",
    "print_highlight(response.text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Get Memory Pool Size\n",
    "\n",
    "Get the memory pool size in number of tokens.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:35.408176Z",
     "iopub.status.busy": "2024-11-05T05:08:35.407884Z",
     "iopub.status.idle": "2024-11-05T05:08:35.413587Z",
     "shell.execute_reply": "2024-11-05T05:08:35.413108Z"
    }
   },
   "outputs": [],
   "source": [
    "# get_memory_pool_size\n",
    "\n",
    "url = \"http://localhost:30010/get_memory_pool_size\"\n",
    "\n",
    "response = requests.get(url)\n",
    "print_highlight(response.text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Update Weights\n",
    "\n",
    "Update model weights without restarting the server. This is useful for continuous evaluation during training. It only works for models that share the same architecture and parameter size as the currently loaded model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:35.416090Z",
     "iopub.status.busy": "2024-11-05T05:08:35.415793Z",
     "iopub.status.idle": "2024-11-05T05:08:36.552549Z",
     "shell.execute_reply": "2024-11-05T05:08:36.551870Z"
    }
   },
   "outputs": [],
   "source": [
    "# successful update with same architecture and size\n",
    "\n",
    "url = \"http://localhost:30010/update_weights\"\n",
    "data = {\"model_path\": \"meta-llama/Llama-3.2-1B\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "print_highlight(response.text)\n",
    "assert response.json()[\"success\"] is True\n",
    "assert response.json()[\"message\"] == \"Succeeded to update model weights.\"\n",
    "assert response.json().keys() == {\"success\", \"message\"}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:36.554823Z",
     "iopub.status.busy": "2024-11-05T05:08:36.554680Z",
     "iopub.status.idle": "2024-11-05T05:08:38.053945Z",
     "shell.execute_reply": "2024-11-05T05:08:38.053034Z"
    }
   },
   "outputs": [],
   "source": [
    "# failed update with different parameter size\n",
    "\n",
    "url = \"http://localhost:30010/update_weights\"\n",
    "data = {\"model_path\": \"meta-llama/Llama-3.2-3B\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print_highlight(response_json)\n",
    "assert response_json[\"success\"] is False\n",
    "assert response_json[\"message\"] == (\n",
    "    \"Failed to update weights: The size of tensor a (2048) must match \"\n",
    "    \"the size of tensor b (3072) at non-singleton dimension 1.\\n\"\n",
    "    \"Rolling back to original weights.\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Encode (embedding model)\n",
    "\n",
    "Encode text into embeddings. Note that this API is only available for [embedding models](openai_api_embeddings.html#openai-apis-embedding) and will raise an error for generation models.\n",
    "Therefore, we launch a new server to serve an embedding model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:08:38.056783Z",
     "iopub.status.busy": "2024-11-05T05:08:38.056497Z",
     "iopub.status.idle": "2024-11-05T05:09:04.436030Z",
     "shell.execute_reply": "2024-11-05T05:09:04.435311Z"
    }
   },
   "outputs": [],
   "source": [
    "terminate_process(server_process)\n",
    "\n",
    "embedding_process = execute_shell_command(\n",
    "    \"\"\"\n",
    "python -m sglang.launch_server --model-path Alibaba-NLP/gte-Qwen2-7B-instruct \\\n",
    "    --port 30020 --host 0.0.0.0 --is-embedding\n",
    "\"\"\"\n",
    ")\n",
    "\n",
    "wait_for_server(\"http://localhost:30020\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:09:04.438987Z",
     "iopub.status.busy": "2024-11-05T05:09:04.438568Z",
     "iopub.status.idle": "2024-11-05T05:09:04.485291Z",
     "shell.execute_reply": "2024-11-05T05:09:04.484829Z"
    }
   },
   "outputs": [],
   "source": [
    "# successful encode for embedding model\n",
    "\n",
    "url = \"http://localhost:30020/encode\"\n",
    "data = {\"model\": \"Alibaba-NLP/gte-Qwen2-7B-instruct\", \"text\": \"Once upon a time\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print_highlight(f\"Text embedding (first 10): {response_json['embedding'][:10]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Classify (reward model)\n",
    "\n",
    "SGLang Runtime also supports reward models. Here we use a reward model to score the quality of a pair of generations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:09:04.487191Z",
     "iopub.status.busy": "2024-11-05T05:09:04.486929Z",
     "iopub.status.idle": "2024-11-05T05:09:25.553481Z",
     "shell.execute_reply": "2024-11-05T05:09:25.552747Z"
    }
   },
   "outputs": [],
   "source": [
    "terminate_process(embedding_process)\n",
    "\n",
    "# Note that SGLang now treats embedding models and reward models as the same type of models.\n",
    "# This will be updated in the future.\n",
    "\n",
    "reward_process = execute_shell_command(\n",
    "    \"\"\"\n",
    "python -m sglang.launch_server --model-path Skywork/Skywork-Reward-Llama-3.1-8B-v0.2 --port 30030 --host 0.0.0.0 --is-embedding\n",
    "\"\"\"\n",
    ")\n",
    "\n",
    "wait_for_server(\"http://localhost:30030\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:09:25.555813Z",
     "iopub.status.busy": "2024-11-05T05:09:25.555666Z",
     "iopub.status.idle": "2024-11-05T05:09:26.354372Z",
     "shell.execute_reply": "2024-11-05T05:09:26.353693Z"
    }
   },
   "outputs": [],
   "source": [
    "from transformers import AutoTokenizer\n",
    "\n",
    "PROMPT = (\n",
    "    \"What is the range of the numeric output of a sigmoid node in a neural network?\"\n",
    ")\n",
    "\n",
    "RESPONSE1 = \"The output of a sigmoid node is bounded between -1 and 1.\"\n",
    "RESPONSE2 = \"The output of a sigmoid node is bounded between 0 and 1.\"\n",
    "\n",
    "CONVS = [\n",
    "    [{\"role\": \"user\", \"content\": PROMPT}, {\"role\": \"assistant\", \"content\": RESPONSE1}],\n",
    "    [{\"role\": \"user\", \"content\": PROMPT}, {\"role\": \"assistant\", \"content\": RESPONSE2}],\n",
    "]\n",
    "\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"Skywork/Skywork-Reward-Llama-3.1-8B-v0.2\")\n",
    "prompts = tokenizer.apply_chat_template(CONVS, tokenize=False)\n",
    "\n",
    "url = \"http://localhost:30030/classify\"\n",
    "data = {\n",
    "    \"model\": \"Skywork/Skywork-Reward-Llama-3.1-8B-v0.2\",\n",
    "    \"text\": prompts,\n",
    "}\n",
    "\n",
    "responses = requests.post(url, json=data).json()\n",
    "for response in responses:\n",
    "    print_highlight(f\"reward: {response['embedding'][0]}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-11-05T05:09:26.356532Z",
     "iopub.status.busy": "2024-11-05T05:09:26.356327Z",
     "iopub.status.idle": "2024-11-05T05:09:26.396590Z",
     "shell.execute_reply": "2024-11-05T05:09:26.395914Z"
    }
   },
   "outputs": [],
   "source": [
    "terminate_process(reward_process)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "AlphaMeemory",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}