Unverified commit 51c81e33, authored by Chayenne, committed by GitHub

Add openAI compatible API (#1810)


Co-authored-by: Chayenne <zhaochenyang@g.ucla.edu>
parent eaade87a
@@ -10,12 +10,17 @@ on:
   workflow_dispatch:
 
 jobs:
-  execute-notebooks:
+  execute-and-deploy:
     runs-on: 1-gpu-runner
     if: github.repository == 'sgl-project/sglang'
+    defaults:
+      run:
+        working-directory: docs
 
     steps:
       - name: Checkout code
         uses: actions/checkout@v3
+        with:
+          path: .
 
       - name: Set up Python
         uses: actions/setup-python@v4
@@ -25,7 +30,9 @@ jobs:
       - name: Install dependencies
         run: |
           bash scripts/ci_install_dependency.sh
-          pip install -r docs/requirements.txt
+          pip install -r requirements.txt
+          apt-get update
+          apt-get install -y pandoc
 
       - name: Setup Jupyter Kernel
         run: |
@@ -33,7 +40,6 @@ jobs:
       - name: Execute notebooks
         run: |
-          cd docs
           for nb in *.ipynb; do
             if [ -f "$nb" ]; then
               echo "Executing $nb"
@@ -43,36 +49,15 @@ jobs:
             fi
           done
 
-  build-and-deploy:
-    needs: execute-notebooks
-    if: github.repository == 'sgl-project/sglang'
-    runs-on: 1-gpu-runner
-    steps:
-      - name: Checkout code
-        uses: actions/checkout@v3
-      - name: Set up Python
-        uses: actions/setup-python@v4
-        with:
-          python-version: '3.9'
-      - name: Install dependencies
-        run: |
-          bash scripts/ci_install_dependency.sh
-          pip install -r docs/requirements.txt
-          apt-get update
-          apt-get install -y pandoc
       - name: Build documentation
         run: |
-          cd docs
           make html
 
       - name: Push to sgl-project.github.io
         env:
           GITHUB_TOKEN: ${{ secrets.PAT_TOKEN }}
         run: |
-          cd docs/_build/html
+          cd _build/html
           git clone https://$GITHUB_TOKEN@github.com/sgl-project/sgl-project.github.io.git ../sgl-project.github.io
           cp -r * ../sgl-project.github.io
           cd ../sgl-project.github.io
...
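The "Execute notebooks" step loops over every `*.ipynb` in `docs/`, but the hunk truncates the actual per-notebook command. A plausible sketch of building that invocation (an assumption — the real flags live in the elided lines; the function name and timeout value here are illustrative only):

```python
import shlex


def nbconvert_command(notebook: str, timeout: int = 600) -> list[str]:
    """Build a `jupyter nbconvert` call that executes a notebook in place.

    NOTE: hypothetical reconstruction of the CI step; the workflow's real
    flags are not visible in this diff.
    """
    return [
        "jupyter", "nbconvert",
        "--to", "notebook",
        "--execute", "--inplace",
        f"--ExecutePreprocessor.timeout={timeout}",
        notebook,
    ]


# Render the command the way the shell loop would see it.
print(shlex.join(nbconvert_command("embedding_model.ipynb")))
```

Running the sketch prints a single shell-safe command line, one per notebook, matching the shape of the `for nb in *.ipynb` loop above.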
 name: Execute Notebooks
 
 on:
-  pull_request:
   push:
-    branches:
-      - main
+    branches: [ main ]
+    paths:
+      - "python/sglang/**"
+      - "docs/**"
+  pull_request:
+    branches: [ main ]
+    paths:
+      - "python/sglang/**"
+      - "docs/**"
   workflow_dispatch:
 
+concurrency:
+  group: execute-notebook-${{ github.ref }}
+  cancel-in-progress: true
 
 jobs:
   run-all-notebooks:
     runs-on: 1-gpu-runner
...
@@ -10,6 +10,9 @@ repos:
     rev: 24.10.0
     hooks:
       - id: black
+        additional_dependencies: ['.[jupyter]']
+        types: [python, jupyter]
+        types_or: [python, jupyter]
   - repo: https://github.com/pre-commit/pre-commit-hooks
     rev: v5.0.0
...
@@ -4,19 +4,32 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Embedding Model"
+    "# Embedding Model\n",
+    "\n",
+    "SGLang supports embedding models in the same way as completion models. Here are some example models:\n",
+    "\n",
+    "- [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct)\n",
+    "- [Alibaba-NLP/gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct)\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Launch A Server"
+    "## Launch A Server\n",
+    "\n",
+    "The following code is equivalent to running this in the shell:\n",
+    "```bash\n",
+    "python -m sglang.launch_server --model-path Alibaba-NLP/gte-Qwen2-7B-instruct \\\n",
+    " --port 30010 --host 0.0.0.0 --is-embedding --log-level error\n",
+    "```\n",
+    "\n",
+    "Remember to add `--is-embedding` to the command."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": 7,
    "metadata": {},
    "outputs": [
     {
@@ -28,14 +41,14 @@
     }
    ],
    "source": [
-    "# Equivalent to running this in the shell:\n",
-    "# python -m sglang.launch_server --model-path Alibaba-NLP/gte-Qwen2-7B-instruct --port 30010 --host 0.0.0.0 --is-embedding --log-level error\n",
     "from sglang.utils import execute_shell_command, wait_for_server, terminate_process\n",
     "\n",
-    "embedding_process = execute_shell_command(\"\"\"\n",
+    "embedding_process = execute_shell_command(\n",
+    "    \"\"\"\n",
     "python -m sglang.launch_server --model-path Alibaba-NLP/gte-Qwen2-7B-instruct \\\n",
     " --port 30010 --host 0.0.0.0 --is-embedding --log-level error\n",
-    "\"\"\")\n",
+    "\"\"\"\n",
+    ")\n",
     "\n",
     "wait_for_server(\"http://localhost:30010\")\n",
     "\n",
@@ -51,25 +64,32 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": 8,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "[0.0083160400390625, 0.0006804466247558594, -0.00809478759765625, -0.0006995201110839844, 0.0143890380859375, -0.0090179443359375, 0.01238250732421875, 0.00209808349609375, 0.0062103271484375, -0.003047943115234375]\n"
+      "Text embedding (first 10): [0.0083160400390625, 0.0006804466247558594, -0.00809478759765625, -0.0006995201110839844, 0.0143890380859375, -0.0090179443359375, 0.01238250732421875, 0.00209808349609375, 0.0062103271484375, -0.003047943115234375]\n"
      ]
     }
    ],
    "source": [
-    "# Get the first 10 elements of the embedding\n",
+    "import subprocess, json\n",
+    "\n",
+    "text = \"Once upon a time\"\n",
     "\n",
-    "! curl -s http://localhost:30010/v1/embeddings \\\n",
+    "curl_text = f\"\"\"curl -s http://localhost:30010/v1/embeddings \\\n",
     " -H \"Content-Type: application/json\" \\\n",
     " -H \"Authorization: Bearer None\" \\\n",
-    " -d '{\"model\": \"Alibaba-NLP/gte-Qwen2-7B-instruct\", \"input\": \"Once upon a time\"}' \\\n",
-    " | python3 -c \"import sys, json; print(json.load(sys.stdin)['data'][0]['embedding'][:10])\""
+    " -d '{{\"model\": \"Alibaba-NLP/gte-Qwen2-7B-instruct\", \"input\": \"{text}\"}}'\"\"\"\n",
+    "\n",
+    "text_embedding = json.loads(subprocess.check_output(curl_text, shell=True))[\"data\"][0][\n",
+    "    \"embedding\"\n",
+    "]\n",
+    "\n",
+    "print(f\"Text embedding (first 10): {text_embedding[:10]}\")"
    ]
   },
   {
@@ -81,37 +101,79 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": 9,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "[0.00603485107421875, -0.0190582275390625, -0.01273345947265625, 0.01552581787109375, 0.0066680908203125, -0.0135955810546875, 0.01131439208984375, 0.0013713836669921875, -0.0089874267578125, 0.021759033203125]\n"
+      "Text embedding (first 10): [0.00829315185546875, 0.0007004737854003906, -0.00809478759765625, -0.0006799697875976562, 0.01438140869140625, -0.00897979736328125, 0.0123748779296875, 0.0020923614501953125, 0.006195068359375, -0.0030498504638671875]\n"
      ]
     }
    ],
    "source": [
     "import openai\n",
     "\n",
-    "client = openai.Client(\n",
-    "    base_url=\"http://127.0.0.1:30010/v1\", api_key=\"None\"\n",
-    ")\n",
+    "client = openai.Client(base_url=\"http://127.0.0.1:30010/v1\", api_key=\"None\")\n",
     "\n",
     "# Text embedding example\n",
     "response = client.embeddings.create(\n",
     "    model=\"Alibaba-NLP/gte-Qwen2-7B-instruct\",\n",
-    "    input=\"How are you today\",\n",
+    "    input=text,\n",
     ")\n",
     "\n",
     "embedding = response.data[0].embedding[:10]\n",
-    "print(embedding)"
+    "print(f\"Text embedding (first 10): {embedding}\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Using Input IDs\n",
+    "\n",
+    "SGLang also supports `input_ids` as input to get the embedding."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Input IDs embedding (first 10): [0.00829315185546875, 0.0007004737854003906, -0.00809478759765625, -0.0006799697875976562, 0.01438140869140625, -0.00897979736328125, 0.0123748779296875, 0.0020923614501953125, 0.006195068359375, -0.0030498504638671875]\n"
+     ]
+    }
+   ],
+   "source": [
+    "import json\n",
+    "import os\n",
+    "from transformers import AutoTokenizer\n",
+    "\n",
+    "os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n",
+    "\n",
+    "tokenizer = AutoTokenizer.from_pretrained(\"Alibaba-NLP/gte-Qwen2-7B-instruct\")\n",
+    "input_ids = tokenizer.encode(text)\n",
+    "\n",
+    "curl_ids = f\"\"\"curl -s http://localhost:30010/v1/embeddings \\\n",
+    " -H \"Content-Type: application/json\" \\\n",
+    " -H \"Authorization: Bearer None\" \\\n",
+    " -d '{{\"model\": \"Alibaba-NLP/gte-Qwen2-7B-instruct\", \"input\": {json.dumps(input_ids)}}}'\"\"\"\n",
+    "\n",
+    "input_ids_embedding = json.loads(subprocess.check_output(curl_ids, shell=True))[\"data\"][\n",
+    "    0\n",
+    "][\"embedding\"]\n",
+    "\n",
+    "print(f\"Input IDs embedding (first 10): {input_ids_embedding[:10]}\")"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 4,
+   "execution_count": 11,
    "metadata": {},
    "outputs": [],
    "source": [
...
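The notebook above shells out to `curl` and parses the JSON reply by hand. As a self-contained illustration of the extraction it performs, here is the same parsing against a fabricated `/v1/embeddings` response body (the values are made up; only the field layout, which follows the OpenAI embeddings schema the notebook already relies on, matters):

```python
import json

# Fabricated example of an OpenAI-compatible /v1/embeddings response.
raw = json.dumps({
    "object": "list",
    "model": "Alibaba-NLP/gte-Qwen2-7B-instruct",
    "data": [
        {"object": "embedding", "index": 0,
         "embedding": [0.0083, 0.0007, -0.0081, -0.0007]},
    ],
    "usage": {"prompt_tokens": 4, "total_tokens": 4},
})

# Same path the notebook takes through the curl output:
# data -> first item -> embedding vector.
embedding = json.loads(raw)["data"][0]["embedding"]
print(f"Text embedding (first 3): {embedding[:3]}")
```

The `input_ids` variant changes only the request body (a list of token IDs instead of a string); the response is parsed identically.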
@@ -16,12 +16,14 @@ The core features include:
    :caption: Getting Started
 
    install.md
+   send_request.ipynb
 
 .. toctree::
    :maxdepth: 1
    :caption: Backend Tutorial
 
+   openai_api.ipynb
    backend.md
@@ -43,3 +45,4 @@ The core features include:
    choices_methods.md
    benchmark_and_profiling.md
    troubleshooting.md
+   embedding_model.ipynb
@@ -4,7 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Quick Start"
+    "# Quick Start: Launch A Server and Send Requests\n",
+    "\n",
+    "This section provides a quick start guide to using SGLang after installation."
    ]
   },
   {
@@ -13,12 +15,13 @@
    "source": [
     "## Launch a server\n",
     "\n",
-    "This code uses `subprocess.Popen` to start an SGLang server process, equivalent to executing \n",
+    "This code block is equivalent to executing \n",
     "\n",
     "```bash\n",
     "python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \\\n",
     "--port 30000 --host 0.0.0.0 --log-level warning\n",
     "```\n",
+    "\n",
     "in your command line and wait for the server to be ready."
    ]
   },
@@ -39,10 +42,12 @@
    "from sglang.utils import execute_shell_command, wait_for_server, terminate_process\n",
    "\n",
    "\n",
-   "server_process = execute_shell_command(\"\"\"\n",
+   "server_process = execute_shell_command(\n",
+   "    \"\"\"\n",
    "python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \\\n",
    "--port 30000 --host 0.0.0.0 --log-level warning\n",
-   "\"\"\")\n",
+   "\"\"\"\n",
+   ")\n",
    "\n",
    "wait_for_server(\"http://localhost:30000\")\n",
    "print(\"Server is ready. Proceeding with the next steps.\")"
@@ -105,9 +110,7 @@
    "# Always assign an api_key, even if not specified during server initialization.\n",
    "# Setting an API key during server initialization is strongly recommended.\n",
    "\n",
-   "client = openai.Client(\n",
-   "    base_url=\"http://127.0.0.1:30000/v1\", api_key=\"None\"\n",
-   ")\n",
+   "client = openai.Client(base_url=\"http://127.0.0.1:30000/v1\", api_key=\"None\")\n",
    "\n",
    "# Chat completion example\n",
    "\n",
...
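Both notebooks pass a triple-quoted, backslash-continued command string to `sglang.utils.execute_shell_command`. A minimal sketch of what such a helper might do — an assumption for illustration, not the actual sglang implementation — is:

```python
import shlex
import subprocess


def execute_shell_command(command: str) -> subprocess.Popen:
    """Launch a possibly multi-line shell command as a background process.

    NOTE: hypothetical stand-in for sglang.utils.execute_shell_command.
    Backslash-newline continuations are joined so the triple-quoted string
    used in the notebooks becomes a single command line.
    """
    command = command.replace("\\\n", " ")
    return subprocess.Popen(shlex.split(command))


# Usage with a harmless command in place of the real server launch:
proc = execute_shell_command("echo hello \\\n world")
proc.wait()
```

Returning the `Popen` handle is what lets the notebooks later call `terminate_process` on the server once the examples finish.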