"docs/vscode:/vscode.git/clone" did not exist on "d50ce994213a264dfb746cd5e4ebc0f148f03b17"
Unverified Commit d913d52c authored by Lianmin Zheng's avatar Lianmin Zheng Committed by GitHub
Browse files

Fix warnings in doc build (#1852)

parent 0ab7bcaf
# Backend: SGLang Runtime (SRT)
The SGLang Runtime (SRT) is an efficient serving engine.

## Quick Start
Launch a server:
```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000
```
...@@ -22,7 +22,7 @@ curl http://localhost:30000/generate \
Learn more about the argument specification, streaming, and multi-modal support [here](https://sgl-project.github.io/sampling_params.html).
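For illustration, here is a minimal Python sketch of calling the `/generate` endpoint from a client, using only the standard library. The request fields (`text`, `sampling_params`, `max_new_tokens`, `temperature`) follow the sampling-parameter docs linked above; treat them as assumptions if your version differs.

```python
import json
from urllib import request

def build_generate_payload(prompt, max_new_tokens=32, temperature=0.0):
    # Request body for POST /generate; field names are assumptions
    # based on the sampling-parameter docs linked above.
    return {
        "text": prompt,
        "sampling_params": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

def generate(prompt, url="http://localhost:30000/generate"):
    # Send the payload to a running server and decode the JSON response.
    body = json.dumps(build_generate_payload(prompt)).encode()
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Requires a running server (see the launch command above).
    print(generate("The capital of France is")["text"])
```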

## OpenAI Compatible API
In addition, the server supports OpenAI-compatible APIs.
...@@ -61,7 +61,7 @@ print(response)
It supports streaming, vision, and almost all features of the Chat/Completions/Models/Batch endpoints specified by the [OpenAI API Reference](https://platform.openai.com/docs/api-reference/).
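As a sketch, the request body for the local `/v1/chat/completions` endpoint can be built with plain Python; the shape follows the OpenAI Chat Completions API that the server mirrors (the model name and parameter values here are just examples).

```python
import json

def build_chat_request(model, user_message,
                       system_message="You are a helpful AI assistant"):
    # Body for POST /v1/chat/completions, shaped per the OpenAI
    # Chat Completions API that the server mirrors.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0,
        "max_tokens": 64,
    }

if __name__ == "__main__":
    body = build_chat_request(
        "meta-llama/Meta-Llama-3-8B-Instruct",
        "List 3 countries and their capitals.",
    )
    # POST this to http://localhost:30000/v1/chat/completions, or use the
    # official openai client with base_url="http://localhost:30000/v1".
    print(json.dumps(body, indent=2))
```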

## Additional Server Arguments
- To enable multi-GPU tensor parallelism, add `--tp 2`. If it reports the error "peer access is not supported between these two devices", add `--enable-p2p-check` to the server launch command.
```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --tp 2
```
...@@ -94,7 +94,7 @@ python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct
```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --tp 4 --nccl-init sgl-dev-0:50000 --nnodes 2 --node-rank 1
```

## Engine Without HTTP Server
We also provide an inference engine **without an HTTP server**. For example,
...@@ -123,7 +123,7 @@ if __name__ == "__main__":
This can be used for offline batch inference and building custom servers.
You can view the full example [here](https://github.com/sgl-project/sglang/tree/main/examples/runtime/engine).
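A minimal sketch of offline batch inference with the engine, assuming the `sgl.Engine` constructor and plain-dict sampling parameters used in the linked engine examples (the key names are assumptions if your version differs); running it requires a GPU machine with sglang installed.

```python
def make_sampling_params(temperature=0.8, max_new_tokens=64):
    # Plain-dict sampling parameters; key names mirror the server's
    # sampling options and are assumptions for this sketch.
    return {"temperature": temperature, "max_new_tokens": max_new_tokens}

if __name__ == "__main__":
    # Requires a GPU machine with sglang installed; no HTTP server is started.
    import sglang as sgl

    llm = sgl.Engine(model_path="meta-llama/Meta-Llama-3-8B-Instruct")
    prompts = ["Hello, my name is", "The capital of France is"]
    outputs = llm.generate(prompts, make_sampling_params())
    for prompt, out in zip(prompts, outputs):
        print(prompt, "->", out["text"])
```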

## Supported Models

**Generative Models**
- Llama / Llama 2 / Llama 3 / Llama 3.1
...@@ -162,7 +162,7 @@ You can view the full example [here](https://github.com/sgl-project/sglang/tree/
Instructions for supporting a new model are [here](https://sgl-project.github.io/model_support.html).

### Use Models From ModelScope
<details>
<summary>More</summary>
...@@ -188,7 +188,7 @@ docker run --gpus all \
</details>

### Run Llama 3.1 405B
<details>
<summary>More</summary>
...@@ -206,7 +206,7 @@ GLOO_SOCKET_IFNAME=eth0 python3 -m sglang.launch_server --model-path meta-llama/
</details>

## Benchmark Performance
- Benchmark a single static batch by running the following command without launching a server. The arguments are the same as for `launch_server.py`.
  Note that this is not a dynamic batching server, so it may run out of memory for a batch size that a real server can handle.
...

# Frontend: Structured Generation Language (SGLang)
The frontend language can be used with local models or API models. It is an alternative to the OpenAI API. You may find it easier to use for complex prompting workflows.

## Quick Start
The example below shows how to use SGLang to answer a multi-turn question.

### Using Local Models
First, launch a server with:
```
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000
```
...@@ -36,7 +36,7 @@ for m in state.messages():
```
print(state["answer_1"])
```

### Using OpenAI Models
Set the OpenAI API Key:
```
export OPENAI_API_KEY=sk-******
```
...@@ -67,11 +67,11 @@ for m in state.messages():
```
print(state["answer_1"])
```

### More Examples
Anthropic and VertexAI (Gemini) models are also supported.
You can find more examples at [examples/quick_start](https://github.com/sgl-project/sglang/tree/main/examples/frontend_language/quick_start).

## Language Feature
To begin with, import sglang.
```python
import sglang as sgl
```
...@@ -84,7 +84,7 @@ The system will manage the state, chat template, parallelism and batching for yo
The complete code for the examples below can be found at [readme_examples.py](https://github.com/sgl-project/sglang/blob/main/examples/frontend_language/usage/readme_examples.py)

### Control Flow
You can use any Python code within the function body, including control flow, nested function calls, and external libraries.
...@@ -99,7 +99,7 @@ def tool_use(s, question):
```python
    s += "The key word to search is" + sgl.gen("word")
```

### Parallelism
Use `fork` to launch parallel prompts.
Because `sgl.gen` is non-blocking, the for loop below issues two generation calls in parallel.
...@@ -121,7 +121,7 @@ def tip_suggestion(s):
```python
    s += "In summary" + sgl.gen("summary")
```
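The scheduling idea behind non-blocking generation can be illustrated in plain Python: submitting both tasks before waiting on either overlaps their latency. This is an analogy using `concurrent.futures`, not SGLang's actual mechanism.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_gen(topic):
    # Stand-in for a non-blocking sgl.gen call with ~0.2 s latency.
    time.sleep(0.2)
    return f"a tip about {topic}"

topics = ["staying healthy", "saving money"]
start = time.time()
with ThreadPoolExecutor() as pool:
    # Both calls are in flight before either result is awaited,
    # mirroring how forked branches issue sgl.gen concurrently.
    futures = [pool.submit(slow_gen, t) for t in topics]
    results = [f.result() for f in futures]
elapsed = time.time() - start
print(results)
print(round(elapsed, 1))  # roughly 0.2, not 0.4
```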

### Multi-Modality
Use `sgl.image` to pass an image as input.
...@@ -133,7 +133,7 @@ def image_qa(s, image_file, question):
See also [local_example_llava_next.py](https://github.com/sgl-project/sglang/blob/main/examples/frontend_language/quick_start/local_example_llava_next.py).

### Constrained Decoding
Use `regex` to specify a regular expression as a decoding constraint.
This is only supported for local models.
...@@ -148,7 +148,7 @@ def regular_expression_gen(s):
```python
    )
```
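A constraint of this kind can be sanity-checked offline with Python's `re` module. The simplified IPv4 pattern below is a sketch of the sort of expression you would pass via the `regex` argument.

```python
import re

# A simplified IPv4 pattern of the kind passed as the `regex` argument,
# e.g. sgl.gen("answer", regex=ip_regex).
ip_regex = r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}"

# The constraint admits only strings the regex fully matches:
print(bool(re.fullmatch(ip_regex, "192.168.0.1")))     # True
print(bool(re.fullmatch(ip_regex, "not an address")))  # False
```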

### JSON Decoding
Use `regex` to specify a JSON schema with a regular expression.
...@@ -177,7 +177,7 @@ def character_gen(s, name):
See also [json_decode.py](https://github.com/sgl-project/sglang/blob/main/examples/frontend_language/usage/json_decode.py) for an additional example of specifying formats with Pydantic models.
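A hand-written schema regex of this kind can also be checked offline. The two-field schema below is a hypothetical example for illustration, not the one from the linked file: the fixed keys and punctuation are literal, and the model may only fill in the quoted values.

```python
import re

# Hypothetical schema: fixed keys, constrained values.
json_regex = (
    r'\{\n'
    r'    "name": "[^"\n]+",\n'
    r'    "house": "(Gryffindor|Slytherin|Ravenclaw|Hufflepuff)"\n'
    r'\}'
)

sample = '{\n    "name": "Harry Potter",\n    "house": "Gryffindor"\n}'
print(bool(re.fullmatch(json_regex, sample)))  # True
```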

### Batching
Use `run_batch` to run a batch of requests with continuous batching.
...@@ -196,7 +196,7 @@ states = text_qa.run_batch(
```python
)
```

### Streaming
Add `stream=True` to enable streaming.
...@@ -215,7 +215,7 @@ for out in state.text_iter():
```python
    print(out, end="", flush=True)
```

### Roles
Use `sgl.system`, `sgl.user`, and `sgl.assistant` to set roles when using Chat models. You can also define more complex role prompts using begin and end tokens.
...@@ -233,6 +233,6 @@ def chat_example(s):
```python
    s += sgl.assistant_end()
```

### Tips and Implementation Details
- The `choices` argument in `sgl.gen` is implemented by computing the [token-length normalized log probabilities](https://blog.eleuther.ai/multiple-choice-normalization/) of all choices and selecting the one with the highest probability.
- The `regex` argument in `sgl.gen` is implemented through autoregressive decoding with logit bias masking, according to the constraints set by the regex. It is compatible with `temperature=0` and `temperature != 0`.
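Token-length normalized selection can be made concrete with a small sketch; the input format (a list of per-token log probabilities per choice) is an assumption for illustration.

```python
def select_choice(choice_token_logprobs):
    # Pick the choice whose average (token-length normalized) log
    # probability is highest, so longer choices are not penalized
    # merely for having more tokens.
    def normalized(logps):
        return sum(logps) / len(logps)
    return max(choice_token_logprobs,
               key=lambda c: normalized(choice_token_logprobs[c]))

# Comparing raw totals would pick "no" (-1.0 > -1.5); normalizing by
# token count lets the longer, more likely choice win (-0.5 > -1.0).
scores = {
    "no": [-1.0],                           # avg -1.0
    "definitely yes": [-0.5, -0.5, -0.5],   # total -1.5, avg -0.5
}
print(select_choice(scores))  # definitely yes
```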

# Install SGLang
You can install SGLang using any of the methods below.

## Method 1: With pip
```
pip install --upgrade pip
pip install "sglang[all]"
```
...@@ -13,7 +13,7 @@ pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
Note: Please check the [FlashInfer installation doc](https://docs.flashinfer.ai/installation.html) to install the proper version according to your PyTorch and CUDA versions.

## Method 2: From source
```
# Use the last release branch
git clone -b v0.3.4.post2 https://github.com/sgl-project/sglang.git
```
...@@ -28,7 +28,7 @@ pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
Note: Please check the [FlashInfer installation doc](https://docs.flashinfer.ai/installation.html) to install the proper version according to your PyTorch and CUDA versions.

## Method 3: Using docker
The docker images are available on Docker Hub as [lmsysorg/sglang](https://hub.docker.com/r/lmsysorg/sglang/tags), built from [Dockerfile](https://github.com/sgl-project/sglang/tree/main/docker).
Replace `<secret>` below with your Hugging Face Hub [token](https://huggingface.co/docs/hub/en/security-tokens).
...@@ -42,7 +42,7 @@ docker run --gpus all \
```
python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
```

## Method 4: Using docker compose
<details>
<summary>More</summary>
...@@ -54,7 +54,7 @@ docker run --gpus all \
2. Execute the command `docker compose up -d` in your terminal.
</details>

## Method 5: Run on Kubernetes or Clouds with SkyPilot
<details>
<summary>More</summary>
...@@ -95,7 +95,7 @@ sky status --endpoint 30000 sglang
3. To further scale up your deployment with autoscaling and failure recovery, check out the [SkyServe + SGLang guide](https://github.com/skypilot-org/skypilot/tree/master/llm/sglang#serving-llama-2-with-sglang-for-more-traffic-using-skyserve).
</details>

## Common Notes
- [FlashInfer](https://github.com/flashinfer-ai/flashinfer) is the default attention kernel backend. It only supports sm75 and above. If you encounter any FlashInfer-related issues on sm75+ devices (e.g., T4, A10, A100, L4, L40S, H100), please switch to other kernels by adding `--attention-backend triton --sampling-backend pytorch` and open an issue on GitHub.
- If you only need to use the OpenAI backend, you can avoid installing other dependencies by using `pip install "sglang[openai]"`.
- The language frontend operates independently of the backend runtime. You can install the frontend locally without needing a GPU, while the backend can be set up on a GPU-enabled machine. To install the frontend, run `pip install sglang`, and for the backend, use `pip install sglang[srt]`. This allows you to build SGLang programs locally and execute them by connecting to the remote backend.