Learn more about the argument specification, streaming, and multi-modal support [here](https://sgl-project.github.io/sampling_params.html).
## OpenAI Compatible API
In addition, the server supports OpenAI-compatible APIs.
```python
...
print(response)
```
It supports streaming, vision, and almost all features of the Chat/Completions/Models/Batch endpoints specified by the [OpenAI API Reference](https://platform.openai.com/docs/api-reference/).
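As a rough sketch of what such a client sends, the request body below follows the Chat Completions shape; the `default` model name and port 30000 are assumptions based on a locally launched SGLang server, not part of the OpenAI spec itself.

```python
import json

# Hedged sketch of an OpenAI-compatible /v1/chat/completions request body.
payload = {
    "model": "default",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    "temperature": 0,
    "max_tokens": 64,
}
body = json.dumps(payload).encode("utf-8")

# To actually send it, POST `body` with Content-Type: application/json to
# http://localhost:30000/v1/chat/completions (e.g. via urllib.request).
print(json.loads(body)["model"])
```

Any OpenAI client library pointed at the server's base URL produces an equivalent request.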
## Additional Server Arguments
- To enable multi-GPU tensor parallelism, add `--tp 2`. If it reports the error "peer access is not supported between these two devices", add `--enable-p2p-check` to the server launch command.
# Frontend: Structured Generation Language (SGLang)
The frontend language can be used with local models or API models. It is an alternative to the OpenAI API. You may find it easier to use for complex prompting workflows.
## Quick Start
The example below shows how to use SGLang to answer a multi-turn question.
Anthropic and VertexAI (Gemini) models are also supported.
You can find more examples at [examples/quick_start](https://github.com/sgl-project/sglang/tree/main/examples/frontend_language/quick_start).
## Language Feature
To begin with, import sglang.
```python
import sglang as sgl
...
```
The system will manage the state, chat template, parallelism and batching for you.
The complete code for the examples below can be found at [readme_examples.py](https://github.com/sgl-project/sglang/blob/main/examples/frontend_language/usage/readme_examples.py).
### Control Flow
You can use any Python code within the function body, including control flow, nested function calls, and external libraries.
```python
...
def tool_use(s, question):
    ...
    s += "The key word to search is" + sgl.gen("word")
```
### Parallelism
Use `fork` to launch parallel prompts.
Because `sgl.gen` is non-blocking, the for loop below issues two generation calls in parallel.
See also [local_example_llava_next.py](https://github.com/sgl-project/sglang/blob/main/examples/frontend_language/quick_start/local_example_llava_next.py).
### Constrained Decoding
Use `regex` to specify a regular expression as a decoding constraint. A JSON schema can also be expressed as a regular expression and passed through the same `regex` argument to constrain the output to valid JSON.
```python
...
def character_gen(s, name):
    ...
```
See also [json_decode.py](https://github.com/sgl-project/sglang/blob/main/examples/frontend_language/usage/json_decode.py) for an additional example of specifying formats with Pydantic models.
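Before wiring a pattern into `sgl.gen(..., regex=...)`, it can help to sanity-check it with Python's `re`; the IPv4-style pattern below is purely illustrative, not one taken from the docs.

```python
import re

# An example decoding constraint: an IPv4-style dotted quad.
# Checking it locally confirms the constraint admits the strings you expect.
ip_pattern = r"\d{1,3}(\.\d{1,3}){3}"

print(bool(re.fullmatch(ip_pattern, "192.168.0.1")))     # → True
print(bool(re.fullmatch(ip_pattern, "not an address")))  # → False
```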
### Batching
Use `run_batch` to run a batch of requests with continuous batching.
```python
...
states = text_qa.run_batch(
    ...
)
```
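For reference, a hedged sketch of the input shape `run_batch` consumes: one dict of keyword arguments per request, matching the parameters of the decorated function (`text_qa` and its `question` parameter come from the elided example above).

```python
# Each dict supplies the keyword arguments for one request in the batch.
batch_args = [
    {"question": "What is the capital of France?"},
    {"question": "What is the capital of Japan?"},
    {"question": "What is the capital of Kenya?"},
]
# With a backend running:
# states = text_qa.run_batch(batch_args, progress_bar=True)
print(len(batch_args))  # → 3
```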
### Streaming
Add `stream=True` to enable streaming.
```python
...
for out in state.text_iter():
    print(out, end="", flush=True)
```
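The consumption pattern does not depend on SGLang itself: `state.text_iter()` yields text chunks as they are generated, and any iterator of strings can be drained the same way. The generator below is a stand-in used only to illustrate the pattern.

```python
# Stand-in for state.text_iter(): yields text chunks incrementally.
def fake_text_iter():
    for chunk in ["The capital ", "of France ", "is Paris."]:
        yield chunk

buffer = []
for out in fake_text_iter():
    buffer.append(out)  # in real code: print(out, end="", flush=True)
print("".join(buffer))  # → The capital of France is Paris.
```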
### Roles
Use `sgl.system`, `sgl.user` and `sgl.assistant` to set roles when using Chat models. You can also define more complex role prompts using begin and end tokens.
```python
...
def chat_example(s):
    ...
    s += sgl.assistant_end()
```
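The begin/end-token mechanics can be pictured with plain string concatenation: each chat turn is the role's begin token, the content, then its end token. The `[INST]` markers below are Llama-2-style examples, not tokens SGLang mandates.

```python
# Wrap one chat turn in a role's begin and end tokens.
def wrap_turn(begin, content, end):
    return begin + content + end

prompt = wrap_turn("[INST] ", "What is the capital of France?", " [/INST]")
print(prompt)  # → [INST] What is the capital of France? [/INST]
```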
### Tips and Implementation Details
- The `choices` argument in `sgl.gen` is implemented by computing the [token-length normalized log probabilities](https://blog.eleuther.ai/multiple-choice-normalization/) of all choices and selecting the one with the highest probability.
- The `regex` argument in `sgl.gen` is implemented through autoregressive decoding with logit bias masking, according to the constraints set by the regex. It is compatible with `temperature=0` and `temperature != 0`.
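The `choices` scoring described above can be illustrated with a toy, self-contained function: sum each option's per-token log probabilities, divide by its token count, and pick the argmax. The per-token probabilities below are invented for the example.

```python
import math

# Pick the choice with the highest token-length normalized log probability.
def pick_choice(token_logprobs):
    return max(
        token_logprobs,
        key=lambda c: sum(token_logprobs[c]) / len(token_logprobs[c]),
    )

token_logprobs = {
    "Paris": [math.log(0.7)],                             # one token
    "The city of Paris": [math.log(0.6), math.log(0.5),
                          math.log(0.4), math.log(0.9)],  # four tokens
}
print(pick_choice(token_logprobs))  # → Paris
```

Normalizing by length keeps multi-token options from being penalized merely for being longer.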
Note: Please check the [FlashInfer installation doc](https://docs.flashinfer.ai/installation.html) to install the proper version according to your PyTorch and CUDA versions.
## Method 3: Using docker
The docker images are available on Docker Hub as [lmsysorg/sglang](https://hub.docker.com/r/lmsysorg/sglang/tags), built from [Dockerfile](https://github.com/sgl-project/sglang/tree/main/docker).
Replace `<secret>` below with your Hugging Face Hub [token](https://huggingface.co/docs/hub/en/security-tokens).
2. Execute the command `docker compose up -d` in your terminal.
</details>
## Method 5: Run on Kubernetes or Clouds with SkyPilot
<details>
<summary>More</summary>
...
...
```bash
sky status --endpoint 30000 sglang
```
3. To further scale up your deployment with autoscaling and failure recovery, check out the [SkyServe + SGLang guide](https://github.com/skypilot-org/skypilot/tree/master/llm/sglang#serving-llama-2-with-sglang-for-more-traffic-using-skyserve).
</details>
## Common Notes
- [FlashInfer](https://github.com/flashinfer-ai/flashinfer) is the default attention kernel backend. It only supports sm75 and above. If you encounter any FlashInfer-related issues on sm75+ devices (e.g., T4, A10, A100, L4, L40S, H100), please switch to other kernels by adding `--attention-backend triton --sampling-backend pytorch` and open an issue on GitHub.
- If you only need to use the OpenAI backend, you can avoid installing other dependencies by using `pip install "sglang[openai]"`.
- The language frontend operates independently of the backend runtime. You can install the frontend locally without needing a GPU, while the backend can be set up on a GPU-enabled machine. To install the frontend, run `pip install sglang`, and for the backend, use `pip install "sglang[srt]"`. This allows you to build SGLang programs locally and execute them by connecting to the remote backend.