SGLang Model Gateway is a high-performance model-routing gateway for large-scale LLM deployments. It centralizes worker lifecycle management, balances traffic across heterogeneous protocols (HTTP, gRPC, OpenAI-compatible), and provides enterprise-ready control over history storage, MCP tooling, and privacy-sensitive workflows. The router is deeply optimized for the SGLang serving runtime, but can route to any OpenAI-compatible backend.
- **Unified control plane** for registering, monitoring, and orchestrating regular, prefill, and decode workers across heterogeneous model fleets.
- **Multi-protocol data plane** that routes traffic across HTTP, PD (prefill/decode), gRPC, and OpenAI-compatible backends with shared reliability primitives.
- **Industry-first gRPC pipeline** with native Rust tokenization, reasoning parsers, and tool-call execution for high-throughput, OpenAI-compatible serving; supports both single-stage and PD topologies.
- **Inference Gateway Mode (`--enable-igw`)** dynamically instantiates multiple router stacks (HTTP regular/PD, gRPC) and applies per-model policies for multi-tenant deployments.
- **Conversation & responses connectors** centralize chat history inside the router (memory, none, Oracle ATP) so the same context can be reused across models and MCP loops without leaking data to upstream vendors.
- **Enterprise privacy**: agentic multi-turn `/v1/responses`, native MCP client (STDIO/HTTP/SSE/Streamable), and history storage all operate within the router boundary.
- **Reliability core**: retries with jitter, worker-scoped circuit breakers, token-bucket rate limiting with queuing, background health checks, and cache-aware load monitoring.
- **Observability**: Prometheus metrics, structured tracing, request ID propagation, and detailed job queue stats.
---
## Architecture
### Control Plane
- **Worker Manager** discovers capabilities (`/get_server_info`, `/get_model_info`), tracks load, and registers/removes workers in the shared registry.
- **Job Queue** serializes add/remove requests and exposes status (`/workers/{url}`) so clients can track onboarding progress.
- **Load Monitor** feeds cache-aware and power-of-two policies with live worker load statistics.
- **Health Checker** continuously probes workers and updates readiness, circuit breaker state, and router metrics.
### Data Plane
- **gRPC router** streams tokenized requests directly to SRT gRPC workers, running fully in Rust: tokenizer, reasoning parser, and tool parser all reside in-process. Supports both single-stage and PD routing.
- **OpenAI router** proxies OpenAI-compatible endpoints to external vendors (OpenAI, xAI, etc.) while keeping chat history and multi-turn orchestration local.
### Storage & Privacy
- Conversation and response history is stored at the router tier (memory, none, or Oracle ATP). The same history can power multiple models or MCP loops without sending data to upstream vendors.
- `/v1/responses` agentic flows, MCP sessions, and conversation APIs share the same storage layer, enabling compliance for regulated workloads.
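For example, a minimal sketch keeping history in router memory (the worker URL is a placeholder; the other backends listed above are selected with the same flag):
```bash
python -m sglang_router.launch_router \
    --worker-urls http://worker1:8000 \
    --history-backend memory
```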
---
At its core, the router is a high-performance request distribution system that spreads inference requests across multiple SGLang runtime instances, with cache-aware load balancing, fault tolerance, and support for advanced deployment patterns including data parallelism and prefill-decode disaggregation.
## Key Features
- **Cache-Aware Load Balancing**: Optimizes cache utilization while maintaining balanced load distribution
- **Multiple Routing Policies**: Choose from random, round-robin, cache-aware, or power-of-two policies
- **Fault Tolerance**: Automatic retry and circuit breaker mechanisms for resilient operation
- **Dynamic Scaling**: Add or remove workers at runtime without service interruption
- **Kubernetes Integration**: Native service discovery and pod management
- **Prefill-Decode Disaggregation**: Support for disaggregated serving load balancing
- **Prometheus Metrics**: Built-in observability and monitoring
- **Rate Limiter**: Token-bucket rate limiter to shield workers from overload
## Installation
```bash
pip install sglang-router
```
## Quick Start
### Co-launch Router + Workers
Launch the router and a fleet of SGLang workers in one process (ideal for single-node deployments and quick starts). The CLI accepts two namespaces of arguments:
- **Worker arguments** (no prefix) configure the SGLang runtime (`--model`, `--tp-size`, `--dp-size`, `--grpc-mode`, etc.).
- **Router arguments** are prefixed with `--router-` and map directly to `launch_router` flags (`--router-policy`, `--router-model-path`, `--router-log-level`, ...).
```bash
python -m sglang_router.launch_server \
--model meta-llama/Meta-Llama-3.1-8B-Instruct \
--dp-size 4 \
--host 0.0.0.0 \
--port 30000
```
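Once the router is up, it serves the OpenAI-compatible API. A quick smoke test (the endpoint follows the OpenAI chat-completions schema; adjust host/port to your launch flags):
```bash
curl http://localhost:30000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
         "messages": [{"role": "user", "content": "Hello!"}]}'
```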
To see all available options:
```bash
python -m sglang_router.launch_server --help   # Co-launch router and workers
python -m sglang_router.launch_router --help   # Launch router only
```
---
## Deployment Modes
The router supports three primary deployment patterns:
1. **Co-launch Mode**: Router and workers launch together (simplest for single-node deployments)
2. **Separate Launch Mode**: Router and workers launch independently (best for multi-node setups)
3. **Prefill-Decode Disaggregation**: Specialized mode for disaggregated serving
### Mode 1: Co-launch Router and Workers
This mode launches both the router and multiple worker instances in a single command (see the Quick Start example above). It's the simplest deployment option and replaces the `--dp-size` argument of SGLang Runtime.
### Mode 2: Separate Launch
#### HTTP Workers
Run workers independently and point the router at their HTTP endpoints (the worker URLs below are placeholders):
```bash
python -m sglang_router.launch_router \
    --worker-urls http://worker1:8000 http://worker2:8000 \
    --policy cache_aware   # or random, round_robin, power_of_two
```
#### gRPC Workers
The gRPC router supports both single-stage and PD serving. Provide `--tokenizer-path` or `--model-path` (HF repo or local directory) plus an optional `--chat-template`.
```bash
python -m sglang_router.launch_router \
    --worker-urls grpc://127.0.0.1:20000 \
    --model-path meta-llama/Llama-3.1-8B-Instruct \
    --reasoning-parser deepseek-r1 \
    --tool-call-parser json \
    --host 0.0.0.0 --port 8080
```
### Mode 3: Prefill-Decode Disaggregation
This advanced mode splits prefill and decode workers into separate pools for PD-aware caching and load balancing:
```bash
python -m sglang_router.launch_router \
    --pd-disaggregation \
    --prefill http://prefill1:8000 9000 \
    --prefill http://prefill2:8001 9001 \
    --decode http://decode1:8002 \
    --decode http://decode2:8003 \
    --prefill-policy cache_aware \
    --decode-policy round_robin
```
#### Understanding --prefill Arguments
The `--prefill` flag accepts URLs with optional bootstrap ports:
- `--prefill http://server:8000` - No bootstrap port
- `--prefill http://server:8000 9000` - Bootstrap port 9000
- `--prefill http://server:8000 none` - Explicitly no bootstrap port
#### Policy Inheritance in PD Mode
The router intelligently handles policy configuration for prefill and decode nodes:
1. **Only `--policy` specified**: Both prefill and decode nodes use this policy
2. **`--policy` and `--prefill-policy` specified**: Prefill nodes use `--prefill-policy`, decode nodes use `--policy`
3. **`--policy` and `--decode-policy` specified**: Prefill nodes use `--policy`, decode nodes use `--decode-policy`
4. **All three specified**: Prefill nodes use `--prefill-policy`, decode nodes use `--decode-policy` (main `--policy` is ignored)
Example with mixed policies:
```bash
python -m sglang_router.launch_router \
    --pd-disaggregation \
    --prefill http://prefill1:8000 \
    --prefill http://prefill2:8000 \
    --decode http://decode1:8001 \
    --decode http://decode2:8001 \
    --policy round_robin \
    --prefill-policy cache_aware   # prefill uses cache_aware; decode uses round_robin from --policy
```
#### PD Mode with Service Discovery
For Kubernetes deployments with separate prefill and decode server pools:
```bash
python -m sglang_router.launch_router \
    --pd-disaggregation \
    --service-discovery \
    --prefill-selector app=prefill-server tier=gpu \
    --decode-selector app=decode-server tier=cpu \
    --service-discovery-namespace production \
    --prefill-policy cache_aware \
    --decode-policy round_robin
```
### OpenAI Backend Proxy
Proxy OpenAI-compatible endpoints (OpenAI, xAI, etc.) while keeping history and MCP sessions local.
```bash
python -m sglang_router.launch_router \
    --backend openai \
    --worker-urls https://api.openai.com \
    --api-key "$OPENAI_API_KEY" \
    --history-backend memory
```
> OpenAI backend mode expects exactly one `--worker-urls` entry per router instance.
---
## Worker Lifecycle & Dynamic Scaling
The router supports runtime scaling through REST APIs: add or remove workers at any time without service interruption. Add and remove jobs are queued and tracked for eventual consistency.
**Note**: When using cache-aware routing, removed workers are cleanly evicted from the routing tree and request queues.
Legacy endpoints (`/add_worker`, `/remove_worker`, `/list_workers`) remain available but will be deprecated. `/workers/{url}` returns both registry data and queued job status.
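As a minimal sketch (the legacy query-parameter form is shown; the exact request shape of the newer worker API may differ by version):
```bash
# Queue a new worker via the legacy endpoint (worker URL is illustrative):
curl -X POST "http://localhost:30000/add_worker?url=http://worker3:8000"

# Poll registry data and queued job status for that worker; this assumes
# the worker URL is percent-encoded into the /workers/{url} path:
curl "http://localhost:30000/workers/http%3A%2F%2Fworker3%3A8000"
```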
---
## Fault Tolerance
The router includes comprehensive retry and circuit breaker mechanisms:
- A worker is marked unhealthy after `--cb-failure-threshold` consecutive failures.
- It returns to service after `--cb-success-threshold` successful health checks.
- The circuit breaker can be disabled with `--disable-circuit-breaker`.
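A minimal tuning sketch (the threshold values and worker URL are illustrative, not defaults):
```bash
python -m sglang_router.launch_router \
    --worker-urls http://worker1:8000 \
    --cb-failure-threshold 5 \
    --cb-success-threshold 3
```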
### Rate Limiter
Use the token-bucket rate limiter to cap concurrent requests before they overwhelm downstream workers.
- Enable rate limiting by setting `--max-concurrent-requests` to a positive integer. A bucket with that many tokens (concurrent leases) is created; `-1` keeps it disabled.
- Optionally override the refill rate with `--rate-limit-tokens-per-second`. If omitted, the refill rate matches `--max-concurrent-requests`.
- Overflow traffic waits in a FIFO queue controlled by:
  - `--queue-size`: pending-request buffer (0 disables queuing; defaults to 100).
  - `--queue-timeout-secs`: maximum wait time for queued requests before returning `408` (defaults to 60 seconds).
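For example (the worker URL is a placeholder):
```bash
python -m sglang_router.launch_router \
    --worker-urls http://worker1:8000 \
    --max-concurrent-requests 256 \
    --rate-limit-tokens-per-second 512 \
    --queue-size 128 \
    --queue-timeout-secs 30
```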
This configuration allows up to 256 concurrent requests, refills 512 tokens (requests) per second, and keeps up to 128 overflow requests queued for 30 seconds before timing out.
**Responses**:
- Returns **429** when the router cannot enqueue the request (queue disabled or full).
- Returns **408** when a queued request waits longer than `--queue-timeout-secs` or no token becomes available before the timeout.
---
## Routing Policies
The router supports multiple routing strategies:
### 1. Random Routing
Distributes requests randomly across workers.
```bash
--policy random
```
### 2. Round-Robin Routing
Cycles through workers in order.
```bash
--policy round_robin
```
### 3. Power of Two Choices
Samples two workers and routes to the less loaded one.
```bash
--policy power_of_two
```
### 4. Cache-Aware Load Balancing (Default)
The most sophisticated policy: it combines cache optimization with load balancing, preferring workers with matching prefix caches (see `sgl_router_cache_hits_total`) unless load imbalance exceeds the `--balance-abs-threshold` / `--balance-rel-threshold` limits.
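Matching the other policies, it is selected with:
```bash
--policy cache_aware
```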
---
## Security & Authentication
- **Router API key (`--api-key`)**: clients must supply `Authorization: Bearer <key>`.
- **Worker API keys**: when adding workers dynamically, include `api_key` in the payload; workers listed via the CLI inherit the router key.
- **Full-stack auth**: start the router with `--api-key`, then add workers with their own keys.
- **CORS**: restrict cross-origin access with `--cors-allowed-origins` (a list, empty by default).
- **Privacy**: all conversation history, `/v1/responses` state, and MCP sessions stay inside the router. Nothing is persisted at remote model vendors unless explicitly proxied.
## Advanced Features
### Kubernetes Service Discovery
Automatically discover and manage workers in Kubernetes:
#### Standard Mode
```bash
python -m sglang_router.launch_router \
    --service-discovery \
    --selector app=sglang-worker env=prod \
    --service-discovery-namespace production \
    --service-discovery-port 8000
```
#### Prefill-Decode Disaggregation Mode
```bash
python -m sglang_router.launch_router \
    --pd-disaggregation \
    --service-discovery \
    --prefill-selector app=prefill-server env=prod \
    --decode-selector app=decode-server env=prod \
    --service-discovery-namespace production
```
**Note**: The `--bootstrap-port-annotation` (default: `sglang.ai/bootstrap-port`) is used to discover bootstrap ports for prefill servers in PD mode; prefill pods should have this annotation set to their bootstrap port value. Ensure RBAC grants the router `get`/`list`/`watch` on pods.
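As a sketch (the pod name and port value are illustrative), a prefill pod can be made discoverable like so:
```bash
# Match the --prefill-selector labels used above:
kubectl label pod prefill-0 app=prefill-server env=prod
# Expose the bootstrap port via the default annotation key:
kubectl annotate pod prefill-0 sglang.ai/bootstrap-port=9000
```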
---
## Observability
Key Prometheus metrics exposed by the router:

| Metric | Type | Description |
| --- | --- | --- |
| `sgl_router_requests_total` | Counter | Total number of requests received by the router's API endpoint. Useful for tracking overall traffic. |
| `sgl_router_processed_requests_total` | Counter | Total requests processed, labeled by `worker`. Critical for spotting load imbalances. |
| `sgl_router_active_workers` | Gauge | The current number of healthy workers in the routing pool. Essential for alerting. |
| `sgl_router_running_requests` | Gauge | The number of currently in-flight requests, labeled by `worker`. For monitoring real-time load. |
| `sgl_router_cache_hits_total` | Counter | Total requests routed to a worker with a matching prefix cache. |
| `sgl_router_cache_misses_total` | Counter | Total requests that could not be routed based on cache locality. |
| `sgl_router_generate_duration_seconds` | Histogram | Tracks end-to-end request latency. Use this to monitor performance (e.g., p95/p99). |
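Two starter queries (PromQL, shown as comments; metric and label names are from the table above):
```bash
# Per-worker request rate, for spotting load imbalance:
#   sum by (worker) (rate(sgl_router_processed_requests_total[5m]))
# p95 end-to-end generation latency:
#   histogram_quantile(0.95, sum by (le) (rate(sgl_router_generate_duration_seconds_bucket[5m])))
```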
---
## Troubleshooting
### Common Issues
1. **Workers never ready / not connecting**: Ensure workers are fully initialized and their health probes respond before starting the router, or increase `--worker-startup-timeout-secs`.
2. **High latency from load imbalance / hot workers**: Inspect the `sgl_router_processed_requests_total` metric grouped by `worker`. Cache-aware routing may be prioritizing cache hits too aggressively; tune `--cache-threshold`, `--balance-abs-threshold`, and `--balance-rel-threshold`.
3. **Circuit breaker flapping**: Increase `--cb-failure-threshold` or extend the timeout/window durations. Consider temporarily disabling retries.
4. **Memory growth**: Reduce `--max-tree-size` or decrease `--eviction-interval-secs` for more aggressive cache cleanup.
SGLang Model Gateway continues to evolve alongside the SGLang runtime. Keep CLI flags, integrations, and documentation aligned when adopting new features or contributing improvements.