<div align="center" id="sglangtop">
<img src="https://raw.githubusercontent.com/sgl-project/sglang/main/assets/logo.png" alt="logo" width="400" style="margin: 10px">

[![PyPI](https://img.shields.io/pypi/v/sglang)](https://pypi.org/project/sglang)
![PyPI - Downloads](https://img.shields.io/pypi/dm/sglang)
[![license](https://img.shields.io/github/license/sgl-project/sglang.svg)](https://github.com/sgl-project/sglang/tree/main/LICENSE)
[![issue resolution](https://img.shields.io/github/issues-closed-raw/sgl-project/sglang)](https://github.com/sgl-project/sglang/issues)
[![open issues](https://img.shields.io/github/issues-raw/sgl-project/sglang)](https://github.com/sgl-project/sglang/issues)
[![](https://img.shields.io/badge/Gurubase-(experimental)-006BFF)](https://gurubase.io/g/sglang)
</div>

--------------------------------------------------------------------------------

| [**Blog**](https://lmsys.org/blog/2024-07-25-sglang-llama3/)
| [**Documentation**](https://docs.sglang.ai/)
| [**Join Slack**](https://slack.sglang.ai/)
| [**Join Bi-Weekly Development Meeting**](https://meeting.sglang.ai/)
| [**Roadmap**](https://github.com/sgl-project/sglang/issues/4042)
| [**Slides**](https://github.com/sgl-project/sgl-learning-materials?tab=readme-ov-file#slides) |

## News
- [2025/03] Supercharge DeepSeek-R1 Inference on AMD Instinct MI300X ([AMD blog](https://rocm.blogs.amd.com/artificial-intelligence/DeepSeekR1-Part2/README.html))
- [2025/03] SGLang Joins PyTorch Ecosystem: Efficient LLM Serving Engine ([PyTorch blog](https://pytorch.org/blog/sglang-joins-pytorch/))
- [2025/02] Unlock DeepSeek-R1 Inference Performance on AMD Instinct™ MI300X GPU ([AMD blog](https://rocm.blogs.amd.com/artificial-intelligence/DeepSeekR1_Perf/README.html))
- [2025/01] 🔥 SGLang provides day one support for DeepSeek V3/R1 models on NVIDIA and AMD GPUs with DeepSeek-specific optimizations. ([instructions](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3), [AMD blog](https://www.amd.com/en/developer/resources/technical-articles/amd-instinct-gpus-power-deepseek-v3-revolutionizing-ai-development-with-sglang.html), [10+ other companies](https://x.com/lmsysorg/status/1887262321636221412))
- [2024/12] 🔥 v0.4 Release: Zero-Overhead Batch Scheduler, Cache-Aware Load Balancer, Faster Structured Outputs ([blog](https://lmsys.org/blog/2024-12-04-sglang-v0-4/)).
- [2024/09] v0.3 Release: 7x Faster DeepSeek MLA, 1.5x Faster torch.compile, Multi-Image/Video LLaVA-OneVision ([blog](https://lmsys.org/blog/2024-09-04-sglang-v0-3/)).
- [2024/07] v0.2 Release: Faster Llama3 Serving with SGLang Runtime (vs. TensorRT-LLM, vLLM) ([blog](https://lmsys.org/blog/2024-07-25-sglang-llama3/)).

<details>
<summary>More</summary>

- [2024/10] The First SGLang Online Meetup ([slides](https://github.com/sgl-project/sgl-learning-materials?tab=readme-ov-file#the-first-sglang-online-meetup)).
- [2024/02] SGLang enables **3x faster JSON decoding** with compressed finite state machine ([blog](https://lmsys.org/blog/2024-02-05-compressed-fsm/)).
- [2024/01] SGLang provides up to **5x faster inference** with RadixAttention ([blog](https://lmsys.org/blog/2024-01-17-sglang/)).
- [2024/01] SGLang powers the serving of the official **LLaVA v1.6** release demo ([usage](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#demo)).

</details>

## About
SGLang is a fast serving framework for large language models and vision language models.
It makes your interaction with models faster and more controllable by co-designing the backend runtime and frontend language.
The core features include:

- **Fast Backend Runtime**: Provides efficient serving with RadixAttention for prefix caching, zero-overhead CPU scheduler, continuous batching, token attention (paged attention), speculative decoding, tensor parallelism, chunked prefill, structured outputs, quantization (FP8/INT4/AWQ/GPTQ), and multi-LoRA batching.
- **Flexible Frontend Language**: Offers an intuitive interface for programming LLM applications, including chained generation calls, advanced prompting, control flow, multi-modal inputs, parallelism, and external interactions.
- **Extensive Model Support**: Supports a wide range of generative models (Llama, Gemma, Mistral, Qwen, DeepSeek, LLaVA, etc.), embedding models (e5-mistral, gte, mcdse), and reward models (Skywork), with easy extensibility for integrating new models.
- **Active Community**: SGLang is open-source and backed by an active community with industry adoption.
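
The prefix caching mentioned above can be illustrated with a toy sketch: requests that share a leading span of prompt tokens can reuse the cached KV state for that span, so only the new suffix needs fresh prefill. The `PrefixCache` class below is a minimal, hypothetical illustration of the idea, not SGLang's actual radix-tree implementation.

```python
# Toy sketch of the prefix-caching idea behind RadixAttention
# (illustrative only; not SGLang's implementation).

class PrefixCache:
    """A trie over token ids, recording which prompt prefixes are cached."""

    def __init__(self):
        self.root = {}  # token id -> child node (nested dicts)

    def insert(self, tokens):
        """Record a served prompt so later requests can reuse its prefix."""
        node = self.root
        for t in tokens:
            node = node.setdefault(t, {})

    def longest_cached_prefix(self, tokens):
        """Return how many leading tokens of `tokens` are already cached."""
        node, n = self.root, 0
        for t in tokens:
            if t not in node:
                break
            node = node[t]
            n += 1
        return n


cache = PrefixCache()
cache.insert([1, 2, 3, 4])                       # first request's prompt
hit = cache.longest_cached_prefix([1, 2, 3, 9])  # shares a 3-token prefix
# hit == 3: only the final token of the new prompt needs fresh prefill
```

In the real runtime, each trie node would own the KV-cache blocks for its tokens; the scheduler batches requests to maximize such prefix hits.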

## Getting Started
- [Install SGLang](https://docs.sglang.ai/start/install.html)
- [Quick Start](https://docs.sglang.ai/backend/send_request.html)
- [Backend Tutorial](https://docs.sglang.ai/backend/openai_api_completions.html)
- [Frontend Tutorial](https://docs.sglang.ai/frontend/frontend.html)
- [Contribution Guide](https://docs.sglang.ai/references/contribution_guide.html)
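
The quick-start flow above boils down to two steps: launch a server, then query its OpenAI-compatible `/v1/chat/completions` endpoint. The snippet below is a minimal standard-library sketch of the client side; the model path, port, and the `build_chat_request`/`query_sglang` helper names are illustrative assumptions, not part of SGLang's API.

```python
# Minimal sketch of querying a locally running SGLang server.
# Assumes a server launched with something like:
#   python -m sglang.launch_server --model-path <your-model> --port 30000
import json
import urllib.request


def build_chat_request(prompt: str, model: str = "default") -> dict:
    """Build an OpenAI-style chat completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }


def query_sglang(prompt: str, base_url: str = "http://localhost:30000") -> str:
    """POST the payload to the OpenAI-compatible endpoint and return the reply."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# With a server running, this would print the model's reply:
#     print(query_sglang("List three uses of a radix tree."))
```

Because the endpoint follows the OpenAI schema, the official `openai` Python client can be pointed at the same `base_url` instead of hand-rolling requests.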

## Benchmark and Performance
Learn more in the release blogs: [v0.2 blog](https://lmsys.org/blog/2024-07-25-sglang-llama3/), [v0.3 blog](https://lmsys.org/blog/2024-09-04-sglang-v0-3/), [v0.4 blog](https://lmsys.org/blog/2024-12-04-sglang-v0-4/)

## Roadmap
[Development Roadmap (2025 H1)](https://github.com/sgl-project/sglang/issues/4042)

## Adoption and Sponsorship
The project has been deployed to large-scale production, generating trillions of tokens every day.
It is supported by the following institutions: AMD, Atlas Cloud, Baseten, Cursor, DataCrunch, Etched, Hyperbolic, Iflytek, Jam & Tea Studios, LinkedIn, LMSYS, Meituan, Nebius, Novita AI, NVIDIA, Oracle, RunPod, Stanford, UC Berkeley, UCLA, xAI, and 01.AI.

<img src="https://raw.githubusercontent.com/sgl-project/sgl-learning-materials/main/slides/adoption.png" alt="adoption" width="800" style="margin: 10px">

## Contact Us

For enterprises interested in adopting or deploying SGLang at scale, including technical consulting, sponsorship opportunities, or partnership inquiries, please contact us at contact@sglang.ai.

## Acknowledgment
We learned from the design of, and reused code from, the following projects: [Guidance](https://github.com/guidance-ai/guidance), [vLLM](https://github.com/vllm-project/vllm), [LightLLM](https://github.com/ModelTC/lightllm), [FlashInfer](https://github.com/flashinfer-ai/flashinfer), [Outlines](https://github.com/outlines-dev/outlines), and [LMQL](https://github.com/eth-sri/lmql).