<div align="center"  id="sglangtop">
<img src="https://raw.githubusercontent.com/sgl-project/sglang/main/assets/logo.png" alt="logo" width="400" style="margin: 10px">

[![PyPI](https://img.shields.io/pypi/v/sglang)](https://pypi.org/project/sglang)
![PyPI - Downloads](https://img.shields.io/pypi/dm/sglang)
[![license](https://img.shields.io/github/license/sgl-project/sglang.svg)](https://github.com/sgl-project/sglang/tree/main/LICENSE)
[![issue resolution](https://img.shields.io/github/issues-closed-raw/sgl-project/sglang)](https://github.com/sgl-project/sglang/issues)
[![open issues](https://img.shields.io/github/issues-raw/sgl-project/sglang)](https://github.com/sgl-project/sglang/issues)
[![](https://img.shields.io/badge/Gurubase-(experimental)-006BFF)](https://gurubase.io/g/sglang)

</div>

--------------------------------------------------------------------------------

| [**Blog**](https://lmsys.org/blog/2024-07-25-sglang-llama3/)
| [**Documentation**](https://docs.sglang.ai/)
| [**Join Slack**](https://slack.sglang.ai/)
| [**Join Bi-Weekly Development Meeting**](https://meeting.sglang.ai/)
| [**Slides**](https://github.com/sgl-project/sgl-learning-materials?tab=readme-ov-file#slides) |

## News
- [2025/01] 🔥 SGLang provides day one support for DeepSeek V3/R1 models on NVIDIA and AMD GPUs with DeepSeek-specific optimizations. ([instructions](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3), [AMD blog](https://www.amd.com/en/developer/resources/technical-articles/amd-instinct-gpus-power-deepseek-v3-revolutionizing-ai-development-with-sglang.html))
- [2024/12] 🔥 v0.4 Release: Zero-Overhead Batch Scheduler, Cache-Aware Load Balancer, Faster Structured Outputs ([blog](https://lmsys.org/blog/2024-12-04-sglang-v0-4/)).
- [2024/09] v0.3 Release: 7x Faster DeepSeek MLA, 1.5x Faster torch.compile, Multi-Image/Video LLaVA-OneVision ([blog](https://lmsys.org/blog/2024-09-04-sglang-v0-3/)).
- [2024/07] v0.2 Release: Faster Llama3 Serving with SGLang Runtime (vs. TensorRT-LLM, vLLM) ([blog](https://lmsys.org/blog/2024-07-25-sglang-llama3/)).

<details>
<summary>More</summary>

- [2024/10] The First SGLang Online Meetup ([slides](https://github.com/sgl-project/sgl-learning-materials?tab=readme-ov-file#the-first-sglang-online-meetup)).
- [2024/02] SGLang enables **3x faster JSON decoding** with compressed finite state machine ([blog](https://lmsys.org/blog/2024-02-05-compressed-fsm/)).
- [2024/01] SGLang provides up to **5x faster inference** with RadixAttention ([blog](https://lmsys.org/blog/2024-01-17-sglang/)).
- [2024/01] SGLang powers the serving of the official **LLaVA v1.6** release demo ([usage](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#demo)).

</details>

## About
SGLang is a fast serving framework for large language models and vision language models.
By co-designing the backend runtime and the frontend language, it makes your interactions with models faster and more controllable.
The core features include:

- **Fast Backend Runtime**: Provides efficient serving with RadixAttention for prefix caching, jump-forward constrained decoding, zero-overhead CPU scheduler, continuous batching, token attention (paged attention), tensor parallelism, FlashInfer kernels, chunked prefill, and quantization (FP8/INT4/AWQ/GPTQ).
- **Flexible Frontend Language**: Offers an intuitive interface for programming LLM applications, including chained generation calls, advanced prompting, control flow, multi-modal inputs, parallelism, and external interactions.
- **Extensive Model Support**: Supports a wide range of generative models (Llama, Gemma, Mistral, Qwen, DeepSeek, LLaVA, etc.), embedding models (e5-mistral, gte, mcdse), and reward models (Skywork), with easy extensibility for integrating new models.
- **Active Community**: SGLang is open-source and backed by an active community with industry adoption.
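
As a small illustration of the frontend language, the sketch below chains two generation calls in a multi-turn program. It assumes an SGLang server is already running locally on port 30000; the endpoint, question strings, and token limit are illustrative, not prescriptive.

```python
import sglang as sgl

@sgl.function
def multi_turn_qa(s, question_1, question_2):
    # Each statement appends to the shared program state `s`;
    # sgl.gen() marks where the model fills in text.
    s += sgl.system("You are a helpful assistant.")
    s += sgl.user(question_1)
    s += sgl.assistant(sgl.gen("answer_1", max_tokens=256))
    s += sgl.user(question_2)
    s += sgl.assistant(sgl.gen("answer_2", max_tokens=256))

# Point the frontend at a locally running SGLang server (launched separately).
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = multi_turn_qa.run(
    question_1="What is the capital of France?",
    question_2="Roughly how many people live there?",
)
print(state["answer_1"])
print(state["answer_2"])
```

Generated spans are retrieved by the names passed to `sgl.gen`, which is what makes chained calls and control flow composable.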

## Getting Started
- [Install SGLang](https://docs.sglang.ai/start/install.html)
- [Quick Start](https://docs.sglang.ai/start/send_request.html)
- [Backend Tutorial](https://docs.sglang.ai/backend/openai_api_completions.html)
- [Frontend Tutorial](https://docs.sglang.ai/frontend/frontend.html)
- [Contribution Guide](https://docs.sglang.ai/references/contribution_guide.html)
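
For a quick first run, the commands below sketch the typical flow: install, launch an OpenAI-compatible server, and send a request. The model path and port are examples; see the install guide above for platform-specific instructions and hardware requirements.

```shell
# Install SGLang with serving dependencies (assumes a CUDA-capable GPU).
pip install "sglang[all]"

# Launch a server; any supported model path works here.
python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --port 30000

# Query it through the OpenAI-compatible chat completions endpoint.
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct",
       "messages": [{"role": "user", "content": "Hello!"}]}'
```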

## Benchmark and Performance
Learn more in the release blogs: [v0.2 blog](https://lmsys.org/blog/2024-07-25-sglang-llama3/), [v0.3 blog](https://lmsys.org/blog/2024-09-04-sglang-v0-3/), and [v0.4 blog](https://lmsys.org/blog/2024-12-04-sglang-v0-4/).

## Roadmap
[Development Roadmap (2024 Q4)](https://github.com/sgl-project/sglang/issues/1487)

## Adoption and Sponsorship
The project is supported by (alphabetically): AMD, Baseten, Cursor, DataCrunch, Etched, Hyperbolic, Jam & Tea Studios, LinkedIn, LMSYS.org, Meituan, Nebius, Novita AI, NVIDIA, RunPod, Stanford, UC Berkeley, UCLA, xAI, 01.AI.

## Contact Us

For enterprises interested in adopting or deploying SGLang at scale, including technical consulting, sponsorship opportunities, or partnership inquiries, please contact us at contact@sglang.ai.

## Acknowledgment and Citation
We learned the design and reused code from the following projects: [Guidance](https://github.com/guidance-ai/guidance), [vLLM](https://github.com/vllm-project/vllm), [LightLLM](https://github.com/ModelTC/lightllm), [FlashInfer](https://github.com/flashinfer-ai/flashinfer), [Outlines](https://github.com/outlines-dev/outlines), and [LMQL](https://github.com/eth-sri/lmql). Please cite the paper, [SGLang: Efficient Execution of Structured Language Model Programs](https://arxiv.org/abs/2312.07104), if you find the project useful.