<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="./docs/source/assets/logos/vllm-logo-text-dark.png">
    <img alt="vLLM" src="./docs/source/assets/logos/vllm-logo-text-light.png" width=55%>
  </picture>
</p>

<h3 align="center">
Easy, fast, and cheap LLM serving for everyone
</h3>

<p align="center">
| <a href="https://vllm.readthedocs.io/en/latest/"><b>Documentation</b></a> | <a href=""><b>Blog</b></a> |
</p>

---

*Latest News* 🔥

- [2023/06] We officially released vLLM! vLLM has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid-April. Check out our [blog post]().

---

vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

- State-of-the-art serving throughput
- Efficient management of attention key and value memory with **PagedAttention**
- Dynamic batching of incoming requests
- Optimized CUDA kernels

vLLM is flexible and easy to use with:

- Seamless integration with popular HuggingFace models
- High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more (see the sketch after this list)
- Tensor parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server
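
As a taste of the Python API, here is a minimal sketch of offline batched inference with parallel sampling (the model name and sampling values are illustrative):

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]

# Sample 3 completions per prompt in parallel (n=3).
sampling_params = SamplingParams(n=3, temperature=0.8, top_p=0.95)

# Any supported HuggingFace model name works here; pass
# tensor_parallel_size=N to shard the model across N GPUs.
llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    for completion in output.outputs:
        print(f"Prompt: {output.prompt!r} -> {completion.text!r}")
```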

vLLM seamlessly supports many HuggingFace models, including the following architectures:

- GPT-2 (e.g., `gpt2`, `gpt2-xl`, etc.)
- GPTNeoX (e.g., `EleutherAI/gpt-neox-20b`, `databricks/dolly-v2-12b`, `stabilityai/stablelm-tuned-alpha-7b`, etc.)
- LLaMA (e.g., `lmsys/vicuna-13b-v1.3`, `young-geng/koala`, `openlm-research/open_llama_13b`, etc.)
- OPT (e.g., `facebook/opt-66b`, `facebook/opt-iml-max-30b`, etc.)

Install vLLM with pip or [from source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source):

```bash
pip install vllm
```
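
To build from source instead, a sketch of the standard editable install (see the linked installation guide for prerequisites):

```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .  # compiling the CUDA kernels may take several minutes
```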

## Getting Started

Visit our [documentation](https://vllm.readthedocs.io/en/latest/) to get started.
- [Installation](https://vllm.readthedocs.io/en/latest/getting_started/installation.html)
- [Quickstart](https://vllm.readthedocs.io/en/latest/getting_started/quickstart.html)
- [Supported Models](https://vllm.readthedocs.io/en/latest/models/supported_models.html)
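
For a quick taste of the OpenAI-compatible server, here is a minimal sketch (the model name is illustrative, and the default port is assumed to be 8000; see the Quickstart for details):

```bash
# Start the OpenAI-compatible server.
python -m vllm.entrypoints.openai.api_server --model facebook/opt-125m

# In another shell, query it using the OpenAI completions format.
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "facebook/opt-125m", "prompt": "San Francisco is a", "max_tokens": 16}'
```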

## Performance

vLLM achieves up to 24x higher throughput than HuggingFace Transformers (HF) and up to 3.5x higher throughput than Text Generation Inference (TGI).
For details, check out our [blog post]().

<p align="center">
  <picture>
  <source media="(prefers-color-scheme: dark)" srcset="./docs/source/assets/figures/perf_a10g_n1_dark.png">
  <img src="./docs/source/assets/figures/perf_a10g_n1_light.png" width="45%">
  </picture>
  <picture>
  <source media="(prefers-color-scheme: dark)" srcset="./docs/source/assets/figures/perf_a100_n1_dark.png">
  <img src="./docs/source/assets/figures/perf_a100_n1_light.png" width="45%">
  </picture>
  <br>
  <em> Serving throughput when each request asks for 1 output completion. </em>
</p>

<p align="center">
  <picture>
  <source media="(prefers-color-scheme: dark)" srcset="./docs/source/assets/figures/perf_a10g_n3_dark.png">
  <img src="./docs/source/assets/figures/perf_a10g_n3_light.png" width="45%">
  </picture>
  <picture>
  <source media="(prefers-color-scheme: dark)" srcset="./docs/source/assets/figures/perf_a100_n3_dark.png">
  <img src="./docs/source/assets/figures/perf_a100_n3_light.png" width="45%">
  </picture>
  <br>
  <em> Serving throughput when each request asks for 3 output completions. </em>
</p>

## Contributing

We welcome and value any contributions and collaborations.
Please check out [CONTRIBUTING.md](./CONTRIBUTING.md) for how to get involved.