Welcome to vLLM!
================

.. figure:: ./assets/logos/vllm-logo-text-light.png
  :width: 60%
  :align: center
  :alt: vLLM
  :class: no-scaled-link

.. raw:: html

   <p style="text-align:center">
   <strong>Easy, fast, and cheap LLM serving for everyone
   </strong>
   </p>

   <p style="text-align:center">
   <script async defer src="https://buttons.github.io/buttons.js"></script>
   <a class="github-button" href="https://github.com/vllm-project/vllm" data-show-count="true" data-size="large" aria-label="Star">Star</a>
   <a class="github-button" href="https://github.com/vllm-project/vllm/subscription" data-icon="octicon-eye" data-size="large" aria-label="Watch">Watch</a>
   <a class="github-button" href="https://github.com/vllm-project/vllm/fork" data-icon="octicon-repo-forked" data-size="large" aria-label="Fork">Fork</a>
   </p>



vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

* State-of-the-art serving throughput
* Efficient management of attention key and value memory with **PagedAttention**
* Continuous batching of incoming requests
* Quantization: `GPTQ <https://arxiv.org/abs/2210.17323>`_, `AWQ <https://arxiv.org/abs/2306.00978>`_, `SqueezeLLM <https://arxiv.org/abs/2306.07629>`_
* Optimized CUDA kernels

vLLM is flexible and easy to use with:

* Seamless integration with popular HuggingFace models
* High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
* Tensor parallelism support for distributed inference
* Streaming outputs
* OpenAI-compatible API server
* Support for NVIDIA GPUs and AMD GPUs
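As a taste of the API, a minimal offline-inference sketch might look like the following (a hedged example, assuming vLLM is installed and a GPU is available; ``facebook/opt-125m`` is used only because it is a small model):

.. code-block:: python

    # Minimal offline batched inference with vLLM.
    # Assumes vLLM is installed and a GPU is available; the model
    # name below is just a small example model from HuggingFace.
    from vllm import LLM, SamplingParams

    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    llm = LLM(model="facebook/opt-125m")  # loads weights from HuggingFace
    outputs = llm.generate(prompts, sampling_params)  # batched generation

    for output in outputs:
        print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")

For online serving, the OpenAI-compatible server can be started with ``python -m vllm.entrypoints.openai.api_server --model <model>`` and queried with any OpenAI API client.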

For more information, check out the following:

* `vLLM announcement blog post <https://vllm.ai>`_ (intro to PagedAttention)
* `vLLM paper <https://arxiv.org/abs/2309.06180>`_ (SOSP 2023)
* `How continuous batching enables 23x throughput in LLM inference while reducing p50 latency <https://www.anyscale.com/blog/continuous-batching-llm-inference>`_ by Cade Daniel et al.



Documentation
-------------

.. toctree::
   :maxdepth: 1
   :caption: Getting Started

   getting_started/installation
   getting_started/amd-installation
   getting_started/quickstart

.. toctree::
   :maxdepth: 1
   :caption: Serving

   serving/distributed_serving
   serving/run_on_sky
   serving/deploying_with_triton
   serving/deploying_with_docker
   serving/serving_with_langchain
   serving/metrics

.. toctree::
   :maxdepth: 1
   :caption: Models

   models/supported_models
   models/adding_model
   models/engine_args

.. toctree::
   :maxdepth: 1
   :caption: Quantization

   quantization/auto_awq