Welcome to vLLM!
================

.. figure:: ./assets/logos/vllm-logo-text-light.png
  :width: 60%
  :align: center
  :alt: vLLM
  :class: no-scaled-link

.. raw:: html

   <p style="text-align:center">
   <strong>Easy, fast, and cheap LLM serving for everyone
   </strong>
   </p>

   <p style="text-align:center">
   <script async defer src="https://buttons.github.io/buttons.js"></script>
   <a class="github-button" href="https://github.com/vllm-project/vllm" data-show-count="true" data-size="large" aria-label="Star">Star</a>
   <a class="github-button" href="https://github.com/vllm-project/vllm/subscription" data-icon="octicon-eye" data-size="large" aria-label="Watch">Watch</a>
   <a class="github-button" href="https://github.com/vllm-project/vllm/fork" data-icon="octicon-repo-forked" data-size="large" aria-label="Fork">Fork</a>
   </p>



vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

* State-of-the-art serving throughput
* Efficient management of attention key and value memory with **PagedAttention**
* Continuous batching of incoming requests
* Fast model execution with CUDA/HIP graph
* Quantization: `GPTQ <https://arxiv.org/abs/2210.17323>`_, `AWQ <https://arxiv.org/abs/2306.00978>`_, `SqueezeLLM <https://arxiv.org/abs/2306.07629>`_, FP8 KV Cache (see the sketch after this list)
* Optimized CUDA kernels
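
As a concrete illustration of the quantization support above, here is a minimal
sketch that loads an AWQ-quantized checkpoint through the offline ``LLM`` API.
The model name is only an example; any AWQ checkpoint from the HuggingFace Hub
that fits on your GPU should work the same way.

.. code-block:: python

   from vllm import LLM, SamplingParams

   # Example AWQ checkpoint; substitute any AWQ-quantized model you have access to.
   llm = LLM(model="TheBloke/Llama-2-7b-Chat-AWQ", quantization="awq")

   outputs = llm.generate(
       ["What is PagedAttention?"],
       SamplingParams(temperature=0.8, max_tokens=64),
   )
   print(outputs[0].outputs[0].text)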

vLLM is flexible and easy to use with:

* Seamless integration with popular HuggingFace models (a minimal example follows this list)
* High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
* Tensor parallelism support for distributed inference
* Streaming outputs
* OpenAI-compatible API server
* Support for NVIDIA GPUs and AMD GPUs
* (Experimental) Prefix caching support
* (Experimental) Multi-LoRA support

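To give a flavor of this interface, here is a minimal offline-inference sketch;
the model name and sampling settings are illustrative only:

.. code-block:: python

   from vllm import LLM, SamplingParams

   # Any HuggingFace model name can be passed here; a small model keeps the example cheap.
   llm = LLM(model="facebook/opt-125m")
   sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

   # Generate completions for a batch of prompts; continuous batching happens under the hood.
   outputs = llm.generate(["Hello, my name is", "The capital of France is"], sampling_params)
   for output in outputs:
       print(output.prompt, output.outputs[0].text)

The same models can also be served through the OpenAI-compatible API server
covered in the Quickstart and Serving sections below.
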
For more information, check out the following:

* `vLLM announcement blog post <https://vllm.ai>`_ (intro to PagedAttention)
* `vLLM paper <https://arxiv.org/abs/2309.06180>`_ (SOSP 2023)
* `How continuous batching enables 23x throughput in LLM inference while reducing p50 latency <https://www.anyscale.com/blog/continuous-batching-llm-inference>`_ by Cade Daniel et al.

Documentation
-------------

.. toctree::
   :maxdepth: 1
   :caption: Getting Started

   getting_started/installation
   getting_started/amd-installation
   getting_started/quickstart

.. toctree::
   :maxdepth: 1
   :caption: Serving

   serving/distributed_serving
   serving/run_on_sky
   serving/deploying_with_triton
   serving/deploying_with_docker
   serving/serving_with_langchain
   serving/metrics

.. toctree::
   :maxdepth: 1
   :caption: Models

   models/supported_models
   models/adding_model
   models/engine_args

.. toctree::
   :maxdepth: 1
   :caption: Quantization

   quantization/auto_awq

.. toctree::
   :maxdepth: 2
   :caption: Developer Documentation

   dev/engine/engine_index

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`