Welcome to vLLM!
================

.. figure:: ./assets/logos/vllm-logo-text-light.png
  :width: 60%
  :align: center
  :alt: vLLM
  :class: no-scaled-link

.. raw:: html

   <p style="text-align:center">
   <strong>Easy, fast, and cheap LLM serving for everyone
   </strong>
   </p>

   <p style="text-align:center">
   <script async defer src="https://buttons.github.io/buttons.js"></script>
   <a class="github-button" href="https://github.com/vllm-project/vllm" data-show-count="true" data-size="large" aria-label="Star">Star</a>
   <a class="github-button" href="https://github.com/vllm-project/vllm/subscription" data-icon="octicon-eye" data-size="large" aria-label="Watch">Watch</a>
   <a class="github-button" href="https://github.com/vllm-project/vllm/fork" data-icon="octicon-repo-forked" data-size="large" aria-label="Fork">Fork</a>
   </p>

vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

* State-of-the-art serving throughput
* Efficient management of attention key and value memory with **PagedAttention**
* Continuous batching of incoming requests
* Optimized CUDA kernels

vLLM is flexible and easy to use with:

* Seamless integration with popular HuggingFace models
* High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
* Tensor parallelism support for distributed inference
* Streaming outputs
* OpenAI-compatible API server
* Support for NVIDIA CUDA and AMD ROCm
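
As a quick taste of the Python API, here is a minimal sketch of offline
batched inference. The model name is only an example; any supported
HuggingFace model works the same way, and passing ``tensor_parallel_size``
shards the model across multiple GPUs:

.. code-block:: python

   from vllm import LLM, SamplingParams

   # A batch of prompts; vLLM interleaves them with continuous batching.
   prompts = [
       "Hello, my name is",
       "The capital of France is",
   ]
   sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

   # "facebook/opt-125m" is just a small example model. For multi-GPU
   # inference, pass tensor_parallel_size=<number of GPUs>.
   llm = LLM(model="facebook/opt-125m")

   outputs = llm.generate(prompts, sampling_params)
   for output in outputs:
       print(output.prompt, output.outputs[0].text)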

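The OpenAI-compatible server can likewise be queried with any standard HTTP
or OpenAI client. Below is a minimal sketch using ``requests``; the launch
command, model name, and port are illustrative assumptions:

.. code-block:: python

   # Assumes a server was started separately, e.g. with:
   #   python -m vllm.entrypoints.openai.api_server --model facebook/opt-125m
   import requests

   response = requests.post(
       "http://localhost:8000/v1/completions",
       json={
           "model": "facebook/opt-125m",
           "prompt": "San Francisco is a",
           "max_tokens": 16,
       },
   )
   print(response.json()["choices"][0]["text"])
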
For more information, check out the following:

* `vLLM announcing blog post <https://vllm.ai>`_ (intro to PagedAttention)
* `vLLM paper <https://arxiv.org/abs/2309.06180>`_ (SOSP 2023)
* `How continuous batching enables 23x throughput in LLM inference while reducing p50 latency <https://www.anyscale.com/blog/continuous-batching-llm-inference>`_ by Cade Daniel et al.

Documentation
-------------

.. toctree::
   :maxdepth: 1
   :caption: Getting Started

   getting_started/installation
   getting_started/amd-installation
   getting_started/quickstart

.. toctree::
   :maxdepth: 1
   :caption: Serving

   serving/distributed_serving
   serving/run_on_sky
   serving/deploying_with_triton
   serving/deploying_with_docker
   serving/serving_with_langchain
   serving/metrics

.. toctree::
   :maxdepth: 1
   :caption: Models

   models/supported_models
   models/adding_model
   models/engine_args

.. toctree::
   :maxdepth: 1
   :caption: Quantization

   quantization/auto_awq