.. _installation:

Installation
============

vLLM is a Python library that also contains pre-compiled C++ and CUDA (11.8) binaries.

Requirements
------------

* OS: Linux
* Python: 3.8 -- 3.11
* GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, etc.)

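If you are not sure whether your GPU meets the compute-capability requirement, you can query it with PyTorch (a minimal check, assuming a CUDA-enabled PyTorch installation is available):

.. code-block:: console

    $ # Print the (major, minor) compute capability of the first visible GPU.
    $ python -c "import torch; print(torch.cuda.get_device_capability())"
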
Install with pip
----------------

You can install vLLM using pip:

.. code-block:: console

    $ # (Optional) Create a new conda environment.
    $ conda create -n myenv python=3.8 -y
    $ conda activate myenv

    $ # Install vLLM.
    $ pip install vllm

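To confirm that the installation worked, you can import the package and print its version (a quick sanity check; the printed version depends on the release you installed):

.. code-block:: console

    $ # Check that vLLM can be imported and report its version.
    $ python -c "import vllm; print(vllm.__version__)"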

.. _build_from_source:

Build from source
-----------------

You can also build and install vLLM from source:

.. code-block:: console

    $ git clone https://github.com/vllm-project/vllm.git
    $ cd vllm
    $ pip install -e .  # This may take 5-10 minutes.

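After the build finishes, you can check that the editable install resolves to your source checkout (a quick sanity check; the exact path will vary on your system):

.. code-block:: console

    $ # The reported module path should point inside the cloned vllm directory.
    $ python -c "import vllm; print(vllm.__file__)"
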
.. tip::
    If you have trouble building vLLM, we recommend using the NVIDIA PyTorch Docker image.

    .. code-block:: console

        $ # Pull and run the Docker image with CUDA 11.8.
        $ docker run --gpus all -it --rm --shm-size=8g nvcr.io/nvidia/pytorch:22.12-py3
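
    Inside the container, you can then build vLLM from source using the same steps as above (a sketch; it assumes the container has network access and the required build tools):

    .. code-block:: console

        $ git clone https://github.com/vllm-project/vllm.git
        $ cd vllm
        $ pip install -e .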