.. _supported_models:

Supported Models
================

vLLM supports a variety of generative Transformer models from `HuggingFace Transformers <https://huggingface.co/models>`_.
The following is the list of model architectures that are currently supported by vLLM.
Alongside each architecture, we include some popular models that use it.

.. list-table::
  :widths: 25 75
  :header-rows: 1

  * - Architecture
    - Models
  * - :code:`GPT2LMHeadModel`
    - GPT-2
  * - :code:`GPTNeoXForCausalLM`
    - GPT-NeoX, Pythia, OpenAssistant, Dolly V2, StableLM
  * - :code:`LlamaForCausalLM`
    - LLaMA, Vicuna, Alpaca, Koala, Guanaco
  * - :code:`OPTForCausalLM`
    - OPT, OPT-IML

If your model uses one of the architectures above, you can run it with vLLM out of the box.
Otherwise, please refer to :ref:`Adding a New Model <adding_a_new_model>` for instructions on how to implement support for your model.
Alternatively, you can raise an issue on our `GitHub <https://github.com/WoosukKwon/vllm/issues>`_ project.

.. tip::
    The easiest way to check if your model is supported is to run the program below:

    .. code-block:: python

        from vllm import LLM

        llm = LLM(model=...)  # Name or path of your model
        output = llm.generate("Hello, my name is")  # Returns the generated completions
        print(output)

    If vLLM successfully generates text, your model is supported.
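
A lighter-weight check, which avoids downloading and loading model weights, is to compare the ``architectures`` field of your model's HuggingFace ``config.json`` against the table above. The sketch below hardcodes that table for illustration; it is not part of vLLM's API, and the list will grow as new architectures are added.

.. code-block:: python

    import json

    # Architectures from the table above (illustrative snapshot, not an API).
    SUPPORTED_ARCHITECTURES = {
        "GPT2LMHeadModel",
        "GPTNeoXForCausalLM",
        "LlamaForCausalLM",
        "OPTForCausalLM",
    }

    def is_supported(config_json: str) -> bool:
        """Return True if any architecture declared in config.json is in the table."""
        config = json.loads(config_json)
        return any(arch in SUPPORTED_ARCHITECTURES
                   for arch in config.get("architectures", []))

    # Example: facebook/opt-125m declares OPTForCausalLM in its config.json.
    print(is_supported('{"architectures": ["OPTForCausalLM"]}'))  # True
    print(is_supported('{"architectures": ["BertModel"]}'))       # False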