.. _supported_models:

Supported Models
================

vLLM supports a variety of generative Transformer models in `HuggingFace Transformers <https://huggingface.co/models>`_.
The following is the list of model architectures that are currently supported by vLLM.
Alongside each architecture, we include some popular models that use it.

.. list-table::
  :widths: 25 25 50
  :header-rows: 1

  * - Architecture
    - Models
    - Example HuggingFace Models
  * - :code:`BaiChuanForCausalLM`
    - Baichuan-7B
    - :code:`baichuan-inc/Baichuan-7B`, etc.
  * - :code:`BloomForCausalLM`
    - BLOOM, BLOOMZ, BLOOMChat
    - :code:`bigscience/bloom`, :code:`bigscience/bloomz`, etc.
  * - :code:`FalconForCausalLM`
    - Falcon
    - :code:`tiiuae/falcon-7b`, :code:`tiiuae/falcon-40b`, :code:`tiiuae/falcon-rw-7b`, etc.
  * - :code:`GPT2LMHeadModel`
    - GPT-2
    - :code:`gpt2`, :code:`gpt2-xl`, etc.
  * - :code:`GPTBigCodeForCausalLM`
    - StarCoder, SantaCoder, WizardCoder
    - :code:`bigcode/starcoder`, :code:`bigcode/gpt_bigcode-santacoder`, :code:`WizardLM/WizardCoder-15B-V1.0`, etc.
  * - :code:`GPTJForCausalLM`
    - GPT-J
    - :code:`EleutherAI/gpt-j-6b`, :code:`nomic-ai/gpt4all-j`, etc.
  * - :code:`GPTNeoXForCausalLM`
    - GPT-NeoX, Pythia, OpenAssistant, Dolly V2, StableLM
    - :code:`EleutherAI/gpt-neox-20b`, :code:`EleutherAI/pythia-12b`, :code:`OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5`, :code:`databricks/dolly-v2-12b`, :code:`stabilityai/stablelm-tuned-alpha-7b`, etc.
  * - :code:`LlamaForCausalLM`
    - LLaMA, LLaMA-2, Vicuna, Alpaca, Koala, Guanaco
    - :code:`meta-llama/Llama-2-13b-hf`, :code:`openlm-research/open_llama_13b`, :code:`lmsys/vicuna-13b-v1.3`, :code:`young-geng/koala`, :code:`JosephusCheung/Guanaco`, etc.
  * - :code:`MPTForCausalLM`
    - MPT, MPT-Instruct, MPT-Chat, MPT-StoryWriter
    - :code:`mosaicml/mpt-7b`, :code:`mosaicml/mpt-7b-storywriter`, :code:`mosaicml/mpt-30b`, etc.
  * - :code:`OPTForCausalLM`
    - OPT, OPT-IML
    - :code:`facebook/opt-66b`, :code:`facebook/opt-iml-max-30b`, etc.

If your model uses one of the architectures above, you can run it seamlessly with vLLM.
Otherwise, please refer to :ref:`Adding a New Model <adding_a_new_model>` for instructions on how to implement support for your model.
Alternatively, you can raise an issue on our `GitHub <https://github.com/vllm-project/vllm/issues>`_ project.

.. tip::
    The easiest way to check if your model is supported is to run the program below:

    .. code-block:: python

        from vllm import LLM

        llm = LLM(model=...)  # Name or path of your model
        output = llm.generate("Hello, my name is")
        print(output)

    If vLLM successfully generates text, it indicates that your model is supported.