.. _supported_models:

Supported Models
================

vLLM supports a variety of generative Transformer models in `HuggingFace Transformers <https://huggingface.co/models>`_.
The following is the list of model architectures that are currently supported by vLLM.
Alongside each architecture, we include some popular models that use it.

.. list-table::
  :widths: 25 25 50
  :header-rows: 1

  * - Architecture
    - Models
    - Example HuggingFace Models
  * - :code:`AquilaForCausalLM`
    - Aquila
    - :code:`BAAI/Aquila-7B`, :code:`BAAI/AquilaChat-7B`, etc.
  * - :code:`BaiChuanForCausalLM`
    - Baichuan
    - :code:`baichuan-inc/Baichuan2-13B-Chat`, :code:`baichuan-inc/Baichuan-7B`, etc.
  * - :code:`ChatGLMModel`
    - ChatGLM
    - :code:`THUDM/chatglm2-6b`, :code:`THUDM/chatglm3-6b`, etc.
  * - :code:`DeciLMForCausalLM`
    - DeciLM
    - :code:`Deci/DeciLM-7B`, :code:`Deci/DeciLM-7B-instruct`, etc.
  * - :code:`BloomForCausalLM`
    - BLOOM, BLOOMZ, BLOOMChat
    - :code:`bigscience/bloom`, :code:`bigscience/bloomz`, etc.
  * - :code:`FalconForCausalLM`
    - Falcon
    - :code:`tiiuae/falcon-7b`, :code:`tiiuae/falcon-40b`, :code:`tiiuae/falcon-rw-7b`, etc.
  * - :code:`GemmaForCausalLM`
    - Gemma
    - :code:`google/gemma-2b`, :code:`google/gemma-7b`, etc.
  * - :code:`GPT2LMHeadModel`
    - GPT-2
    - :code:`gpt2`, :code:`gpt2-xl`, etc.
  * - :code:`GPTBigCodeForCausalLM`
    - StarCoder, SantaCoder, WizardCoder
    - :code:`bigcode/starcoder`, :code:`bigcode/gpt_bigcode-santacoder`, :code:`WizardLM/WizardCoder-15B-V1.0`, etc.
  * - :code:`GPTJForCausalLM`
    - GPT-J
    - :code:`EleutherAI/gpt-j-6b`, :code:`nomic-ai/gpt4all-j`, etc.
  * - :code:`GPTNeoXForCausalLM`
    - GPT-NeoX, Pythia, OpenAssistant, Dolly V2, StableLM
    - :code:`EleutherAI/gpt-neox-20b`, :code:`EleutherAI/pythia-12b`, :code:`OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5`, :code:`databricks/dolly-v2-12b`, :code:`stabilityai/stablelm-tuned-alpha-7b`, etc.
  * - :code:`InternLMForCausalLM`
    - InternLM
    - :code:`internlm/internlm-7b`, :code:`internlm/internlm-chat-7b`, etc.
  * - :code:`InternLM2ForCausalLM`
    - InternLM2
    - :code:`internlm/internlm2-7b`, :code:`internlm/internlm2-chat-7b`, etc.
  * - :code:`LlamaForCausalLM`
    - LLaMA, LLaMA-2, Vicuna, Alpaca, Yi
    - :code:`meta-llama/Llama-2-13b-hf`, :code:`meta-llama/Llama-2-70b-hf`, :code:`openlm-research/open_llama_13b`, :code:`lmsys/vicuna-13b-v1.3`, :code:`01-ai/Yi-6B`, :code:`01-ai/Yi-34B`, etc.
  * - :code:`MistralForCausalLM`
    - Mistral, Mistral-Instruct
    - :code:`mistralai/Mistral-7B-v0.1`, :code:`mistralai/Mistral-7B-Instruct-v0.1`, etc.
  * - :code:`MixtralForCausalLM`
    - Mixtral-8x7B, Mixtral-8x7B-Instruct
    - :code:`mistralai/Mixtral-8x7B-v0.1`, :code:`mistralai/Mixtral-8x7B-Instruct-v0.1`, etc.
  * - :code:`MPTForCausalLM`
    - MPT, MPT-Instruct, MPT-Chat, MPT-StoryWriter
    - :code:`mosaicml/mpt-7b`, :code:`mosaicml/mpt-7b-storywriter`, :code:`mosaicml/mpt-30b`, etc.
  * - :code:`OLMoForCausalLM`
    - OLMo
    - :code:`allenai/OLMo-1B`, :code:`allenai/OLMo-7B`, etc.
  * - :code:`OPTForCausalLM`
    - OPT, OPT-IML
    - :code:`facebook/opt-66b`, :code:`facebook/opt-iml-max-30b`, etc.
  * - :code:`OrionForCausalLM`
    - Orion
    - :code:`OrionStarAI/Orion-14B-Base`, :code:`OrionStarAI/Orion-14B-Chat`, etc.
  * - :code:`PhiForCausalLM`
    - Phi
    - :code:`microsoft/phi-1_5`, :code:`microsoft/phi-2`, etc.
  * - :code:`QWenLMHeadModel`
    - Qwen
    - :code:`Qwen/Qwen-7B`, :code:`Qwen/Qwen-7B-Chat`, etc.
  * - :code:`Qwen2ForCausalLM`
    - Qwen2
    - :code:`Qwen/Qwen2-beta-7B`, :code:`Qwen/Qwen2-beta-7B-Chat`, etc.
  * - :code:`StableLMEpochForCausalLM`
    - StableLM
    - :code:`stabilityai/stablelm-3b-4e1t`, :code:`stabilityai/stablelm-base-alpha-7b-v2`, etc.

If your model uses one of the above architectures, you can run it seamlessly with vLLM.
Otherwise, please refer to :ref:`Adding a New Model <adding_a_new_model>` for instructions on how to implement support for your model.
Alternatively, you can raise an issue on our `GitHub <https://github.com/vllm-project/vllm/issues>`_ project.
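
If you are unsure which architecture a checkpoint uses, note that vLLM resolves it from the :code:`architectures` field of the model's HuggingFace config. A quick sketch for inspecting that field (the model name here is just an example):

.. code-block:: python

    from transformers import AutoConfig

    # vLLM matches the class name(s) listed in ``architectures``
    # against the table above.
    config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
    print(config.architectures)  # ['MistralForCausalLM']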

.. note::
    Currently, the ROCm version of vLLM supports Mistral and Mixtral only for context lengths up to 4096.
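
    For example, you can cap the context length explicitly when running one of these models on ROCm. A minimal sketch using the :code:`max_model_len` engine argument:

    .. code-block:: python

        from vllm import LLM

        # Keep the context length within the current ROCm limit.
        llm = LLM(model="mistralai/Mistral-7B-v0.1", max_model_len=4096)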

.. tip::
    The easiest way to check if your model is supported is to run the program below:

    .. code-block:: python

        from vllm import LLM

        llm = LLM(model=...)  # Name or path of your model
        output = llm.generate("Hello, my name is")
        print(output)

    If vLLM successfully generates text, it indicates that your model is supported.

.. tip::
    To use models from `ModelScope <https://www.modelscope.cn>`_ instead of the HuggingFace Hub, set an environment variable:

    .. code-block:: shell

       $ export VLLM_USE_MODELSCOPE=True

    Then load the model with :code:`trust_remote_code=True`:

    .. code-block:: python

        from vllm import LLM

        llm = LLM(model=..., revision=..., trust_remote_code=True)  # Name or path of your model
        output = llm.generate("Hello, my name is")
        print(output)