.. _supported_models:

Supported Models
================

vLLM supports a variety of generative Transformer models in `HuggingFace Transformers <https://huggingface.co/models>`_.
The following is the list of model architectures that are currently supported by vLLM.
Alongside each architecture, we include some popular models that use it.

.. list-table::
  :widths: 25 25 50
  :header-rows: 1

  * - Architecture
    - Models
    - Example HuggingFace Models
  * - :code:`AquilaForCausalLM`
    - Aquila
    - :code:`BAAI/Aquila-7B`, :code:`BAAI/AquilaChat-7B`, etc.
  * - :code:`BaiChuanForCausalLM`
    - Baichuan
    - :code:`baichuan-inc/Baichuan2-13B-Chat`, :code:`baichuan-inc/Baichuan-7B`, etc.
  * - :code:`BloomForCausalLM`
    - BLOOM, BLOOMZ, BLOOMChat
    - :code:`bigscience/bloom`, :code:`bigscience/bloomz`, etc.
  * - :code:`ChatGLMModel`
    - ChatGLM
    - :code:`THUDM/chatglm2-6b`, :code:`THUDM/chatglm3-6b`, etc.
  * - :code:`FalconForCausalLM`
    - Falcon
    - :code:`tiiuae/falcon-7b`, :code:`tiiuae/falcon-40b`, :code:`tiiuae/falcon-rw-7b`, etc.
  * - :code:`GPT2LMHeadModel`
    - GPT-2
    - :code:`gpt2`, :code:`gpt2-xl`, etc.
  * - :code:`GPTBigCodeForCausalLM`
    - StarCoder, SantaCoder, WizardCoder
    - :code:`bigcode/starcoder`, :code:`bigcode/gpt_bigcode-santacoder`, :code:`WizardLM/WizardCoder-15B-V1.0`, etc.
  * - :code:`GPTJForCausalLM`
    - GPT-J
    - :code:`EleutherAI/gpt-j-6b`, :code:`nomic-ai/gpt4all-j`, etc.
  * - :code:`GPTNeoXForCausalLM`
    - GPT-NeoX, Pythia, OpenAssistant, Dolly V2, StableLM
    - :code:`EleutherAI/gpt-neox-20b`, :code:`EleutherAI/pythia-12b`, :code:`OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5`, :code:`databricks/dolly-v2-12b`, :code:`stabilityai/stablelm-tuned-alpha-7b`, etc.
  * - :code:`InternLMForCausalLM`
    - InternLM
    - :code:`internlm/internlm-7b`, :code:`internlm/internlm-chat-7b`, etc.
  * - :code:`LlamaForCausalLM`
    - LLaMA, LLaMA-2, Vicuna, Alpaca, Koala, Guanaco
    - :code:`meta-llama/Llama-2-13b-hf`, :code:`meta-llama/Llama-2-70b-hf`, :code:`openlm-research/open_llama_13b`, :code:`lmsys/vicuna-13b-v1.3`, :code:`young-geng/koala`, etc.
  * - :code:`MistralForCausalLM`
    - Mistral, Mistral-Instruct
    - :code:`mistralai/Mistral-7B-v0.1`, :code:`mistralai/Mistral-7B-Instruct-v0.1`, etc.
  * - :code:`MixtralForCausalLM`
    - Mixtral-8x7B, Mixtral-8x7B-Instruct
    - :code:`mistralai/Mixtral-8x7B-v0.1`, :code:`mistralai/Mixtral-8x7B-Instruct-v0.1`, etc.
  * - :code:`MPTForCausalLM`
    - MPT, MPT-Instruct, MPT-Chat, MPT-StoryWriter
    - :code:`mosaicml/mpt-7b`, :code:`mosaicml/mpt-7b-storywriter`, :code:`mosaicml/mpt-30b`, etc.
  * - :code:`OPTForCausalLM`
    - OPT, OPT-IML
    - :code:`facebook/opt-66b`, :code:`facebook/opt-iml-max-30b`, etc.
  * - :code:`PhiForCausalLM`
    - Phi
    - :code:`microsoft/phi-1_5`, :code:`microsoft/phi-2`, etc.
  * - :code:`QWenLMHeadModel`
    - Qwen
    - :code:`Qwen/Qwen-7B`, :code:`Qwen/Qwen-7B-Chat`, etc.
  * - :code:`YiForCausalLM`
    - Yi
    - :code:`01-ai/Yi-6B`, :code:`01-ai/Yi-34B`, etc.

If your model uses one of the above model architectures, you can seamlessly run your model with vLLM.
Otherwise, please refer to :ref:`Adding a New Model <adding_a_new_model>` for instructions on how to implement support for your model.
Alternatively, you can raise an issue on our `GitHub <https://github.com/vllm-project/vllm/issues>`_ project.
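
For example, here is a minimal sketch that runs :code:`gpt2` from the table above; any other listed checkpoint can be substituted for the model name:

.. code-block:: python

    from vllm import LLM, SamplingParams

    # "gpt2" appears in the table above; any other supported checkpoint
    # works the same way.
    llm = LLM(model="gpt2")
    sampling_params = SamplingParams(temperature=0.8, max_tokens=32)
    outputs = llm.generate(["Hello, my name is"], sampling_params)
    for output in outputs:
        print(output.outputs[0].text)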

.. note::
    Currently, the ROCm version of vLLM supports Mistral and Mixtral only for context lengths up to 4096.
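
    A minimal sketch that enforces this limit via the :code:`max_model_len` engine argument:

    .. code-block:: python

        from vllm import LLM

        # On ROCm, keep the Mistral/Mixtral context length at or below 4096.
        llm = LLM(model="mistralai/Mistral-7B-v0.1", max_model_len=4096)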

.. tip::
    The easiest way to check if your model is supported is to run the program below:

    .. code-block:: python

        from vllm import LLM

        llm = LLM(model=...)  # Name or path of your model
        output = llm.generate("Hello, my name is")
        print(output)

    If vLLM successfully generates text, it indicates that your model is supported.
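    If it is not, vLLM fails at initialization with an error naming the unsupported architecture.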

.. tip::
    To use models from `ModelScope <https://www.modelscope.cn>`_ instead of the HuggingFace Hub, set an environment variable:

    .. code-block:: shell

       $ export VLLM_USE_MODELSCOPE=True

    Then, pass :code:`trust_remote_code=True` when loading the model:

    .. code-block:: python

        from vllm import LLM

        llm = LLM(model=..., revision=..., trust_remote_code=True)  # Name or path of your model
        output = llm.generate("Hello, my name is")
        print(output)
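
    With :code:`VLLM_USE_MODELSCOPE` set, the model name is resolved against ModelScope rather than the HuggingFace Hub, so pass a ModelScope model ID (or a local path) as the :code:`model` argument.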