.. _supported_models:

Supported Models
================

vLLM supports a variety of generative Transformer models in `HuggingFace Transformers <https://huggingface.co/models>`_.
The following is the list of model architectures that are currently supported by vLLM.
Alongside each architecture, we include some popular models that use it.

.. list-table::
  :widths: 25 25 50
  :header-rows: 1

  * - Architecture
    - Models
    - Example HuggingFace Models
  * - :code:`AquilaForCausalLM`
    - Aquila
    - :code:`BAAI/Aquila-7B`, :code:`BAAI/AquilaChat-7B`, etc.
  * - :code:`BaiChuanForCausalLM`
    - Baichuan
    - :code:`baichuan-inc/Baichuan-7B`, :code:`baichuan-inc/Baichuan-13B-Chat`, etc.
  * - :code:`BloomForCausalLM`
    - BLOOM, BLOOMZ, BLOOMChat
    - :code:`bigscience/bloom`, :code:`bigscience/bloomz`, etc.
  * - :code:`FalconForCausalLM`
    - Falcon
    - :code:`tiiuae/falcon-7b`, :code:`tiiuae/falcon-40b`, :code:`tiiuae/falcon-rw-7b`, etc.
  * - :code:`GPT2LMHeadModel`
    - GPT-2
    - :code:`gpt2`, :code:`gpt2-xl`, etc.
  * - :code:`GPTBigCodeForCausalLM`
    - StarCoder, SantaCoder, WizardCoder
    - :code:`bigcode/starcoder`, :code:`bigcode/gpt_bigcode-santacoder`, :code:`WizardLM/WizardCoder-15B-V1.0`, etc.
  * - :code:`GPTJForCausalLM`
    - GPT-J
    - :code:`EleutherAI/gpt-j-6b`, :code:`nomic-ai/gpt4all-j`, etc.
  * - :code:`GPTNeoXForCausalLM`
    - GPT-NeoX, Pythia, OpenAssistant, Dolly V2, StableLM
    - :code:`EleutherAI/gpt-neox-20b`, :code:`EleutherAI/pythia-12b`, :code:`OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5`, :code:`databricks/dolly-v2-12b`, :code:`stabilityai/stablelm-tuned-alpha-7b`, etc.
  * - :code:`InternLMForCausalLM`
    - InternLM
    - :code:`internlm/internlm-7b`, :code:`internlm/internlm-chat-7b`, etc.
  * - :code:`LlamaForCausalLM`
    - LLaMA, LLaMA-2, Vicuna, Alpaca, Koala, Guanaco
    - :code:`meta-llama/Llama-2-13b-hf`, :code:`meta-llama/Llama-2-70b-hf`, :code:`openlm-research/open_llama_13b`, :code:`lmsys/vicuna-13b-v1.3`, :code:`young-geng/koala`, etc.
  * - :code:`MistralForCausalLM`
    - Mistral, Mistral-Instruct
    - :code:`mistralai/Mistral-7B-v0.1`, :code:`mistralai/Mistral-7B-Instruct-v0.1`, etc.
  * - :code:`MPTForCausalLM`
    - MPT, MPT-Instruct, MPT-Chat, MPT-StoryWriter
    - :code:`mosaicml/mpt-7b`, :code:`mosaicml/mpt-7b-storywriter`, :code:`mosaicml/mpt-30b`, etc.
  * - :code:`OPTForCausalLM`
    - OPT, OPT-IML
    - :code:`facebook/opt-66b`, :code:`facebook/opt-iml-max-30b`, etc.
  * - :code:`QWenLMHeadModel`
    - Qwen
    - :code:`Qwen/Qwen-7B`, :code:`Qwen/Qwen-7B-Chat`, etc.
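
If you are unsure which architecture a model uses, you can read it from the model's HuggingFace config without downloading any weights. The following is a minimal sketch (assuming :code:`transformers` is installed):

.. code-block:: python

    from transformers import AutoConfig

    # Fetch only the model's config.json (no weights) and read the
    # architecture name, which is what vLLM matches against the table above.
    config = AutoConfig.from_pretrained("gpt2")
    print(config.architectures)  # ['GPT2LMHeadModel']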

If your model uses one of the above model architectures, you can seamlessly run your model with vLLM.
Otherwise, please refer to :ref:`Adding a New Model <adding_a_new_model>` for instructions on how to implement support for your model.
Alternatively, you can raise an issue on our `GitHub <https://github.com/vllm-project/vllm/issues>`_ project.
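
Some models in the table, such as Baichuan and Qwen, ship their tokenizer and configuration code as custom files on the HuggingFace Hub rather than inside :code:`transformers`. For such models, pass :code:`trust_remote_code=True` when constructing the engine. A minimal sketch (whether a given checkpoint actually needs the flag depends on its repository):

.. code-block:: python

    from vllm import LLM

    # Qwen's tokenizer and config are defined in custom code on the Hub,
    # so loading them requires explicitly trusting remote code.
    llm = LLM(model="Qwen/Qwen-7B", trust_remote_code=True)
    print(llm.generate("Hello, my name is"))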

.. tip::
    The easiest way to check if your model is supported is to run the program below:

    .. code-block:: python

        from vllm import LLM

        llm = LLM(model=...)  # Name or path of your model
        output = llm.generate("Hello, my name is")
        print(output)
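
    Each element of :code:`output` is a :code:`RequestOutput` object; the generated string itself can be read from, e.g., :code:`output[0].outputs[0].text`.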

    If vLLM successfully generates text, it indicates that your model is supported.