(supported-models)=

# Supported Models

vLLM supports generative and pooling models across various tasks.
If a model supports more than one task, you can set the task via the `--task` argument.

For each task, we list the model architectures that have been implemented in vLLM.
Alongside each architecture, we include some popular models that use it.
## Model Implementation

### vLLM

If vLLM natively supports a model, its implementation can be found in <gh-file:vllm/model_executor/models>.

These models are what we list in <project:#supported-text-models> and <project:#supported-mm-models>.
(transformers-backend)=

### Transformers

vLLM also supports model implementations that are available in Transformers. This does not currently work for all models, but most decoder language models are supported, and vision-language model support is planned!

To check if the modeling backend is Transformers, you can simply do this:

```python
from vllm import LLM
llm = LLM(model=..., task="generate")  # Name or path of your model
llm.apply_model(lambda model: print(type(model)))
```

If it is `TransformersForCausalLM` then it means it's based on Transformers!
:::{tip}
You can force the use of `TransformersForCausalLM` by setting `model_impl="transformers"` for <project:#offline-inference> or `--model-impl transformers` for the <project:#openai-compatible-server>.
:::
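For example, here is a minimal sketch of forcing the Transformers implementation during offline inference (the checkpoint name is just an illustrative choice):

```python
from vllm import LLM

# Force the Transformers modeling backend instead of vLLM's native implementation.
llm = LLM(
    model="meta-llama/Llama-3.2-1B-Instruct",  # any supported decoder language model
    model_impl="transformers",
)

# Should print a Transformers-backed class such as TransformersForCausalLM.
llm.apply_model(lambda model: print(type(model)))
```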
:::{note}
vLLM may not fully optimise the Transformers implementation, so you may see degraded performance when comparing a model run natively against the same model run through the Transformers backend in vLLM.
:::
#### Supported features

The Transformers modeling backend explicitly supports the following features:

- Quantization (except GGUF)
- LoRA adapters
- Tensor parallelism and pipeline parallelism
#### Remote Code

If your model is supported natively by neither vLLM nor Transformers, you can still run it in vLLM!

Simply set `trust_remote_code=True` and vLLM will run any model on the Model Hub that is compatible with Transformers.
Provided that the model writer implements their model in a compatible way, this means that you can run new models before they are officially supported in Transformers or vLLM!

:::{tip}
If you have not yet created your custom model, you can follow this guide on customising models in Transformers.
:::

```python
from vllm import LLM
llm = LLM(model=..., task="generate", trust_remote_code=True)  # Name or path of your model
llm.apply_model(lambda model: print(model.__class__))
```
To make your model compatible with the Transformers backend, it needs:

```{code-block} python
:caption: modeling_my_model.py

from transformers import PreTrainedModel
from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS
from torch import nn

class MyAttention(nn.Module):

    def forward(self, hidden_states, **kwargs):  # <- kwargs are required
        ...
        attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
        attn_output, attn_weights = attention_interface(
            self,
            query_states,
            key_states,
            value_states,
            **kwargs,
        )
        ...

class MyModel(PreTrainedModel):
    _supports_attention_backend = True
```
Here is what happens in the background:

- The config is loaded.
- The `MyModel` Python class is loaded from the `auto_map`, and we check that the model `_supports_attention_backend`.
- The `TransformersForCausalLM` backend is used. See <gh-file:vllm/model_executor/models/transformers.py>, which leverages `self.config._attn_implementation = "vllm"`, thus the need to use `ALL_ATTENTION_FUNCTIONS`.
That's it!
For your model to be compatible with vLLM's tensor parallel and/or pipeline parallel features, you must add `base_model_tp_plan` and/or `base_model_pp_plan` to your model's config class:

```{code-block} python
:caption: configuration_my_model.py

from transformers import PretrainedConfig

class MyConfig(PretrainedConfig):
    base_model_tp_plan = {
        "layers.*.self_attn.k_proj": "colwise",
        "layers.*.self_attn.v_proj": "colwise",
        "layers.*.self_attn.o_proj": "rowwise",
        "layers.*.mlp.gate_proj": "colwise",
        "layers.*.mlp.up_proj": "colwise",
        "layers.*.mlp.down_proj": "rowwise",
    }
    base_model_pp_plan = {
        "embed_tokens": (["input_ids"], ["inputs_embeds"]),
        "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
        "norm": (["hidden_states"], ["hidden_states"]),
    }
```
- `base_model_tp_plan` is a `dict` that maps fully qualified layer name patterns to tensor parallel styles (currently only `"colwise"` and `"rowwise"` are supported).
- `base_model_pp_plan` is a `dict` that maps direct child layer names to `tuple`s of `list`s of `str`s:
  - You only need to do this for layers which are not present on all pipeline stages
  - vLLM assumes that there will be only one `nn.ModuleList`, which is distributed across the pipeline stages
  - The `list` in the first element of the `tuple` contains the names of the input arguments
  - The `list` in the last element of the `tuple` contains the names of the variables the layer outputs to in your modeling code
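As a rough sketch of how this ties together, a model whose config defines these plans can then be sharded through the Transformers backend like any natively supported model (the repository name below is a hypothetical placeholder):

```python
from vllm import LLM

# Sketch: serve a custom Transformers-compatible model with tensor parallelism.
# Assumes its config class defines base_model_tp_plan as shown above.
llm = LLM(
    model="my-org/my-custom-model",  # hypothetical model repository
    model_impl="transformers",
    trust_remote_code=True,
    tensor_parallel_size=2,  # shard "colwise"/"rowwise" layers across 2 GPUs
)
print(llm.generate("Hello, my name is"))
```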
## Loading a Model

### Hugging Face Hub

By default, vLLM loads models from [Hugging Face (HF) Hub](https://huggingface.co/models).

To determine whether a given model is natively supported, you can check the `config.json` file inside the HF repository.
If the `"architectures"` field contains a model architecture listed below, then it should be natively supported.
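For example, you can inspect this field programmatically with the Transformers `AutoConfig` API (the model name below is just one of the examples from the tables that follow):

```python
from transformers import AutoConfig

# Read config.json from the HF Hub and print the declared architectures.
config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
print(config.architectures)  # e.g. ['LlamaForCausalLM']
```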
Models do not need to be natively supported to be used in vLLM. The Transformers backend enables you to run models directly using their Transformers implementation (or even remote code on the Hugging Face Model Hub!).
:::{tip}
The easiest way to check if your model is really supported at runtime is to run the program below:

```python
from vllm import LLM

# For generative models (task=generate) only
llm = LLM(model=..., task="generate")  # Name or path of your model
output = llm.generate("Hello, my name is")
print(output)

# For pooling models (task={embed,classify,reward,score}) only
llm = LLM(model=..., task="embed")  # Name or path of your model
output = llm.encode("Hello, my name is")
print(output)
```

If vLLM successfully returns text (for generative models) or hidden states (for pooling models), it indicates that your model is supported.
:::
Otherwise, please refer to Adding a New Model for instructions on how to implement your model in vLLM. Alternatively, you can open an issue on GitHub to request vLLM support.
#### Using a proxy

Here are some tips for loading/downloading models from Hugging Face using a proxy:

- Set the proxy globally for your session (or set it in the profile file):

```shell
export http_proxy=http://your.proxy.server:port
export https_proxy=http://your.proxy.server:port
```

- Set the proxy for just the current command:

```shell
https_proxy=http://your.proxy.server:port huggingface-cli download <model_name>

# or use vllm cmd directly
https_proxy=http://your.proxy.server:port vllm serve <model_name> --disable-log-requests
```

- Set the proxy in the Python interpreter:

```python
import os

os.environ['http_proxy'] = 'http://your.proxy.server:port'
os.environ['https_proxy'] = 'http://your.proxy.server:port'
```
### ModelScope

To use models from [ModelScope](https://www.modelscope.cn) instead of Hugging Face Hub, set an environment variable:

```shell
export VLLM_USE_MODELSCOPE=True
```

And use with `trust_remote_code=True`.

```python
from vllm import LLM

llm = LLM(model=..., revision=..., task=..., trust_remote_code=True)

# For generative models (task=generate) only
output = llm.generate("Hello, my name is")
print(output)

# For pooling models (task={embed,classify,reward,score}) only
output = llm.encode("Hello, my name is")
print(output)
```
(supported-text-models)=

## List of Text-only Language Models

### Generative Models

See this page for more information on how to use generative models.

#### Text Generation (`--task generate`)
:::{list-table}
:widths: 25 25 50 5 5
:header-rows: 1

- * Architecture
  * Models
  * Example HF Models
  * LoRA
  * PP
- * `AquilaForCausalLM`
  * Aquila, Aquila2
  * `BAAI/Aquila-7B`, `BAAI/AquilaChat-7B`, etc.
  * ✅︎
  * ✅︎
- * `ArcticForCausalLM`
  * Arctic
  * `Snowflake/snowflake-arctic-base`, `Snowflake/snowflake-arctic-instruct`, etc.
  *
  * ✅︎
- * `BaiChuanForCausalLM`
  * Baichuan2, Baichuan
  * `baichuan-inc/Baichuan2-13B-Chat`, `baichuan-inc/Baichuan-7B`, etc.
  * ✅︎
  * ✅︎
- * `BambaForCausalLM`
  * Bamba
  * `ibm-ai-platform/Bamba-9B-fp8`, `ibm-ai-platform/Bamba-9B`
  *
  *
- * `BloomForCausalLM`
  * BLOOM, BLOOMZ, BLOOMChat
  * `bigscience/bloom`, `bigscience/bloomz`, etc.
  *
  * ✅︎
- * `BartForConditionalGeneration`
  * BART
  * `facebook/bart-base`, `facebook/bart-large-cnn`, etc.
  *
  *
- * `ChatGLMModel`, `ChatGLMForConditionalGeneration`
  * ChatGLM
  * `THUDM/chatglm2-6b`, `THUDM/chatglm3-6b`, `ShieldLM-6B-chatglm3`, etc.
  * ✅︎
  * ✅︎
- * `CohereForCausalLM`, `Cohere2ForCausalLM`
  * Command-R
  * `CohereForAI/c4ai-command-r-v01`, `CohereForAI/c4ai-command-r7b-12-2024`, etc.
  * ✅︎
  * ✅︎
- * `DbrxForCausalLM`
  * DBRX
  * `databricks/dbrx-base`, `databricks/dbrx-instruct`, etc.
  *
  * ✅︎
- * `DeciLMForCausalLM`
  * DeciLM
  * `nvidia/Llama-3_3-Nemotron-Super-49B-v1`, etc.
  *
  * ✅︎
- * `DeepseekForCausalLM`
  * DeepSeek
  * `deepseek-ai/deepseek-llm-67b-base`, `deepseek-ai/deepseek-llm-7b-chat`, etc.
  *
  * ✅︎
- * `DeepseekV2ForCausalLM`
  * DeepSeek-V2
  * `deepseek-ai/DeepSeek-V2`, `deepseek-ai/DeepSeek-V2-Chat`, etc.
  *
  * ✅︎
- * `DeepseekV3ForCausalLM`
  * DeepSeek-V3
  * `deepseek-ai/DeepSeek-V3-Base`, `deepseek-ai/DeepSeek-V3`, etc.
  *
  * ✅︎
- * `ExaoneForCausalLM`
  * EXAONE-3
  * `LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct`, etc.
  * ✅︎
  * ✅︎
- * `FalconForCausalLM`
  * Falcon
  * `tiiuae/falcon-7b`, `tiiuae/falcon-40b`, `tiiuae/falcon-rw-7b`, etc.
  *
  * ✅︎
- * `FalconMambaForCausalLM`
  * FalconMamba
  * `tiiuae/falcon-mamba-7b`, `tiiuae/falcon-mamba-7b-instruct`, etc.
  * ✅︎
  * ✅︎
- * `GemmaForCausalLM`
  * Gemma
  * `google/gemma-2b`, `google/gemma-7b`, etc.
  * ✅︎
  * ✅︎
- * `Gemma2ForCausalLM`
  * Gemma 2
  * `google/gemma-2-9b`, `google/gemma-2-27b`, etc.
  * ✅︎
  * ✅︎
- * `Gemma3ForCausalLM`
  * Gemma 3
  * `google/gemma-3-1b-it`, etc.
  * ✅︎
  * ✅︎
- * `GlmForCausalLM`
  * GLM-4
  * `THUDM/glm-4-9b-chat-hf`, etc.
  * ✅︎
  * ✅︎
- * `Glm4ForCausalLM`
  * GLM-4-0414
  * `THUDM/GLM-4-32B-Chat-0414`, etc.
  * ✅︎
  * ✅︎
- * `GPT2LMHeadModel`
  * GPT-2
  * `gpt2`, `gpt2-xl`, etc.
  *
  * ✅︎
- * `GPTBigCodeForCausalLM`
  * StarCoder, SantaCoder, WizardCoder
  * `bigcode/starcoder`, `bigcode/gpt_bigcode-santacoder`, `WizardLM/WizardCoder-15B-V1.0`, etc.
  * ✅︎
  * ✅︎
- * `GPTJForCausalLM`
  * GPT-J
  * `EleutherAI/gpt-j-6b`, `nomic-ai/gpt4all-j`, etc.
  *
  * ✅︎
- * `GPTNeoXForCausalLM`
  * GPT-NeoX, Pythia, OpenAssistant, Dolly V2, StableLM
  * `EleutherAI/gpt-neox-20b`, `EleutherAI/pythia-12b`, `OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5`, `databricks/dolly-v2-12b`, `stabilityai/stablelm-tuned-alpha-7b`, etc.
  *
  * ✅︎
- * `GraniteForCausalLM`
  * Granite 3.0, Granite 3.1, PowerLM
  * `ibm-granite/granite-3.0-2b-base`, `ibm-granite/granite-3.1-8b-instruct`, `ibm/PowerLM-3b`, etc.
  * ✅︎
  * ✅︎
- * `GraniteMoeForCausalLM`
  * Granite 3.0 MoE, PowerMoE
  * `ibm-granite/granite-3.0-1b-a400m-base`, `ibm-granite/granite-3.0-3b-a800m-instruct`, `ibm/PowerMoE-3b`, etc.
  * ✅︎
  * ✅︎
- * `GraniteMoeSharedForCausalLM`
  * Granite MoE Shared
  * `ibm-research/moe-7b-1b-active-shared-experts` (test model)
  * ✅︎
  * ✅︎
- * `GritLM`
  * GritLM
  * `parasail-ai/GritLM-7B-vllm`
  * ✅︎
  * ✅︎
- * `Grok1ModelForCausalLM`
  * Grok1
  * `hpcai-tech/grok-1`
  * ✅︎
  * ✅︎
- * `InternLMForCausalLM`
  * InternLM
  * `internlm/internlm-7b`, `internlm/internlm-chat-7b`, etc.
  * ✅︎
  * ✅︎
- * `InternLM2ForCausalLM`
  * InternLM2
  * `internlm/internlm2-7b`, `internlm/internlm2-chat-7b`, etc.
  * ✅︎
  * ✅︎
- * `InternLM3ForCausalLM`
  * InternLM3
  * `internlm/internlm3-8b-instruct`, etc.
  * ✅︎
  * ✅︎
- * `JAISLMHeadModel`
  * Jais
  * `inceptionai/jais-13b`, `inceptionai/jais-13b-chat`, `inceptionai/jais-30b-v3`, `inceptionai/jais-30b-chat-v3`, etc.
  *
  * ✅︎
- * `JambaForCausalLM`
  * Jamba
  * `ai21labs/AI21-Jamba-1.5-Large`, `ai21labs/AI21-Jamba-1.5-Mini`, `ai21labs/Jamba-v0.1`, etc.
  * ✅︎
  * ✅︎
- * `LlamaForCausalLM`
  * Llama 3.1, Llama 3, Llama 2, LLaMA, Yi
  * `meta-llama/Meta-Llama-3.1-405B-Instruct`, `meta-llama/Meta-Llama-3.1-70B`, `meta-llama/Meta-Llama-3-70B-Instruct`, `meta-llama/Llama-2-70b-hf`, `01-ai/Yi-34B`, etc.
  * ✅︎
  * ✅︎
- * `MambaForCausalLM`
  * Mamba
  * `state-spaces/mamba-130m-hf`, `state-spaces/mamba-790m-hf`, `state-spaces/mamba-2.8b-hf`, etc.
  *
  * ✅︎
- * `MiniCPMForCausalLM`
  * MiniCPM
  * `openbmb/MiniCPM-2B-sft-bf16`, `openbmb/MiniCPM-2B-dpo-bf16`, `openbmb/MiniCPM-S-1B-sft`, etc.
  * ✅︎
  * ✅︎
- * `MiniCPM3ForCausalLM`
  * MiniCPM3
  * `openbmb/MiniCPM3-4B`, etc.
  * ✅︎
  * ✅︎
- * `MistralForCausalLM`
  * Mistral, Mistral-Instruct
  * `mistralai/Mistral-7B-v0.1`, `mistralai/Mistral-7B-Instruct-v0.1`, etc.
  * ✅︎
  * ✅︎
- * `MixtralForCausalLM`
  * Mixtral-8x7B, Mixtral-8x7B-Instruct
  * `mistralai/Mixtral-8x7B-v0.1`, `mistralai/Mixtral-8x7B-Instruct-v0.1`, `mistral-community/Mixtral-8x22B-v0.1`, etc.
  * ✅︎
  * ✅︎
- * `MPTForCausalLM`
  * MPT, MPT-Instruct, MPT-Chat, MPT-StoryWriter
  * `mosaicml/mpt-7b`, `mosaicml/mpt-7b-storywriter`, `mosaicml/mpt-30b`, etc.
  *
  * ✅︎
- * `NemotronForCausalLM`
  * Nemotron-3, Nemotron-4, Minitron
  * `nvidia/Minitron-8B-Base`, `mgoin/Nemotron-4-340B-Base-hf-FP8`, etc.
  * ✅︎
  * ✅︎
- * `OLMoForCausalLM`
  * OLMo
  * `allenai/OLMo-1B-hf`, `allenai/OLMo-7B-hf`, etc.
  *
  * ✅︎
- * `OLMo2ForCausalLM`
  * OLMo2
  * `allenai/OLMo2-7B-1124`, etc.
  *
  * ✅︎
- * `OLMoEForCausalLM`
  * OLMoE
  * `allenai/OLMoE-1B-7B-0924`, `allenai/OLMoE-1B-7B-0924-Instruct`, etc.
  * ✅︎
  * ✅︎
- * `OPTForCausalLM`
  * OPT, OPT-IML
  * `facebook/opt-66b`, `facebook/opt-iml-max-30b`, etc.
  *
  * ✅︎
- * `OrionForCausalLM`
  * Orion
  * `OrionStarAI/Orion-14B-Base`, `OrionStarAI/Orion-14B-Chat`, etc.
  *
  * ✅︎
- * `PhiForCausalLM`
  * Phi
  * `microsoft/phi-1_5`, `microsoft/phi-2`, etc.
  * ✅︎
  * ✅︎
- * `Phi3ForCausalLM`
  * Phi-4, Phi-3
  * `microsoft/Phi-4-mini-instruct`, `microsoft/Phi-4`, `microsoft/Phi-3-mini-4k-instruct`, `microsoft/Phi-3-mini-128k-instruct`, `microsoft/Phi-3-medium-128k-instruct`, etc.
  * ✅︎
  * ✅︎
- * `Phi3SmallForCausalLM`
  * Phi-3-Small
  * `microsoft/Phi-3-small-8k-instruct`, `microsoft/Phi-3-small-128k-instruct`, etc.
  *
  * ✅︎
- * `PhiMoEForCausalLM`
  * Phi-3.5-MoE
  * `microsoft/Phi-3.5-MoE-instruct`, etc.
  * ✅︎
  * ✅︎
- * `PersimmonForCausalLM`
  * Persimmon
  * `adept/persimmon-8b-base`, `adept/persimmon-8b-chat`, etc.
  *
  * ✅︎
- * `Plamo2ForCausalLM`
  * PLaMo2
  * `pfnet/plamo-2-1b`, `pfnet/plamo-2-8b`, etc.
  *
  *
- * `QWenLMHeadModel`
  * Qwen
  * `Qwen/Qwen-7B`, `Qwen/Qwen-7B-Chat`, etc.
  * ✅︎
  * ✅︎
- * `Qwen2ForCausalLM`
  * QwQ, Qwen2
  * `Qwen/QwQ-32B-Preview`, `Qwen/Qwen2-7B-Instruct`, `Qwen/Qwen2-7B`, etc.
  * ✅︎
  * ✅︎
- * `Qwen2MoeForCausalLM`
  * Qwen2MoE
  * `Qwen/Qwen1.5-MoE-A2.7B`, `Qwen/Qwen1.5-MoE-A2.7B-Chat`, etc.
  *
  * ✅︎
- * `Qwen3ForCausalLM`
  * Qwen3
  * `Qwen/Qwen3-8B`, etc.
  * ✅︎
  * ✅︎
- * `Qwen3MoeForCausalLM`
  * Qwen3MoE
  * `Qwen/Qwen3-MoE-15B-A2B`, etc.
  * ✅︎
  * ✅︎
- * `StableLmForCausalLM`
  * StableLM
  * `stabilityai/stablelm-3b-4e1t`, `stabilityai/stablelm-base-alpha-7b-v2`, etc.
  *
  * ✅︎
- * `Starcoder2ForCausalLM`
  * Starcoder2
  * `bigcode/starcoder2-3b`, `bigcode/starcoder2-7b`, `bigcode/starcoder2-15b`, etc.
  *
  * ✅︎
- * `SolarForCausalLM`
  * Solar Pro
  * `upstage/solar-pro-preview-instruct`, etc.
  * ✅︎
  * ✅︎
- * `TeleChat2ForCausalLM`
  * TeleChat2
  * `Tele-AI/TeleChat2-3B`, `Tele-AI/TeleChat2-7B`, `Tele-AI/TeleChat2-35B`, etc.
  * ✅︎
  * ✅︎
- * `TeleFLMForCausalLM`
  * TeleFLM
  * `CofeAI/FLM-2-52B-Instruct-2407`, `CofeAI/Tele-FLM`, etc.
  * ✅︎
  * ✅︎
- * `XverseForCausalLM`
  * XVERSE
  * `xverse/XVERSE-7B-Chat`, `xverse/XVERSE-13B-Chat`, `xverse/XVERSE-65B-Chat`, etc.
  * ✅︎
  * ✅︎
- * `MiniMaxText01ForCausalLM`
  * MiniMax-Text
  * `MiniMaxAI/MiniMax-Text-01`, etc.
  *
  * ✅︎
- * `Zamba2ForCausalLM`
  * Zamba2
  * `Zyphra/Zamba2-7B-instruct`, `Zyphra/Zamba2-2.7B-instruct`, `Zyphra/Zamba2-1.2B-instruct`, etc.
  *
  *
:::
:::{note}
Currently, the ROCm version of vLLM supports Mistral and Mixtral only for context lengths up to 4096.
:::
### Pooling Models

See this page for more information on how to use pooling models.

:::{important}
Since some model architectures support both generative and pooling tasks, you should explicitly specify the task type to ensure that the model is used in pooling mode instead of generative mode.
:::

#### Text Embedding (`--task embed`)
:::{list-table}
:widths: 25 25 50 5 5
:header-rows: 1

- * Architecture
  * Models
  * Example HF Models
  * LoRA
  * PP
- * `BertModel`
  * BERT-based
  * `BAAI/bge-base-en-v1.5`, etc.
  *
  *
- * `Gemma2Model`
  * Gemma 2-based
  * `BAAI/bge-multilingual-gemma2`, etc.
  *
  * ✅︎
- * `GritLM`
  * GritLM
  * `parasail-ai/GritLM-7B-vllm`
  * ✅︎
  * ✅︎
- * `LlamaModel`, `LlamaForCausalLM`, `MistralModel`, etc.
  * Llama-based
  * `intfloat/e5-mistral-7b-instruct`, etc.
  * ✅︎
  * ✅︎
- * `Qwen2Model`, `Qwen2ForCausalLM`
  * Qwen2-based
  * `ssmits/Qwen2-7B-Instruct-embed-base` (see note), `Alibaba-NLP/gte-Qwen2-7B-instruct` (see note), etc.
  * ✅︎
  * ✅︎
- * `RobertaModel`, `RobertaForMaskedLM`
  * RoBERTa-based
  * `sentence-transformers/all-roberta-large-v1`, etc.
  *
  *
- * `XLMRobertaModel`
  * XLM-RoBERTa-based
  * `intfloat/multilingual-e5-large`, `jinaai/jina-reranker-v2-base-multilingual`, etc.
  *
  *
:::
:::{note}
`ssmits/Qwen2-7B-Instruct-embed-base` has an improperly defined Sentence Transformers config.
You should manually set mean pooling by passing `--override-pooler-config '{"pooling_type": "MEAN"}'`.
:::

:::{note}
The HF implementation of `Alibaba-NLP/gte-Qwen2-1.5B-instruct` is hardcoded to use causal attention despite what is shown in `config.json`.
To compare vLLM vs HF results, you should set `--hf-overrides '{"is_causal": true}'` in vLLM so that the two implementations are consistent with each other.

For both the 1.5B and 7B variants, you also need to enable `--trust-remote-code` for the correct tokenizer to be loaded.
See the relevant issue on HF Transformers.
:::

If your model is not in the above list, we will try to automatically convert the model using
{func}`~vllm.model_executor.models.adapters.as_embedding_model`. By default, the embeddings
of the whole prompt are extracted from the normalized hidden state corresponding to the last token.
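For instance, here is a minimal sketch of offline embedding with one of the models listed above:

```python
from vllm import LLM

# Embed a couple of prompts with an embedding model from the table above.
llm = LLM(model="intfloat/e5-mistral-7b-instruct", task="embed")
outputs = llm.encode(["Hello, my name is", "The capital of France is"])
for output in outputs:
    print(output)  # pooled (normalized) hidden state for each prompt
```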
#### Reward Modeling (`--task reward`)
:::{list-table}
:widths: 25 25 50 5 5
:header-rows: 1

- * Architecture
  * Models
  * Example HF Models
  * LoRA
  * PP
- * `InternLM2ForRewardModel`
  * InternLM2-based
  * `internlm/internlm2-1_8b-reward`, `internlm/internlm2-7b-reward`, etc.
  * ✅︎
  * ✅︎
- * `LlamaForCausalLM`
  * Llama-based
  * `peiyi9979/math-shepherd-mistral-7b-prm`, etc.
  * ✅︎
  * ✅︎
- * `Qwen2ForRewardModel`
  * Qwen2-based
  * `Qwen/Qwen2.5-Math-RM-72B`, etc.
  * ✅︎
  * ✅︎
- * `Qwen2ForProcessRewardModel`
  * Qwen2-based
  * `Qwen/Qwen2.5-Math-PRM-7B`, `Qwen/Qwen2.5-Math-PRM-72B`, etc.
  * ✅︎
  * ✅︎
:::
If your model is not in the above list, we will try to automatically convert the model using
{func}`~vllm.model_executor.models.adapters.as_reward_model`. By default, we return the hidden states of each token directly.

:::{important}
For process-supervised reward models such as `peiyi9979/math-shepherd-mistral-7b-prm`, the pooling config should be set explicitly,
e.g.: `--override-pooler-config '{"pooling_type": "STEP", "step_tag_id": 123, "returned_token_ids": [456, 789]}'`.
:::
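For offline inference, the same override can be passed when constructing the LLM. Below is a rough sketch assuming the `override_pooler_config` engine argument and the `PoolerConfig` class; the token IDs are placeholders, exactly as in the CLI example above:

```python
from vllm import LLM
from vllm.config import PoolerConfig

# Sketch: explicit STEP pooling for a process-supervised reward model.
llm = LLM(
    model="peiyi9979/math-shepherd-mistral-7b-prm",
    task="reward",
    override_pooler_config=PoolerConfig(
        pooling_type="STEP",
        step_tag_id=123,                # placeholder: the model's step-tag token ID
        returned_token_ids=[456, 789],  # placeholder: token IDs whose scores are returned
    ),
)
```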
#### Classification (`--task classify`)
:::{list-table}
:widths: 25 25 50 5 5
:header-rows: 1

- * Architecture
  * Models
  * Example HF Models
  * LoRA
  * PP
- * `JambaForSequenceClassification`
  * Jamba
  * `ai21labs/Jamba-tiny-reward-dev`, etc.
  * ✅︎
  * ✅︎
- * `Qwen2ForSequenceClassification`
  * Qwen2-based
  * `jason9693/Qwen2.5-1.5B-apeach`, etc.
  * ✅︎
  * ✅︎
:::
If your model is not in the above list, we will try to automatically convert the model using
{func}`~vllm.model_executor.models.adapters.as_classification_model`. By default, the class probabilities are extracted from the softmaxed hidden state corresponding to the last token.
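As an illustration, here is a short sketch of offline classification, assuming the `LLM.classify()` helper is available in your vLLM version:

```python
from vllm import LLM

# Classify a prompt with a sequence classification model from the table above.
llm = LLM(model="jason9693/Qwen2.5-1.5B-apeach", task="classify")
(output,) = llm.classify("Hello, my name is")
print(output.outputs.probs)  # per-class probabilities from the softmaxed last hidden state
```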
#### Sentence Pair Scoring (`--task score`)
:::{list-table}
:widths: 25 25 50 5 5
:header-rows: 1

- * Architecture
  * Models
  * Example HF Models
  * LoRA
  * PP
- * `BertForSequenceClassification`
  * BERT-based
  * `cross-encoder/ms-marco-MiniLM-L-6-v2`, etc.
  *
  *
- * `RobertaForSequenceClassification`
  * RoBERTa-based
  * `cross-encoder/quora-roberta-base`, etc.
  *
  *
- * `XLMRobertaForSequenceClassification`
  * XLM-RoBERTa-based
  * `BAAI/bge-reranker-v2-m3`, etc.
  *
  *
- * `ModernBertForSequenceClassification`
  * ModernBert-based
  * `Alibaba-NLP/gte-reranker-modernbert-base`, etc.
  *
  *
:::
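As an illustration, here is a short sketch of offline scoring, assuming the `LLM.score()` helper is available in your vLLM version:

```python
from vllm import LLM

# Score a query/passage pair with a cross-encoder from the table above.
llm = LLM(model="cross-encoder/ms-marco-MiniLM-L-6-v2", task="score")
(output,) = llm.score("What is the capital of France?", "Paris is the capital of France.")
print(output.outputs.score)  # relevance score for the sentence pair
```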
(supported-mm-models)=

## List of Multimodal Language Models

The following modalities are supported depending on the model:

- **T**ext
- **I**mage
- **V**ideo
- **A**udio

Any combination of modalities joined by `+` are supported.

- e.g.: `T + I` means that the model supports text-only, image-only, and text-with-image inputs.

On the other hand, modalities separated by `/` are mutually exclusive.

- e.g.: `T / I` means that the model supports text-only and image-only inputs, but not text-with-image inputs.

See this page on how to pass multi-modal inputs to the model.
:::{important}
To enable multiple multi-modal items per text prompt in vLLM V0, you have to set `limit_mm_per_prompt` (offline inference)
or `--limit-mm-per-prompt` (online serving). For example, to enable passing up to 4 images per text prompt:

Offline inference:

```python
llm = LLM(
    model="Qwen/Qwen2-VL-7B-Instruct",
    limit_mm_per_prompt={"image": 4},
)
```

Online serving:

```bash
vllm serve Qwen/Qwen2-VL-7B-Instruct --limit-mm-per-prompt image=4
```

This is no longer required if you are using vLLM V1.
:::
:::{note}
vLLM currently only supports adding LoRA to the language backbone of multimodal models.
:::

### Generative Models

See this page for more information on how to use generative models.

#### Text Generation (`--task generate`)
:::{list-table}
:widths: 25 25 15 20 5 5 5
:header-rows: 1

- * Architecture
  * Models
  * Inputs
  * Example HF Models
  * LoRA
  * PP
  * V1
- * `AriaForConditionalGeneration`
  * Aria
  * T + I<sup>+</sup>
  * `rhymes-ai/Aria`
  *
  * ✅︎
  * ✅︎
- * `AyaVisionForConditionalGeneration`
  * Aya Vision
  * T + I<sup>+</sup>
  * `CohereForAI/aya-vision-8b`, `CohereForAI/aya-vision-32b`, etc.
  *
  * ✅︎
  * ✅︎
- * `Blip2ForConditionalGeneration`
  * BLIP-2
  * T + I<sup>E</sup>
  * `Salesforce/blip2-opt-2.7b`, `Salesforce/blip2-opt-6.7b`, etc.
  *
  * ✅︎
  * ✅︎
- * `ChameleonForConditionalGeneration`
  * Chameleon
  * T + I
  * `facebook/chameleon-7b`, etc.
  *
  * ✅︎
  * ✅︎
- * `DeepseekVLV2ForCausalLM`<sup>^</sup>
  * DeepSeek-VL2
  * T + I<sup>+</sup>
  * `deepseek-ai/deepseek-vl2-tiny`, `deepseek-ai/deepseek-vl2-small`, `deepseek-ai/deepseek-vl2`, etc.
  *
  * ✅︎
  * ✅︎
- * `Florence2ForConditionalGeneration`
  * Florence-2
  * T + I
  * `microsoft/Florence-2-base`, `microsoft/Florence-2-large`, etc.
  *
  *
  *
- * `FuyuForCausalLM`
  * Fuyu
  * T + I
  * `adept/fuyu-8b`, etc.
  *
  * ✅︎
  * ✅︎
- * `Gemma3ForConditionalGeneration`
  * Gemma 3
  * T + I<sup>+</sup>
  * `google/gemma-3-4b-it`, `google/gemma-3-27b-it`, etc.
  * ✅︎
  * ✅︎
  * ⚠️
- * `GLM4VForCausalLM`<sup>^</sup>
  * GLM-4V
  * T + I
  * `THUDM/glm-4v-9b`, `THUDM/cogagent-9b-20241220`, etc.
  * ✅︎
  * ✅︎
  * ✅︎
- * `H2OVLChatModel`
  * H2OVL
  * T + I<sup>E+</sup>
  * `h2oai/h2ovl-mississippi-800m`, `h2oai/h2ovl-mississippi-2b`, etc.
  *
  * ✅︎
  * ✅︎\*
- * `Idefics3ForConditionalGeneration`
  * Idefics3
  * T + I
  * `HuggingFaceM4/Idefics3-8B-Llama3`, etc.
  * ✅︎
  *
  * ✅︎
- * `InternVLChatModel`
  * InternVL 3.0, InternVideo 2.5, InternVL 2.5, Mono-InternVL, InternVL 2.0
  * T + I<sup>E+</sup>
  * `OpenGVLab/InternVL3-9B`, `OpenGVLab/InternVideo2_5_Chat_8B`, `OpenGVLab/InternVL2_5-4B`, `OpenGVLab/Mono-InternVL-2B`, `OpenGVLab/InternVL2-4B`, etc.
  *
  * ✅︎
  * ✅︎
- * `KimiVLForConditionalGeneration`
  * Kimi-VL-A3B-Instruct, Kimi-VL-A3B-Thinking
  * T + I<sup>+</sup>
  * `moonshotai/Kimi-VL-A3B-Instruct`, `moonshotai/Kimi-VL-A3B-Thinking`
  *
  *
  * ✅︎
- * `Llama4ForConditionalGeneration`
  * Llama 4
  * T + I<sup>+</sup>
  * `meta-llama/Llama-4-Scout-17B-16E-Instruct`, `meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8`, `meta-llama/Llama-4-Maverick-17B-128E-Instruct`, etc.
  *
  * ✅︎
  * ✅︎
- * `LlavaForConditionalGeneration`
  * LLaVA-1.5
  * T + I<sup>E+</sup>
  * `llava-hf/llava-1.5-7b-hf`, `TIGER-Lab/Mantis-8B-siglip-llama3` (see note), etc.
  *
  * ✅︎
  * ✅︎
- * `LlavaNextForConditionalGeneration`
  * LLaVA-NeXT
  * T + I<sup>E+</sup>
  * `llava-hf/llava-v1.6-mistral-7b-hf`, `llava-hf/llava-v1.6-vicuna-7b-hf`, etc.
  *
  * ✅︎
  * ✅︎
- * `LlavaNextVideoForConditionalGeneration`
  * LLaVA-NeXT-Video
  * T + V
  * `llava-hf/LLaVA-NeXT-Video-7B-hf`, etc.
  *
  * ✅︎
  * ✅︎
- * `LlavaOnevisionForConditionalGeneration`
  * LLaVA-Onevision
  * T + I<sup>+</sup> + V<sup>+</sup>
  * `llava-hf/llava-onevision-qwen2-7b-ov-hf`, `llava-hf/llava-onevision-qwen2-0.5b-ov-hf`, etc.
  *
  * ✅︎
  * ✅︎
- * `MiniCPMO`
  * MiniCPM-O
  * T + I<sup>E+</sup> + V<sup>E+</sup> + A<sup>E+</sup>
  * `openbmb/MiniCPM-o-2_6`, etc.
  * ✅︎
  * ✅︎
  * ✅︎
- * `MiniCPMV`
  * MiniCPM-V
  * T + I<sup>E+</sup> + V<sup>E+</sup>
  * `openbmb/MiniCPM-V-2` (see note), `openbmb/MiniCPM-Llama3-V-2_5`, `openbmb/MiniCPM-V-2_6`, etc.
  * ✅︎
  * ✅︎
  * ✅︎
- * `Mistral3ForConditionalGeneration`
  * Mistral3
  * T + I<sup>+</sup>
  * `mistralai/Mistral-Small-3.1-24B-Instruct-2503`, etc.
  *
  * ✅︎
  * ✅︎
- * `MllamaForConditionalGeneration`
  * Llama 3.2
  * T + I<sup>+</sup>
  * `meta-llama/Llama-3.2-90B-Vision-Instruct`, `meta-llama/Llama-3.2-11B-Vision`, etc.
  *
  *
  *
- * `MolmoForCausalLM`
  * Molmo
  * T + I<sup>+</sup>
  * `allenai/Molmo-7B-D-0924`, `allenai/Molmo-7B-O-0924`, etc.
  * ✅︎
  * ✅︎
  * ✅︎
- * `NVLM_D_Model`
  * NVLM-D 1.0
  * T + I<sup>+</sup>
  * `nvidia/NVLM-D-72B`, etc.
  *
  * ✅︎
  * ✅︎
- * `PaliGemmaForConditionalGeneration`
  * PaliGemma, PaliGemma 2
  * T + I<sup>E</sup>
  * `google/paligemma-3b-pt-224`, `google/paligemma-3b-mix-224`, `google/paligemma2-3b-ft-docci-448`, etc.
  *
  * ✅︎
  * ⚠️
- * `Phi3VForCausalLM`
  * Phi-3-Vision, Phi-3.5-Vision
  * T + I<sup>E+</sup>
  * `microsoft/Phi-3-vision-128k-instruct`, `microsoft/Phi-3.5-vision-instruct`, etc.
  *
  * ✅︎
  * ✅︎
- * `Phi4MMForCausalLM`
  * Phi-4-multimodal
  * T + I<sup>+</sup> / T + A<sup>+</sup> / I<sup>+</sup> + A<sup>+</sup>
  * `microsoft/Phi-4-multimodal-instruct`, etc.
  * ✅︎
  *
  *
- * `PixtralForConditionalGeneration`
  * Pixtral
  * T + I<sup>+</sup>
  * `mistralai/Mistral-Small-3.1-24B-Instruct-2503`, `mistral-community/pixtral-12b`, etc.
  *
  * ✅︎
  * ✅︎
- * `QwenVLForConditionalGeneration`<sup>^</sup>
  * Qwen-VL
  * T + I<sup>E+</sup>
  * `Qwen/Qwen-VL`, `Qwen/Qwen-VL-Chat`, etc.
  * ✅︎
  * ✅︎
  * ✅︎
- * `Qwen2AudioForConditionalGeneration`
  * Qwen2-Audio
  * T + A<sup>+</sup>
  * `Qwen/Qwen2-Audio-7B-Instruct`
  *
  * ✅︎
  * ✅︎
- * `Qwen2VLForConditionalGeneration`
  * QVQ, Qwen2-VL
  * T + I<sup>E+</sup> + V<sup>E+</sup>
  * `Qwen/QVQ-72B-Preview`, `Qwen/Qwen2-VL-7B-Instruct`, `Qwen/Qwen2-VL-72B-Instruct`, etc.
  * ✅︎
  * ✅︎
  * ✅︎
- * `Qwen2_5_VLForConditionalGeneration`
  * Qwen2.5-VL
  * T + I<sup>E+</sup> + V<sup>E+</sup>
  * `Qwen/Qwen2.5-VL-3B-Instruct`, `Qwen/Qwen2.5-VL-72B-Instruct`, etc.
  * ✅︎
  * ✅︎
  * ✅︎
- * `SkyworkR1VChatModel`
  * Skywork-R1V-38B
  * T + I
  * `Skywork/Skywork-R1V-38B`
  *
  * ✅︎
  * ✅︎
- * `SmolVLMForConditionalGeneration`
  * SmolVLM2
  * T + I
  * `SmolVLM2-2.2B-Instruct`
  *
  * ✅︎
  * ✅︎
- * `UltravoxModel`
  * Ultravox
  * T + A<sup>E+</sup>
  * `fixie-ai/ultravox-v0_5-llama-3_2-1b`
  * ✅︎
  * ✅︎
  * ✅︎
:::

<sup>^</sup> You need to set the architecture name via `--hf-overrides` to match the one in vLLM.
    For example, to use DeepSeek-VL2 series models: `--hf-overrides '{"architectures": ["DeepseekVLV2ForCausalLM"]}'`

<sup>E</sup> Pre-computed embeddings can be inputted for this modality.

<sup>+</sup> Multiple items can be inputted per text prompt for this modality.
:::{important}
Pan-and-scan image pre-processing is currently supported on V0 (but not V1).
You can enable it by passing `--mm-processor-kwargs '{"do_pan_and_scan": True}'`.
:::
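For offline inference, the equivalent is to pass the same options through `mm_processor_kwargs`; a brief sketch using one of the Gemma 3 checkpoints from the table above:

```python
from vllm import LLM

# Sketch: enable pan-and-scan image pre-processing for Gemma 3 (V0 only).
llm = LLM(
    model="google/gemma-3-4b-it",
    mm_processor_kwargs={"do_pan_and_scan": True},
)
```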
:::{warning}
Both V0 and V1 support `Gemma3ForConditionalGeneration` for text-only inputs.
However, there are differences in how they handle text + image inputs:

V0 correctly implements the model's attention pattern:
- Uses bidirectional attention between the image tokens corresponding to the same image
- Uses causal attention for other tokens
- Implemented via (naive) PyTorch SDPA with masking tensors
- Note: May use significant memory for long prompts with image

V1 currently uses a simplified attention pattern:
- Uses causal attention for all tokens, including image tokens
- Generates reasonable outputs but does not match the original model's attention for text + image inputs, especially when `{"do_pan_and_scan": True}`
- Will be updated in the future to support the correct behavior

This limitation exists because the model's mixed attention pattern (bidirectional for images, causal otherwise) is not yet supported by vLLM's attention backends.
:::
:::{note}
`h2oai/h2ovl-mississippi-2b` will be available in V1 once we support backends other than FlashAttention.
:::

:::{note}
To use `TIGER-Lab/Mantis-8B-siglip-llama3`, you have to pass `--hf_overrides '{"architectures": ["MantisForConditionalGeneration"]}'` when running vLLM.
:::

:::{note}
The official `openbmb/MiniCPM-V-2` doesn't work yet, so we need to use a fork (`HwwwH/MiniCPM-V-2`) for now.
For more details, please see: <gh-pr:4087#issuecomment-2250397630>
:::

:::{warning}
Our PaliGemma implementations have the same problem as Gemma 3 (see above) for both V0 and V1.
:::
### Pooling Models

See this page for more information on how to use pooling models.

:::{important}
Since some model architectures support both generative and pooling tasks, you should explicitly specify the task type to ensure that the model is used in pooling mode instead of generative mode.
:::

#### Text Embedding (`--task embed`)

Any text generation model can be converted into an embedding model by passing `--task embed`.

:::{note}
To get the best results, you should use pooling models that are specifically trained as such.
:::

The following table lists those that are tested in vLLM.
:::{list-table}
:widths: 25 25 15 25 5 5
:header-rows: 1

- * Architecture
  * Models
  * Inputs
  * Example HF Models
  * LoRA
  * PP
- * `LlavaNextForConditionalGeneration`
  * LLaVA-NeXT-based
  * T / I
  * `royokong/e5-v`
  *
  * ✅︎
- * `Phi3VForCausalLM`
  * Phi-3-Vision-based
  * T + I
  * `TIGER-Lab/VLM2Vec-Full`
  * 🚧
  * ✅︎
- * `Qwen2VLForConditionalGeneration`
  * Qwen2-VL-based
  * T + I
  * `MrLight/dse-qwen2-2b-mrl-v1`
  *
  * ✅︎
:::
#### Transcription (`--task transcription`)

Speech2Text models trained specifically for Automatic Speech Recognition.

:::{list-table}
:widths: 25 25 25 5 5
:header-rows: 1

- * Architecture
  * Models
  * Example HF Models
  * LoRA
  * PP
- * `Whisper`
  * Whisper-based
  * `openai/whisper-large-v3-turbo`
  * 🚧
  * 🚧
:::
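For online serving, transcription is exposed through the OpenAI-compatible audio API. Here is a rough sketch using the official `openai` client against a locally running server (the audio file name is a placeholder):

```python
from openai import OpenAI

# Assumes a server started with something like:
#   vllm serve openai/whisper-large-v3-turbo --task transcription
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("sample.wav", "rb") as audio_file:  # placeholder audio file
    transcription = client.audio.transcriptions.create(
        model="openai/whisper-large-v3-turbo",
        file=audio_file,
    )

print(transcription.text)
```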
## Model Support Policy

At vLLM, we are committed to facilitating the integration and support of third-party models within our ecosystem. Our approach is designed to balance the need for robustness and the practical limitations of supporting a wide range of models. Here's how we manage third-party model support:

1. **Community-Driven Support**: We encourage community contributions for adding new models. When a user requests support for a new model, we welcome pull requests (PRs) from the community. These contributions are evaluated primarily on the sensibility of the output they generate, rather than strict consistency with existing implementations such as those in transformers. **Call for contribution:** PRs coming directly from model vendors are greatly appreciated!

2. **Best-Effort Consistency**: While we aim to maintain a level of consistency between the models implemented in vLLM and other frameworks like transformers, complete alignment is not always feasible. Factors like acceleration techniques and the use of low-precision computations can introduce discrepancies. Our commitment is to ensure that the implemented models are functional and produce sensible results.

   :::{tip}
   When comparing the output of `model.generate` from Hugging Face Transformers with the output of `llm.generate` from vLLM, note that the former reads the model's generation config file (i.e., `generation_config.json`) and applies the default parameters for generation, while the latter only uses the parameters passed to the function. Ensure all sampling parameters are identical when comparing outputs.
   :::

3. **Issue Resolution and Model Updates**: Users are encouraged to report any bugs or issues they encounter with third-party models. Proposed fixes should be submitted via PRs, with a clear explanation of the problem and the rationale behind the proposed solution. If a fix for one model impacts another, we rely on the community to highlight and address these cross-model dependencies. Note: for bugfix PRs, it is good etiquette to inform the original author to seek their feedback.

4. **Monitoring and Updates**: Users interested in specific models should monitor the commit history for those models (e.g., by tracking changes in the main/vllm/model_executor/models directory). This proactive approach helps users stay informed about updates and changes that may affect the models they use.

5. **Selective Focus**: Our resources are primarily directed towards models with significant user interest and impact. Models that are less frequently used may receive less attention, and we rely on the community to play a more active role in their upkeep and improvement.
Through this approach, vLLM fosters a collaborative environment where both the core development team and the broader community contribute to the robustness and diversity of the third-party models supported in our ecosystem.
Note that, as an inference engine, vLLM does not introduce new models. Therefore, all models supported by vLLM are third-party models in this regard.
We have the following levels of testing for models:
- Strict Consistency: We compare the output of the model with the output of the model in the HuggingFace Transformers library under greedy decoding. This is the most stringent test. Please refer to models tests for the models that have passed this test.
- Output Sensibility: We check if the output of the model is sensible and coherent, by measuring the perplexity of the output and checking for any obvious errors. This is a less stringent test.
- Runtime Functionality: We check if the model can be loaded and run without errors. This is the least stringent test. Please refer to functionality tests and examples for the models that have passed this test.
- Community Feedback: We rely on the community to provide feedback on the models. If a model is broken or not working as expected, we encourage users to raise issues to report it or open pull requests to fix it. The rest of the models fall under this category.