(pooling-models)=
# Pooling Models
vLLM also supports pooling models, including embedding, reranking and reward models.
In vLLM, pooling models implement the {class}`~vllm.model_executor.models.VllmModelForPooling` interface.
These models use a {class}`~vllm.model_executor.layers.Pooler` to extract the final hidden states of the input
before returning them.
:::{note}
We currently support pooling models primarily as a matter of convenience.
As shown in the [Compatibility Matrix](#compatibility-matrix), most vLLM features are not applicable to
pooling models, as those features only apply to the generation or decode stage, so performance may not improve as much.
:::
For pooling models, we support the following `--task` options.
The selected option sets the default pooler used to extract the final hidden states:
:::{list-table}
:widths: 50 25 25 25
:header-rows: 1
- * Task
* Pooling Type
* Normalization
* Softmax
- * Embedding (`embed`)
* `LAST`
* ✅︎
* ❌
- * Classification (`classify`)
* `LAST`
* ❌
* ✅︎
- * Sentence Pair Scoring (`score`)
* \*
* \*
* \*
- * Reward Modeling (`reward`)
* `ALL`
* ❌
* ❌
:::
\*The default pooler is always defined by the model.
:::{note}
If the model's implementation in vLLM defines its own pooler, the default pooler is set to that instead of the one specified in this table.
:::
When loading [Sentence Transformers](https://huggingface.co/sentence-transformers) models,
we attempt to override the default pooler based on its Sentence Transformers configuration file (`modules.json`).
:::{tip}
You can customize the model's pooling method via the `--override-pooler-config` option,
which takes priority over both the model's defaults and the Sentence Transformers configuration.
:::
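
For example, assuming the pooler config accepts the `pooling_type` and `normalize` fields, you could serve an embedding model with mean pooling and normalization disabled as follows:

```text
vllm serve intfloat/e5-mistral-7b-instruct --task embed \
  --override-pooler-config '{"pooling_type": "MEAN", "normalize": false}'
```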
## Offline Inference
The {class}`~vllm.LLM` class provides various methods for offline inference.
See [Engine Arguments](#engine-args) for a list of options when initializing the model.
### `LLM.encode`
The {class}`~vllm.LLM.encode` method is available to all pooling models in vLLM.
It returns the extracted hidden states directly, which is useful for reward models.
```python
from vllm import LLM
llm = LLM(model="Qwen/Qwen2.5-Math-RM-72B", task="reward")
(output,) = llm.encode("Hello, my name is")
data = output.outputs.data
print(f"Data: {data!r}")
```
### `LLM.embed`
The {class}`~vllm.LLM.embed` method outputs an embedding vector for each prompt.
It is primarily designed for embedding models.
```python
from vllm import LLM
llm = LLM(model="intfloat/e5-mistral-7b-instruct", task="embed")
(output,) = llm.embed("Hello, my name is")
embeds = output.outputs.embedding
print(f"Embeddings: {embeds!r} (size={len(embeds)})")
```
A code example can be found here: <gh-file:examples/offline_inference/basic/embed.py>
### `LLM.classify`
The {class}`~vllm.LLM.classify` method outputs a probability vector for each prompt.
It is primarily designed for classification models.
```python
from vllm import LLM
llm = LLM(model="jason9693/Qwen2.5-1.5B-apeach", task="classify")
(output,) = llm.classify("Hello, my name is")
probs = output.outputs.probs
print(f"Class Probabilities: {probs!r} (size={len(probs)})")
```
A code example can be found here: <gh-file:examples/offline_inference/basic/classify.py>
### `LLM.score`
The {class}`~vllm.LLM.score` method outputs similarity scores between sentence pairs.
It is designed for embedding models and cross-encoder models. Embedding models use cosine similarity, and [cross-encoder models](https://www.sbert.net/examples/applications/cross-encoder/README.html) serve as rerankers between candidate query-document pairs in RAG systems.
:::{note}
vLLM can only perform the model inference component (e.g. embedding, reranking) of RAG.
To handle RAG at a higher level, you should use integration frameworks such as [LangChain](https://github.com/langchain-ai/langchain).
:::
```python
from vllm import LLM
llm = LLM(model="BAAI/bge-reranker-v2-m3", task="score")
(output,) = llm.score("What is the capital of France?",
"The capital of Brazil is Brasilia.")
score = output.outputs.score
print(f"Score: {score}")
```
A code example can be found here: <gh-file:examples/offline_inference/basic/score.py>
## Online Serving
Our [OpenAI-Compatible Server](#openai-compatible-server) provides endpoints that correspond to the offline APIs:
- [Pooling API](#pooling-api) is similar to `LLM.encode`, being applicable to all types of pooling models.
- [Embeddings API](#embeddings-api) is similar to `LLM.embed`, accepting both text and [multi-modal inputs](#multimodal-inputs) for embedding models.
- [Score API](#score-api) is similar to `LLM.score` for cross-encoder models (see the example below).
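
For example, assuming the server exposes a `/score` route that accepts `text_1` and `text_2` fields, a request against a server started with `vllm serve BAAI/bge-reranker-v2-m3 --task score` could look like this:

```text
curl http://127.0.0.1:8000/score \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "BAAI/bge-reranker-v2-m3",
    "text_1": "What is the capital of France?",
    "text_2": "The capital of France is Paris."
  }'
```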
## Matryoshka Embeddings
[Matryoshka Embeddings](https://sbert.net/examples/sentence_transformer/training/matryoshka/README.html#matryoshka-embeddings) or [Matryoshka Representation Learning (MRL)](https://arxiv.org/abs/2205.13147) is a technique used in training embedding models. It allows the user to trade off between performance and cost.
:::{warning}
Not all embedding models are trained using Matryoshka Representation Learning. To avoid misuse of the `dimensions` parameter, vLLM returns an error for requests that attempt to change the output dimension of models that do not support Matryoshka Embeddings.
For example, setting the `dimensions` parameter while using the `BAAI/bge-m3` model will result in the following error.
```json
{"object":"error","message":"Model \"BAAI/bge-m3\" does not support matryoshka representation, changing output dimensions will lead to poor results.","type":"BadRequestError","param":null,"code":400}
```
:::
### Manually enable Matryoshka Embeddings
There is currently no official interface for specifying support for Matryoshka Embeddings. In vLLM, we simply check for the existence of the fields `is_matryoshka` or `matryoshka_dimensions` inside `config.json`.
For models that support Matryoshka Embeddings but are not recognized by vLLM, please manually override the config using `hf_overrides={"is_matryoshka": True}` (offline) or `--hf_overrides '{"is_matryoshka": true}'` (online).
Here is an example of serving a model with Matryoshka Embeddings enabled.
```text
vllm serve Snowflake/snowflake-arctic-embed-m-v1.5 --hf_overrides '{"is_matryoshka":true}'
```
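
For offline inference, a minimal sketch of the same override passes `hf_overrides` when constructing {class}`~vllm.LLM`:

```python
from vllm import LLM

# Manually mark the model as Matryoshka-capable via a Hugging Face config override.
llm = LLM(model="Snowflake/snowflake-arctic-embed-m-v1.5",
          task="embed",
          hf_overrides={"is_matryoshka": True})
```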
### Offline Inference
You can change the output dimensions of embedding models that support Matryoshka Embeddings by using the `dimensions` parameter in {class}`~vllm.PoolingParams`.
```python
from vllm import LLM, PoolingParams
model = LLM(model="jinaai/jina-embeddings-v3",
task="embed",
trust_remote_code=True)
outputs = model.embed(["Follow the white rabbit."],
pooling_params=PoolingParams(dimensions=32))
print(outputs[0].outputs)
```
A code example can be found here: <gh-file:examples/offline_inference/embed_matryoshka_fy.py>
### Online Inference
Use the following command to start the vLLM server.
```text
vllm serve jinaai/jina-embeddings-v3 --trust-remote-code
```
You can change the output dimensions of embedding models that support Matryoshka Embeddings by using the `dimensions` parameter.
```text
curl http://127.0.0.1:8000/v1/embeddings \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"input": "Follow the white rabbit.",
"model": "jinaai/jina-embeddings-v3",
"encoding_format": "float",
"dimensions": 1
}'
```
Expected output:
```json
{"id":"embd-0aab28c384d348c3b8f0eb783109dc5f","object":"list","created":1744195454,"model":"jinaai/jina-embeddings-v3","data":[{"index":0,"object":"embedding","embedding":[-1.0]}],"usage":{"prompt_tokens":10,"total_tokens":10,"completion_tokens":0,"prompt_tokens_details":null}}
```
An OpenAI client example can be found here: <gh-file:examples/online_serving/openai_embedding_matryoshka_fy.py>