(openai-compatible-server)=

# OpenAI-Compatible Server

vLLM provides an HTTP server that implements OpenAI's [Completions API](https://platform.openai.com/docs/api-reference/completions), [Chat API](https://platform.openai.com/docs/api-reference/chat), and more! This functionality lets you serve models and interact with them using an HTTP client.

In your terminal, you can [install](../getting_started/installation.md) vLLM, then start the server with the [`vllm serve`](#vllm-serve) command. (You can also use our [Docker](#deployment-docker) image.)

```bash
vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123
```

To call the server, in your preferred text editor, create a script that uses an HTTP client. Include any messages that you want to send to the model. Then run that script. Below is an example script using the [official OpenAI Python client](https://github.com/openai/openai-python).

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)
```

:::{tip}
vLLM supports a number of parameters that are not part of the OpenAI API, such as `top_k`.
You can pass them to vLLM via the `extra_body` parameter of the OpenAI client, e.g. `extra_body={"top_k": 50}` for `top_k`.
:::

:::{important}
By default, the server applies `generation_config.json` from the Hugging Face model repository if it exists. This means the default values of certain sampling parameters can be overridden by those recommended by the model creator.

To disable this behavior, please pass `--generation-config vllm` when launching the server.
:::

## Supported APIs

We currently support the following OpenAI APIs:

- [Completions API](#completions-api) (`/v1/completions`)
  - Only applicable to [text generation models](../models/generative_models.md) (`--task generate`).
  - *Note: `suffix` parameter is not supported.*
- [Chat Completions API](#chat-api) (`/v1/chat/completions`)
  - Only applicable to [text generation models](../models/generative_models.md) (`--task generate`) with a [chat template](#chat-template).
  - *Note: `parallel_tool_calls` and `user` parameters are ignored.*
- [Embeddings API](#embeddings-api) (`/v1/embeddings`)
  - Only applicable to [embedding models](../models/pooling_models.md) (`--task embed`).
- [Transcriptions API](#transcriptions-api) (`/v1/audio/transcriptions`)
  - Only applicable to Automatic Speech Recognition (ASR) models (OpenAI Whisper) (`--task generate`).

In addition, we have the following custom APIs:

- [Tokenizer API](#tokenizer-api) (`/tokenize`, `/detokenize`)
  - Applicable to any model with a tokenizer.
- [Pooling API](#pooling-api) (`/pooling`)
  - Applicable to all [pooling models](../models/pooling_models.md).
- [Score API](#score-api) (`/score`)
  - Applicable to embedding models and [cross-encoder models](../models/pooling_models.md) (`--task score`).
- [Re-rank API](#rerank-api) (`/rerank`, `/v1/rerank`, `/v2/rerank`)
  - Implements [Jina AI's v1 re-rank API](https://jina.ai/reranker/)
  - Also compatible with [Cohere's v1 & v2 re-rank APIs](https://docs.cohere.com/v2/reference/rerank)
  - Jina and Cohere's APIs are very similar; Jina's includes extra information in the rerank endpoint's response.
  - Only applicable to [cross-encoder models](../models/pooling_models.md) (`--task score`).

(chat-template)=

## Chat Template

In order for the language model to support chat protocol, vLLM requires the model to include
a chat template in its tokenizer configuration. The chat template is a Jinja2 template that
specifies how roles, messages, and other chat-specific tokens are encoded in the input.

An example chat template for `NousResearch/Meta-Llama-3-8B-Instruct` can be found [here](https://github.com/meta-llama/llama3?tab=readme-ov-file#instruction-tuned-models).

Some models do not provide a chat template even though they are instruction/chat fine-tuned. For those models,
you can manually specify their chat template via the `--chat-template` parameter, either as a file path to the chat
template or as the template itself in string form. Without a chat template, the server cannot process chat
requests, and all such requests will error.

```bash
vllm serve <model> --chat-template ./path-to-chat-template.jinja
```

The vLLM community provides a set of chat templates for popular models. You can find them under the <gh-dir:examples> directory.

With the inclusion of multi-modal chat APIs, the OpenAI spec now accepts chat messages in a new format which specifies
both a `type` and a `text` field. An example is provided below:

```python
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": [{"type": "text", "text": "Classify this sentiment: vLLM is wonderful!"}]}
    ]
)
```

Most chat templates for LLMs expect the `content` field to be a string, but there are some newer models like
`meta-llama/Llama-Guard-3-1B` that expect the content to be formatted according to the OpenAI schema in the
request. vLLM provides best-effort support to detect this automatically, which is logged as a string like
*"Detected the chat template content format to be..."*, and internally converts incoming requests to match
the detected format, which can be one of:

- `"string"`: A string.
  - Example: `"Hello world"`
- `"openai"`: A list of dictionaries, similar to OpenAI schema.
  - Example: `[{"type": "text", "text": "Hello world!"}]`

If the result is not what you expect, you can set the `--chat-template-content-format` CLI argument
to override which format to use.

## Extra Parameters

vLLM supports a set of parameters that are not part of the OpenAI API.
To use them, pass them as extra parameters in the OpenAI client,
or merge them directly into the JSON payload if you are calling the HTTP API directly.

```python
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_body={
        "guided_choice": ["positive", "negative"]
    }
)
```

## Extra HTTP Headers

Only the `X-Request-Id` HTTP request header is supported for now. It can be enabled
with `--enable-request-id-headers`.

> Note that enabling the headers can impact performance significantly at high QPS
> rates. For this reason, we recommend implementing HTTP headers at the router level
> (e.g. via Istio) rather than within the vLLM layer.
> See [this PR](https://github.com/vllm-project/vllm/pull/11529) for more details.

```python
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_headers={
        "x-request-id": "sentiment-classification-00001",
    }
)
print(completion._request_id)

completion = client.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    prompt="A robot may not injure a human being",
    extra_headers={
        "x-request-id": "completion-test",
    }
)
print(completion._request_id)
```

## CLI Reference

(vllm-serve)=

### `vllm serve`

The `vllm serve` command is used to launch the OpenAI-compatible server.

:::{tip}
The vast majority of command-line arguments are based on those for offline inference.
See [here](#configuration-options) for some common options.
:::

:::{argparse}
:module: vllm.entrypoints.openai.cli_args
:func: create_parser_for_docs
:prog: vllm serve
:::

#### Configuration file

You can load CLI arguments via a [YAML](https://yaml.org/) config file.
The argument names must be the long form of those outlined [above](#vllm-serve).

For example:

```yaml
# config.yaml

model: meta-llama/Llama-3.1-8B-Instruct
host: "127.0.0.1"
port: 6379
uvicorn-log-level: "info"
```

To use the above config file:

```bash
vllm serve --config config.yaml
```

:::{note}
In case an argument is supplied simultaneously via the command line and the config file, the value from the command line takes precedence.

The order of priorities is `command line > config file values > defaults`.
For example, with `vllm serve SOME_MODEL --config config.yaml`, `SOME_MODEL` takes precedence over the `model` value in the config file.
:::

## API Reference

(completions-api)=

### Completions API

Our Completions API is compatible with [OpenAI's Completions API](https://platform.openai.com/docs/api-reference/completions);
you can use the [official OpenAI Python client](https://github.com/openai/openai-python) to interact with it.

Code example: <gh-file:examples/online_serving/openai_completion_client.py>
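
Below is a minimal sketch of calling this endpoint with the official OpenAI Python client; it assumes the server from the quickstart above is running with the same model and API key.

```python
from openai import OpenAI

# Assumes a server started with:
#   vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

completion = client.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    prompt="A robot may not injure a human being",
    max_tokens=32,
    temperature=0,
)
print(completion.choices[0].text)
```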

#### Extra parameters

The following [sampling parameters](#sampling-params) are supported.

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-completion-sampling-params
:end-before: end-completion-sampling-params
:::

The following extra parameters are supported:

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-completion-extra-params
:end-before: end-completion-extra-params
:::

(chat-api)=

### Chat API

Our Chat API is compatible with [OpenAI's Chat Completions API](https://platform.openai.com/docs/api-reference/chat);
you can use the [official OpenAI Python client](https://github.com/openai/openai-python) to interact with it.

We support both [Vision](https://platform.openai.com/docs/guides/vision)- and
[Audio](https://platform.openai.com/docs/guides/audio?audio-generation-quickstart-example=audio-in)-related parameters;
see our [Multimodal Inputs](#multimodal-inputs) guide for more information.

- *Note: `image_url.detail` parameter is not supported.*

Code example: <gh-file:examples/online_serving/openai_chat_completion_client.py>
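
As a quick illustration, here is a sketch of a streaming chat request with the official OpenAI Python client; the model name and server address assume the quickstart setup above.

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

# Stream tokens back as they are generated instead of waiting for the full response.
stream = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```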

#### Extra parameters

The following [sampling parameters](#sampling-params) are supported.

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-chat-completion-sampling-params
:end-before: end-chat-completion-sampling-params
:::

The following extra parameters are supported:

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-chat-completion-extra-params
:end-before: end-chat-completion-extra-params
:::

(embeddings-api)=

### Embeddings API

Our Embeddings API is compatible with [OpenAI's Embeddings API](https://platform.openai.com/docs/api-reference/embeddings);
you can use the [official OpenAI Python client](https://github.com/openai/openai-python) to interact with it.

If the model has a [chat template](#chat-template), you can replace `inputs` with a list of `messages` (same schema as [Chat API](#chat-api)),
which will be treated as a single prompt to the model.

Code example: <gh-file:examples/online_serving/openai_embedding_client.py>
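
For example, here is a minimal sketch using the official OpenAI Python client; the model name is only an assumption and should match whatever embedding model you are serving with `--task embed`.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

# Assumes an embedding model is being served, e.g.:
#   vllm serve BAAI/bge-base-en-v1.5 --task embed
responses = client.embeddings.create(
    model="BAAI/bge-base-en-v1.5",
    input=[
        "Hello my name is",
        "The best thing about vLLM is that it supports many different models",
    ],
)
for data in responses.data:
    print(len(data.embedding))  # number of dimensions in each embedding vector
```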

#### Multi-modal inputs

You can pass multi-modal inputs to embedding models by defining a custom chat template for the server
and passing a list of `messages` in the request. Refer to the examples below for illustration.

:::::{tab-set}
::::{tab-item} VLM2Vec

To serve the model:

```bash
vllm serve TIGER-Lab/VLM2Vec-Full --task embed \
  --trust-remote-code --max-model-len 4096 --chat-template examples/template_vlm2vec.jinja
```

:::{important}
Since VLM2Vec has the same model architecture as Phi-3.5-Vision, we have to explicitly pass `--task embed`
to run this model in embedding mode instead of text generation mode.

The custom chat template is completely different from the original one for this model,
and can be found here: <gh-file:examples/template_vlm2vec.jinja>
:::

Since the request schema is not defined by the OpenAI client, we post a request to the server using the lower-level `requests` library:

```python
import requests

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"

response = requests.post(
    "http://localhost:8000/v1/embeddings",
    json={
        "model": "TIGER-Lab/VLM2Vec-Full",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": "Represent the given image."},
            ],
        }],
        "encoding_format": "float",
    },
)
response.raise_for_status()
response_json = response.json()
print("Embedding output:", response_json["data"][0]["embedding"])
```
::::

::::{tab-item} DSE-Qwen2-MRL

To serve the model:

```bash
vllm serve MrLight/dse-qwen2-2b-mrl-v1 --task embed \
  --trust-remote-code --max-model-len 8192 --chat-template examples/template_dse_qwen2_vl.jinja
```

:::{important}
Like with VLM2Vec, we have to explicitly pass `--task embed`.

Additionally, `MrLight/dse-qwen2-2b-mrl-v1` requires an EOS token for embeddings, which is handled
by a custom chat template: <gh-file:examples/template_dse_qwen2_vl.jinja>
:::

:::{important}
`MrLight/dse-qwen2-2b-mrl-v1` requires a placeholder image of the minimum image size for text query embeddings. See the full code
example below for details.
:::

::::
:::::

Full example: <gh-file:examples/online_serving/openai_chat_embedding_client_for_multimodal.py>

#### Extra parameters

The following [pooling parameters](#pooling-params) are supported.

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-embedding-pooling-params
:end-before: end-embedding-pooling-params
:::

The following extra parameters are supported by default:

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-embedding-extra-params
:end-before: end-embedding-extra-params
:::

For chat-like input (i.e. if `messages` is passed), these extra parameters are supported instead:

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-chat-embedding-extra-params
:end-before: end-chat-embedding-extra-params
:::

(transcriptions-api)=

### Transcriptions API

Our Transcriptions API is compatible with [OpenAI's Transcriptions API](https://platform.openai.com/docs/api-reference/audio/createTranscription);
you can use the [official OpenAI Python client](https://github.com/openai/openai-python) to interact with it.

:::{note}
To use the Transcriptions API, please install with extra audio dependencies using `pip install vllm[audio]`.
:::

<!-- TODO: api enforced limits + uploading audios -->

Code example: <gh-file:examples/online_serving/openai_transcription_client.py>
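
For illustration, here is a sketch using the official OpenAI Python client; the model name and audio file path are assumptions and should be replaced with the ASR model you are serving and a real audio file.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

# Assumes a Whisper-style ASR model is being served, e.g.:
#   vllm serve openai/whisper-large-v3
with open("sample_audio.wav", "rb") as audio_file:  # path is an assumption
    transcription = client.audio.transcriptions.create(
        model="openai/whisper-large-v3",
        file=audio_file,
        language="en",
        response_format="text",
    )
print(transcription)
```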

(tokenizer-api)=

### Tokenizer API

2024-12-24 17:54:30 +08:00
Our Tokenizer API is a simple wrapper over [HuggingFace-style tokenizers ](https://huggingface.co/docs/transformers/en/main_classes/tokenizer ).
2024-12-14 00:22:22 +08:00
It consists of two endpoints:
- `/tokenize` corresponds to calling `tokenizer.encode()` .
- `/detokenize` corresponds to calling `tokenizer.decode()` .
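
As a rough sketch (the exact request and response field names follow the current request schema and may differ between versions), you can call these endpoints with the `requests` library:

```python
import requests

base_url = "http://localhost:8000"
model = "NousResearch/Meta-Llama-3-8B-Instruct"  # the model being served

# Tokenize a prompt into token IDs.
tokenize_resp = requests.post(
    f"{base_url}/tokenize",
    json={"model": model, "prompt": "Hello, world!"},
)
tokenize_resp.raise_for_status()
tokens = tokenize_resp.json()["tokens"]
print("Token IDs:", tokens)

# Convert the token IDs back into text.
detokenize_resp = requests.post(
    f"{base_url}/detokenize",
    json={"model": model, "tokens": tokens},
)
detokenize_resp.raise_for_status()
print("Text:", detokenize_resp.json()["prompt"])
```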

(pooling-api)=

### Pooling API

Our Pooling API encodes input prompts using a [pooling model](../models/pooling_models.md) and returns the corresponding hidden states.

The input format is the same as the [Embeddings API](#embeddings-api), but the output data can contain an arbitrary nested list, not just a 1-D list of floats.

Code example: <gh-file:examples/online_serving/openai_pooling_client.py>
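
A minimal sketch using the `requests` library is shown below; the model name is an assumption (any [pooling model](../models/pooling_models.md) works), and the exact response fields may vary between versions.

```python
import requests

# Assumes a pooling model is being served, e.g.:
#   vllm serve internlm/internlm2-1_8b-reward --task reward --trust-remote-code
response = requests.post(
    "http://localhost:8000/pooling",
    json={
        "model": "internlm/internlm2-1_8b-reward",
        "input": "vLLM is great!",
    },
)
response.raise_for_status()

# Unlike the Embeddings API, the pooled output may be an arbitrarily nested list.
print(response.json()["data"][0]["data"])
```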

(score-api)=

### Score API

Our Score API can apply a cross-encoder model or an embedding model to predict scores for sentence pairs. When using an embedding model, the score corresponds to the cosine similarity between each embedding pair.
Usually, the score for a sentence pair refers to the similarity between two sentences, on a scale of 0 to 1.

You can find the documentation for cross-encoder models at [sbert.net](https://www.sbert.net/docs/package_reference/cross_encoder/cross_encoder.html).

Code example: <gh-file:examples/online_serving/openai_cross_encoder_score.py>

#### Single inference

You can pass a string to both `text_1` and `text_2`, forming a single sentence pair.

Request:

```bash
curl -X 'POST' \
  'http://127.0.0.1:8000/score' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-v2-m3",
  "encoding_format": "float",
  "text_1": "What is the capital of France?",
  "text_2": "The capital of France is Paris."
}'
```

Response:

```bash
{
  "id": "score-request-id",
  "object": "list",
  "created": 693447,
  "model": "BAAI/bge-reranker-v2-m3",
  "data": [
    {
      "index": 0,
      "object": "score",
      "score": 1
    }
  ],
  "usage": {}
}
```

#### Batch inference

You can pass a string to `text_1` and a list to `text_2`, forming multiple sentence pairs
where each pair is built from `text_1` and a string in `text_2`.
The total number of pairs is `len(text_2)`.

Request:

```bash
curl -X 'POST' \
  'http://127.0.0.1:8000/score' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-v2-m3",
  "text_1": "What is the capital of France?",
  "text_2": [
    "The capital of Brazil is Brasilia.",
    "The capital of France is Paris."
  ]
}'
```

Response:

```bash
{
  "id": "score-request-id",
  "object": "list",
  "created": 693570,
  "model": "BAAI/bge-reranker-v2-m3",
  "data": [
    {
      "index": 0,
      "object": "score",
      "score": 0.001094818115234375
    },
    {
      "index": 1,
      "object": "score",
      "score": 1
    }
  ],
  "usage": {}
}
```

You can pass a list to both `text_1` and `text_2`, forming multiple sentence pairs
where each pair is built from a string in `text_1` and the corresponding string in `text_2` (similar to `zip()`).
The total number of pairs is `len(text_2)`.

Request:

```bash
curl -X 'POST' \
  'http://127.0.0.1:8000/score' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-v2-m3",
  "encoding_format": "float",
  "text_1": [
    "What is the capital of Brazil?",
    "What is the capital of France?"
  ],
  "text_2": [
    "The capital of Brazil is Brasilia.",
    "The capital of France is Paris."
  ]
}'
```

Response:

```bash
{
  "id": "score-request-id",
  "object": "list",
  "created": 693447,
  "model": "BAAI/bge-reranker-v2-m3",
  "data": [
    {
      "index": 0,
      "object": "score",
      "score": 1
    },
    {
      "index": 1,
      "object": "score",
      "score": 1
    }
  ],
  "usage": {}
}
```

#### Extra parameters

The following [pooling parameters](#pooling-params) are supported.

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-score-pooling-params
:end-before: end-score-pooling-params
:::

The following extra parameters are supported:

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-score-extra-params
:end-before: end-score-extra-params
:::

(rerank-api)=

### Re-rank API

Our Re-rank API can apply an embedding model or a cross-encoder model to predict relevance scores between a single query
and each of a list of documents. Usually, the score for a sentence pair refers to the similarity between two sentences, on
a scale of 0 to 1.

You can find the documentation for cross-encoder models at [sbert.net](https://www.sbert.net/docs/package_reference/cross_encoder/cross_encoder.html).

The rerank endpoints support popular re-rank models such as `BAAI/bge-reranker-base` and other models supporting the
`score` task. Additionally, the `/rerank`, `/v1/rerank`, and `/v2/rerank`
endpoints are compatible with both [Jina AI's re-rank API interface](https://jina.ai/reranker/) and
[Cohere's re-rank API interface](https://docs.cohere.com/v2/reference/rerank) to ensure compatibility with
popular open-source tools.

Code example: <gh-file:examples/online_serving/jinaai_rerank_client.py>

#### Example Request

Note that the `top_n` request parameter is optional and will default to the length of the `documents` field.
Result documents will be sorted by relevance, and the `index` property can be used to determine original order.

Request:

```bash
curl -X 'POST' \
  'http://127.0.0.1:8000/v1/rerank' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-base",
  "query": "What is the capital of France?",
  "documents": [
    "The capital of Brazil is Brasilia.",
    "The capital of France is Paris.",
    "Horses and cows are both animals"
  ]
}'
```

Response:

```bash
{
  "id": "rerank-fae51b2b664d4ed38f5969b612edff77",
  "model": "BAAI/bge-reranker-base",
  "usage": {
    "total_tokens": 56
  },
  "results": [
    {
      "index": 1,
      "document": {
        "text": "The capital of France is Paris."
      },
      "relevance_score": 0.99853515625
    },
    {
      "index": 0,
      "document": {
        "text": "The capital of Brazil is Brasilia."
      },
      "relevance_score": 0.0005860328674316406
    }
  ]
}
```

#### Extra parameters

The following [pooling parameters](#pooling-params) are supported.

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-rerank-pooling-params
:end-before: end-rerank-pooling-params
:::

The following extra parameters are supported:

:::{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-rerank-extra-params
:end-before: end-rerank-extra-params
:::