
# OpenAI Compatible Server
vLLM provides an HTTP server that implements OpenAI's Completions and Chat APIs.

You can start the server using Python, or using Docker:

```bash
vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123
```
To call the server, you can use the official OpenAI Python client library, or any other HTTP client.
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)
```
## API Reference
We currently support the following OpenAI APIs:

- Completions API
  - *Note: `suffix` parameter is not supported.*
- Chat Completions API
  - Vision-related parameters are supported; see Multimodal Inputs.
    - *Note: `image_url.detail` parameter is not supported.*
  - We also support the `audio_url` content type for audio files.
    - Refer to `vllm.entrypoints.chat_utils` for the exact schema.
    - *TODO: Support `input_audio` content type as defined here.*
  - *Note: `parallel_tool_calls` and `user` parameters are ignored.*
- Embeddings API
  - Instead of `inputs`, you can pass in a list of `messages` (same schema as the Chat Completions API), which will be treated as a single prompt to the model according to its chat template.
    - This enables multi-modal inputs to be passed to embedding models; see this page for details.
  - *Note: You should run `vllm serve` with `--task embedding` to ensure that the model is being run in embedding mode (see the sketch after this list).*
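As a minimal sketch of calling the Embeddings API through the OpenAI client, assuming the server was started in embedding mode with a model such as `intfloat/e5-mistral-7b-instruct` (an illustrative choice, not one prescribed by this page):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

# Assumes the server was started with:
#   vllm serve intfloat/e5-mistral-7b-instruct --task embedding
responses = client.embeddings.create(
    model="intfloat/e5-mistral-7b-instruct",
    input=["Hello my name is", "vLLM is a fast inference engine"],
)
for data in responses.data:
    print(len(data.embedding))  # dimensionality of each embedding vector
```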
## Score API for Cross Encoder Models
vLLM supports cross-encoder models at the `/v1/score` endpoint, which is not an OpenAI-standard endpoint. You can find documentation for this kind of model at sbert.net.

A cross encoder takes exactly two sentences / texts as input and predicts a score or label for this sentence pair. For example, it can predict the similarity of the sentence pair on a scale of 0 … 1.
### Example of usage for a pair of a string and a list of texts
In this case, the model will compare the first given text to each of the texts in the list.
```bash
curl -X 'POST' \
  'http://127.0.0.1:8000/v1/score' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-v2-m3",
  "text_1": "What is the capital of France?",
  "text_2": [
    "The capital of Brazil is Brasilia.",
    "The capital of France is Paris."
  ]
}'
```
Response:
```json
{
  "id": "score-request-id",
  "object": "list",
  "created": 693570,
  "model": "BAAI/bge-reranker-v2-m3",
  "data": [
    {
      "index": 0,
      "object": "score",
      "score": [
        0.001094818115234375
      ]
    },
    {
      "index": 1,
      "object": "score",
      "score": [
        1
      ]
    }
  ],
  "usage": {}
}
```
### Example of usage for a pair of two lists of texts
In this case, the model will compare the texts one by one, pairing the texts at the same index in each list.
```bash
curl -X 'POST' \
  'http://127.0.0.1:8000/v1/score' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-v2-m3",
  "encoding_format": "float",
  "text_1": [
    "What is the capital of Brazil?",
    "What is the capital of France?"
  ],
  "text_2": [
    "The capital of Brazil is Brasilia.",
    "The capital of France is Paris."
  ]
}'
```
Response:
```json
{
  "id": "score-request-id",
  "object": "list",
  "created": 693447,
  "model": "BAAI/bge-reranker-v2-m3",
  "data": [
    {
      "index": 0,
      "object": "score",
      "score": [
        1
      ]
    },
    {
      "index": 1,
      "object": "score",
      "score": [
        1
      ]
    }
  ],
  "usage": {}
}
```
### Example of usage for a pair of two strings
In this case, the model will compare the two strings.
```bash
curl -X 'POST' \
  'http://127.0.0.1:8000/v1/score' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-v2-m3",
  "encoding_format": "float",
  "text_1": "What is the capital of France?",
  "text_2": "The capital of France is Paris."
}'
```
Response:
```json
{
  "id": "score-request-id",
  "object": "list",
  "created": 693447,
  "model": "BAAI/bge-reranker-v2-m3",
  "data": [
    {
      "index": 0,
      "object": "score",
      "score": [
        1
      ]
    }
  ],
  "usage": {}
}
```
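The same endpoint can also be called from Python. Below is a minimal sketch using the `requests` library, mirroring the server address and model of the curl examples above:

```python
import requests

# Mirrors the curl examples above; assumes the server is running
# BAAI/bge-reranker-v2-m3 on 127.0.0.1:8000.
response = requests.post(
    "http://127.0.0.1:8000/v1/score",
    json={
        "model": "BAAI/bge-reranker-v2-m3",
        "text_1": "What is the capital of France?",
        "text_2": "The capital of France is Paris.",
    },
)
for item in response.json()["data"]:
    print(item["index"], item["score"])
```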
## Extra Parameters
vLLM supports a set of parameters that are not part of the OpenAI API. To use them, pass them as extra parameters in the OpenAI client, or merge them directly into the JSON payload if you are calling the HTTP endpoint yourself.
```python
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_body={
        "guided_choice": ["positive", "negative"]
    }
)
```
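For the raw-HTTP route, here is a sketch of the same request with the vLLM-specific field merged straight into the JSON payload, using the `requests` library (the API key matches the server example at the top of this page):

```python
import requests

# guided_choice is a vLLM extension, not part of the OpenAI spec, so it is
# merged directly into the payload rather than passed via the client library.
payload = {
    "model": "NousResearch/Meta-Llama-3-8B-Instruct",
    "messages": [
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    "guided_choice": ["positive", "negative"],
}
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Authorization": "Bearer token-abc123"},
    json=payload,
)
print(response.json()["choices"][0]["message"]["content"])
```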
## Extra HTTP Headers
Only the `X-Request-Id` HTTP request header is supported for now.
```python
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_headers={
        "x-request-id": "sentiment-classification-00001",
    }
)
print(completion._request_id)

completion = client.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    prompt="A robot may not injure a human being",
    extra_headers={
        "x-request-id": "completion-test",
    }
)
print(completion._request_id)
```
## Extra Parameters for Completions API
The following sampling parameters (click through to see documentation) are supported.

```{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-completion-sampling-params
:end-before: end-completion-sampling-params
```
The following extra parameters are supported:

```{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-completion-extra-params
:end-before: end-completion-extra-params
```
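As a quick illustration, a vLLM-only sampling parameter can be passed through `extra_body` in the same way as `guided_choice` above. Here `min_tokens` is used, assuming it is among the parameters listed for your vLLM version:

```python
# Sketch: min_tokens is a vLLM sampling parameter, not part of the OpenAI spec.
completion = client.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    prompt="A robot may not injure a human being",
    extra_body={"min_tokens": 16},
)
print(completion.choices[0].text)
```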
## Extra Parameters for Chat Completions API
The following sampling parameters (click through to see documentation) are supported.

```{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-chat-completion-sampling-params
:end-before: end-chat-completion-sampling-params
```
The following extra parameters are supported:

```{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-chat-completion-extra-params
:end-before: end-chat-completion-extra-params
```
## Extra Parameters for Embeddings API
The following pooling parameters (click through to see documentation) are supported.

```{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-embedding-pooling-params
:end-before: end-embedding-pooling-params
```
The following extra parameters are supported:

```{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-embedding-extra-params
:end-before: end-embedding-extra-params
```
## Chat Template
In order for the language model to support the chat protocol, vLLM requires the model to include a chat template in its tokenizer configuration. The chat template is a Jinja2 template that specifies how roles, messages, and other chat-specific tokens are encoded in the input.

An example chat template for `NousResearch/Meta-Llama-3-8B-Instruct` can be found here.
Some models do not provide a chat template even though they are instruction/chat fine-tuned. For those models, you can manually specify their chat template in the `--chat-template` parameter with the file path to the chat template, or with the template in string form. Without a chat template, the server will not be able to process chat requests, and all such requests will error.

```bash
vllm serve <model> --chat-template ./path-to-chat-template.jinja
```
The vLLM community provides a set of chat templates for popular models. You can find them in the examples directory here.
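To make the mechanics concrete, here is a toy sketch of what a chat template does: it renders a list of messages into a single prompt string. The template below is illustrative only, not the actual Llama 3 template:

```python
from jinja2 import Template

# Hypothetical toy template; real templates ship in the model's
# tokenizer_config.json and use model-specific special tokens.
toy_template = Template(
    "{% for m in messages %}<|{{ m['role'] }}|>: {{ m['content'] }}\n{% endfor %}"
    "<|assistant|>:"
)
print(toy_template.render(messages=[{"role": "user", "content": "Hello!"}]))
# Output:
# <|user|>: Hello!
# <|assistant|>:
```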
With the inclusion of multi-modal chat APIs, the OpenAI spec now accepts chat messages in a new format which specifies both a `type` and a `text` field. An example is provided below:
```python
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": [{"type": "text", "text": "Classify this sentiment: vLLM is wonderful!"}]}
    ]
)
```
Most chat templates for LLMs expect the `content` field to be a string, but there are some newer models like `meta-llama/Llama-Guard-3-1B` that expect the content to be formatted according to the OpenAI schema in the request. vLLM provides best-effort support to detect this automatically, which is logged as a string like *"Detected the chat template content format to be..."*, and internally converts incoming requests to match the detected format, which can be one of:

- `"string"`: A string.
  - Example: `"Hello world"`
- `"openai"`: A list of dictionaries, similar to OpenAI schema.
  - Example: `[{"type": "text", "text": "Hello world!"}]`
If the result is not what you expect, you can set the `--chat-template-content-format` CLI argument to override which format to use.
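For example, to force the OpenAI-style format (the model name here is illustrative):

```bash
vllm serve meta-llama/Llama-Guard-3-1B --chat-template-content-format openai
```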
## Command line arguments for the server
```{argparse}
:module: vllm.entrypoints.openai.cli_args
:func: create_parser_for_docs
:prog: vllm serve
```
### Config file
The `serve` module can also accept arguments from a config file in `yaml` format. The arguments in the YAML file must be specified using the long form of the argument outlined here:

For example:
```yaml
# config.yaml

host: "127.0.0.1"
port: 6379
uvicorn-log-level: "info"
```

```bash
$ vllm serve SOME_MODEL --config config.yaml
```
NOTE: If an argument is supplied both on the command line and in the config file, the value from the command line will take precedence. The order of priority is `command line > config file values > defaults`.