
# OpenAI Compatible Server

vLLM provides an HTTP server that implements OpenAI's Completions and Chat API.

You can start the server using Python, or using Docker:

```bash
vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123
```

To call the server, you can use the official OpenAI Python client library, or any other HTTP client.

```python
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

completion = client.chat.completions.create(
  model="NousResearch/Meta-Llama-3-8B-Instruct",
  messages=[
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)
```
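If you prefer not to use the official client, the same request can be made with any HTTP library. Below is a minimal sketch using the `requests` package (an assumption; it is not required by vLLM), posting directly to the OpenAI-compatible `/v1/chat/completions` route:

```python
# Minimal sketch using the `requests` library (assumed to be installed);
# any HTTP client that can POST JSON works the same way.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Authorization": "Bearer token-abc123"},
    json={
        "model": "NousResearch/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```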

## API Reference

Please see the OpenAI API Reference for more information on the API. We support all parameters except:

- Chat: `tools` and `tool_choice`.
- Completions: `suffix`.

vLLM also provides experimental support for OpenAI Vision API compatible inference. See more details in Using VLMs.

## Extra Parameters

vLLM supports a set of parameters that are not part of the OpenAI API. In order to use them, you can pass them as extra parameters in the OpenAI client, or merge them directly into the JSON payload if you are calling the API over HTTP.

```python
completion = client.chat.completions.create(
  model="NousResearch/Meta-Llama-3-8B-Instruct",
  messages=[
    {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
  ],
  extra_body={
    "guided_choice": ["positive", "negative"]
  }
)
```
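When calling the API over HTTP instead, the same extra parameter is simply merged into the JSON body next to the standard fields. A sketch of what that payload would look like (following the `requests` example shown earlier):

```python
# Sketch of the JSON body for a direct HTTP call: the extra
# "guided_choice" field sits alongside the standard OpenAI fields.
payload = {
    "model": "NousResearch/Meta-Llama-3-8B-Instruct",
    "messages": [
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    "guided_choice": ["positive", "negative"],
}
```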

### Extra Parameters for Chat API

The following sampling parameters (click through to see documentation) are supported.

```{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-chat-completion-sampling-params
:end-before: end-chat-completion-sampling-params
```

The following extra parameters are supported:

```{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-chat-completion-extra-params
:end-before: end-chat-completion-extra-params
```
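For example, one of the guided-decoding parameters can be used to force the chat response into a JSON schema. This is a sketch assuming `guided_json` appears in the list above; the schema itself is illustrative:

```python
# Sketch: constraining the chat response to a JSON schema via the
# "guided_json" extra parameter (assumed to be listed above).
json_schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative"]},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment"],
}

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_body={"guided_json": json_schema},
)
print(completion.choices[0].message.content)
```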

### Extra Parameters for Completions API

The following sampling parameters (click through to see documentation) are supported.

```{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-completion-sampling-params
:end-before: end-completion-sampling-params
```

The following extra parameters are supported:

```{literalinclude} ../../../vllm/entrypoints/openai/protocol.py
:language: python
:start-after: begin-completion-extra-params
:end-before: end-completion-extra-params
```
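As with the Chat API, the extra parameters are passed through `extra_body`. A sketch assuming `guided_regex` appears in the list above:

```python
# Sketch: constraining a completion to an email-like pattern via the
# "guided_regex" extra parameter (assumed to be listed above).
completion = client.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    prompt="The support contact address is ",
    max_tokens=20,
    extra_body={"guided_regex": r"\w+@\w+\.com\n"},
)
print(completion.choices[0].text)
```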

## Chat Template

In order for the language model to support the chat protocol, vLLM requires the model to include a chat template in its tokenizer configuration. The chat template is a Jinja2 template that specifies how roles, messages, and other chat-specific tokens are encoded in the input.

An example chat template for `NousResearch/Meta-Llama-3-8B-Instruct` can be found here.

Some models do not provide a chat template even though they are instruction/chat fine-tuned. For those models, you can manually specify their chat template in the `--chat-template` parameter, using either the file path to the chat template or the template in string form. Without a chat template, the server will not be able to process chat messages, and all chat requests will error.

```bash
vllm serve <model> --chat-template ./path-to-chat-template.jinja
```
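If you are not sure whether a model already ships a chat template, one way to check is to load its tokenizer with the `transformers` library (not part of vLLM; shown here only as a sketch) and inspect its `chat_template` attribute:

```python
# Sketch: checking a model's tokenizer config for a chat template using
# the `transformers` library (an assumption; not required by vLLM).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")
if tokenizer.chat_template is None:
    print("No chat template found; supply one via --chat-template.")
else:
    print(tokenizer.chat_template)
```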

The vLLM community provides a set of chat templates for popular models. You can find them in the examples directory here.

## Command line arguments for the server

```{argparse}
:module: vllm.entrypoints.openai.cli_args
:func: create_parser_for_docs
:prog: vllm serve
```

### Config file

The `serve` module can also accept arguments from a config file in `yaml` format. The arguments in the `yaml` file must be specified using the long form of the arguments outlined here:

For example:

```yaml
# config.yaml

host: "127.0.0.1"
port: 6379
uvicorn-log-level: "info"
```

```bash
$ vllm serve SOME_MODEL --config config.yaml
```

```{note}
In case an argument is supplied both via the command line and the config file, the value from the command line will take precedence.
The order of priorities is: command line > config file values > defaults.
```


## Tool calling in the chat completion API

vLLM supports only named function calling in the chat completion API. The `tool_choice` options `auto` and `required` are not yet supported but are on the roadmap.

To use a named function, you need to define the function in the `tools` parameter and call it in the `tool_choice` parameter.

It is the caller's responsibility to prompt the model with the tool information; vLLM will not automatically manipulate the prompt. This may change in the future.

vLLM will use guided decoding to ensure the response matches the tool parameter object defined by the JSON schema in the `tools` parameter.
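To make this concrete, here is a minimal sketch of a named function call. The `get_weather` function and its schema are hypothetical, and the tool information is repeated in the prompt because, as noted above, vLLM does not inject it for you:

```python
# Sketch: named function calling with a hypothetical "get_weather" tool.
# The tool information is placed in the prompt manually, since vLLM does
# not manipulate the prompt on the caller's behalf.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system",
         "content": "You can call get_weather(city) to look up the current weather."},
        {"role": "user", "content": "What is the weather in San Francisco?"},
    ],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
print(completion.choices[0].message.tool_calls)
```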

Please refer to the OpenAI API reference documentation for more information.