
# OpenAI Compatible Server

vLLM provides an HTTP server that implements OpenAI's Completions and Chat API.

You can start the server using Python, or using Docker:

```bash
vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123
```

To call the server, you can use the official OpenAI Python client library, or any other HTTP client.

```python
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

completion = client.chat.completions.create(
  model="NousResearch/Meta-Llama-3-8B-Instruct",
  messages=[
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)
```

## API Reference

Please see the OpenAI API Reference for more information on the API. We support all parameters except:

* Chat: `tools` and `tool_choice`.
* Completions: `suffix`.

vLLM also provides experimental support for OpenAI Vision API compatible inference. See more details in Using VLMs.
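As a rough illustration only, a Vision-API-style chat request looks like the sketch below. The model name and image URL are placeholders, and whether a given deployment accepts image inputs depends on the multimodal model being served; see Using VLMs for the authoritative details.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

# Hypothetical vision-capable model; replace with whatever multimodal model you serve.
chat_response = client.chat.completions.create(
    model="llava-hf/llava-1.5-7b-hf",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/some-image.jpg"}},
        ],
    }],
)
print(chat_response.choices[0].message.content)
```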

## Extra Parameters

vLLM supports a set of parameters that are not part of the OpenAI API. To use them, pass them as extra parameters in the OpenAI client, or merge them directly into the JSON payload if you are calling the HTTP API directly.

```python
completion = client.chat.completions.create(
  model="NousResearch/Meta-Llama-3-8B-Instruct",
  messages=[
    {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
  ],
  extra_body={
    "guided_choice": ["positive", "negative"]
  }
)
```
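If you are not using the OpenAI client, the extra parameters are simply additional fields in the request body. A minimal sketch of the equivalent raw HTTP call using the `requests` library (the endpoint and payload mirror the example above):

```python
import requests

# The vLLM-specific "guided_choice" parameter is merged directly into the JSON payload.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Authorization": "Bearer token-abc123"},
    json={
        "model": "NousResearch/Meta-Llama-3-8B-Instruct",
        "messages": [
            {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
        ],
        "guided_choice": ["positive", "negative"],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```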

### Extra Parameters for Chat API

The following sampling parameters (click through to see documentation) are supported.

:language: python
:start-after: begin-chat-completion-sampling-params
:end-before: end-chat-completion-sampling-params

The following extra parameters are supported:

:language: python
:start-after: begin-chat-completion-extra-params
:end-before: end-chat-completion-extra-params

### Extra Parameters for Completions API

The following sampling parameters (click through to see documentation) are supported.

:language: python
:start-after: begin-completion-sampling-params
:end-before: end-completion-sampling-params

The following extra parameters are supported:

:language: python
:start-after: begin-completion-extra-params
:end-before: end-completion-extra-params
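As with the Chat API, extra parameters for the Completions API can be passed through `extra_body` in the OpenAI client. A minimal sketch, reusing the client from the earlier examples; `min_tokens` is used here purely as an assumed illustration of a vLLM-specific sampling parameter, so check the parameter lists above for what your version actually exposes:

```python
# Pass vLLM-specific sampling parameters to the Completions API via extra_body.
completion = client.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    prompt="A robot may not injure a human being",
    extra_body={
        "min_tokens": 16,  # assumed example of an extra sampling parameter
    },
)
print(completion.choices[0].text)
```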

## Chat Template

In order for the language model to support the chat protocol, vLLM requires the model to include a chat template in its tokenizer configuration. The chat template is a Jinja2 template that specifies how roles, messages, and other chat-specific tokens are encoded in the input.

An example chat template for NousResearch/Meta-Llama-3-8B-Instruct can be found here.

Some models do not provide a chat template even though they are instruction/chat fine-tuned. For those models, you can manually specify the chat template via the `--chat-template` parameter, using either the file path to the chat template or the template in string form. Without a chat template, the server cannot process chat messages and all chat requests will error.

```bash
vllm serve <model> --chat-template ./path-to-chat-template.jinja
```

The vLLM community provides a set of chat templates for popular models. You can find them in the examples directory here.

## Command line arguments for the server

:module: vllm.entrypoints.openai.cli_args
:func: create_parser_for_docs
:prog: vllm serve

## Tool Calling in the Chat Completion API

### Named Function Calling

By default, vLLM supports only named function calling in the chat completion API. It does so using Outlines, so it is enabled out of the box and works with any supported model. You are guaranteed a validly-parsable function call - not a high-quality one.

To use a named function, define the functions in the `tools` parameter of the chat completion request and specify the `name` of one of the tools in the `tool_choice` parameter of the chat completion request.
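A minimal sketch of such a request, reusing the client and model from the earlier examples (the weather tool is purely illustrative):

```python
# Illustrative tool definition; any function described by a JSON schema works the same way.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "Name of the city"},
            },
            "required": ["city"],
        },
    },
}]

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "What is the weather like in Dallas?"}],
    tools=tools,
    # Named function calling: name exactly one of the tools defined above.
    tool_choice={"type": "function", "function": {"name": "get_current_weather"}},
)
print(completion.choices[0].message.tool_calls[0].function.arguments)
```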

## Config file

The `serve` module can also accept arguments from a config file in YAML format. The arguments in the YAML file must be specified using the long form of the argument outlined here:

For example:

```yaml
# config.yaml

host: "127.0.0.1"
port: 6379
uvicorn-log-level: "info"
```

```bash
$ vllm serve SOME_MODEL --config config.yaml
```

**NOTE**
If an argument is supplied both via the command line and the config file, the value from the command line will take precedence. The order of priority is: command line > config file values > defaults.


## Tool calling in the chat completion API

vLLM supports only named function calling in the chat completion API. The `tool_choice` options `auto` and `required` are not yet supported but are on the roadmap.

It is the caller's responsibility to prompt the model with the tool information; vLLM will not automatically manipulate the prompt.

vLLM will use guided decoding to ensure the response matches the tool parameter object defined by the JSON schema in the `tools` parameter.

### Automatic Function Calling

To enable this feature, you should set the following flags:

* `--enable-auto-tool-choice` -- **mandatory** for auto tool choice. Tells vLLM that you want to enable the model to generate its own tool calls when it deems appropriate.
* `--tool-call-parser` -- selects the tool parser to use: currently either `hermes` or `mistral`. Additional tool parsers will be added in the future.
* `--chat-template` -- **optional** for auto tool choice. The path to a chat template that handles tool-role messages and assistant-role messages that contain previously generated tool calls. Hermes and Mistral models have tool-compatible chat templates in their `tokenizer_config.json` files, but you can specify a custom template. This argument can be set to `tool_use` if your model has a tool-use-specific chat template configured in `tokenizer_config.json`; in this case, it will be used per the transformers specification. More on this here from HuggingFace; and you can find an example of this in a `tokenizer_config.json` here.

If your favorite tool-calling model is not supported, please feel free to contribute a parser & tool use chat template!
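For reference, a minimal client-side sketch of auto tool choice, assuming the server was started with `--enable-auto-tool-choice` and a matching `--tool-call-parser`, and reusing the illustrative weather tool defined in the named function calling example above (the model name is only an assumed example of a supported model):

```python
# With auto tool choice enabled on the server, the model decides whether to call a tool.
chat_completion = client.chat.completions.create(
    model="NousResearch/Hermes-2-Pro-Llama-3-8B",  # assumed example; use the model you serve
    messages=[{"role": "user", "content": "What is the weather like in Dallas?"}],
    tools=tools,          # same illustrative tool definition as above
    tool_choice="auto",   # let the model decide when to emit a tool call
)

message = chat_completion.choices[0].message
if message.tool_calls:
    # The model chose to call a tool; inspect its name and JSON arguments.
    print(message.tool_calls[0].function.name)
    print(message.tool_calls[0].function.arguments)
else:
    print(message.content)
```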

#### Hermes Models

All Nous Research Hermes-series models newer than Hermes 2 Pro should be supported.

* `NousResearch/Hermes-2-Pro-*`
* `NousResearch/Hermes-2-Theta-*`
* `NousResearch/Hermes-3-*`

Note that the Hermes 2 Theta models are known to have degraded tool call quality & capabilities due to the merge step in their creation.

Flags: `--tool-call-parser hermes`

#### Mistral Models

Supported models:

* `mistralai/Mistral-7B-Instruct-v0.3` (confirmed)
* Additional Mistral function-calling models are compatible as well.

Known issues:

1. Mistral 7B struggles to generate parallel tool calls correctly.
2. Mistral's `tokenizer_config.json` chat template requires tool call IDs that are exactly 9 digits, which is much shorter than what vLLM generates. Since an exception is thrown when this condition is not met, the following additional chat templates are provided:

   * `examples/tool_chat_template_mistral.jinja` - this is the "official" Mistral chat template, but tweaked so that it works with vLLM's tool call IDs (provided `tool_call_id` fields are truncated to the last 9 digits)
   * `examples/tool_chat_template_mistral_parallel.jinja` - this is a "better" version that adds a tool-use system prompt when tools are provided, which results in much better reliability when working with parallel tool calling.

Recommended flags: `--tool-call-parser mistral --chat-template examples/tool_chat_template_mistral_parallel.jinja`