(multimodal-inputs)=

# Multimodal Inputs

This page teaches you how to pass multi-modal inputs to [multi-modal models](#supported-mm-models) in vLLM.

:::{note}
We are actively iterating on multi-modal support. See [this RFC](gh-issue:4194) for upcoming changes,
and [open an issue on GitHub](https://github.com/vllm-project/vllm/issues/new/choose) if you have any feedback or feature requests.
:::

## Offline Inference

To input multi-modal data, follow this schema in {class}`vllm.inputs.PromptType`:

- `prompt`: The prompt should follow the format that is documented on HuggingFace.
- `multi_modal_data`: This is a dictionary that follows the schema defined in {class}`vllm.multimodal.inputs.MultiModalDataDict`.

### Image Inputs

You can pass a single image to the `'image'` field of the multi-modal dictionary, as shown in the following examples:

```python
import PIL.Image

from vllm import LLM

llm = LLM(model="llava-hf/llava-1.5-7b-hf")

# Refer to the HuggingFace repo for the correct format to use
prompt = "USER: <image>\nWhat is the content of this image?\nASSISTANT:"

# Load the image using PIL.Image
image = PIL.Image.open(...)

# Single prompt inference
outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"image": image},
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

# Batch inference
image_1 = PIL.Image.open(...)
image_2 = PIL.Image.open(...)

outputs = llm.generate(
    [
        {
            "prompt": "USER: <image>\nWhat is the content of this image?\nASSISTANT:",
            "multi_modal_data": {"image": image_1},
        },
        {
            "prompt": "USER: <image>\nWhat's the color of this image?\nASSISTANT:",
            "multi_modal_data": {"image": image_2},
        },
    ]
)

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)
```

Full example: <gh-file:examples/offline_inference/vision_language.py>

To substitute multiple images inside the same text prompt, you can pass in a list of images instead:

```python
import PIL.Image

from vllm import LLM

llm = LLM(
    model="microsoft/Phi-3.5-vision-instruct",
    trust_remote_code=True,  # Required to load Phi-3.5-vision
    max_model_len=4096,  # Otherwise, it may not fit in smaller GPUs
    limit_mm_per_prompt={"image": 2},  # The maximum number to accept
)

# Refer to the HuggingFace repo for the correct format to use
prompt = "<|user|>\n<|image_1|>\n<|image_2|>\nWhat is the content of each image?<|end|>\n<|assistant|>\n"

# Load the images using PIL.Image
image1 = PIL.Image.open(...)
image2 = PIL.Image.open(...)

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {
        "image": [image1, image2]
    },
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)
```

Full example: <gh-file:examples/offline_inference/vision_language_multi_image.py>

Multi-image input can be extended to perform video captioning. We show this with [Qwen2-VL](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) as it supports videos:

```python
import base64
import io

import PIL.Image

from vllm import LLM


def encode_image(image: PIL.Image.Image) -> str:
    """Encode a video frame (assumed here to be a PIL image) as a base64 JPEG string."""
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")


# Specify the maximum number of frames per video to be 4. This can be changed.
llm = LLM("Qwen/Qwen2-VL-2B-Instruct", limit_mm_per_prompt={"image": 4})

# Create the request payload.
video_frames = ...  # load your video making sure it only has the number of frames specified earlier.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this set of frames. Consider the frames to be a part of the same video."},
    ],
}
for i in range(len(video_frames)):
    base64_image = encode_image(video_frames[i])  # base64 encoding.
    new_image = {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}}
    message["content"].append(new_image)

# Perform inference and log output.
outputs = llm.chat([message])

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)
```

### Video Inputs

You can pass a list of NumPy arrays directly to the `'video'` field of the multi-modal dictionary
instead of using multi-image input.
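
A minimal sketch of this, reusing Qwen2-VL from the example above; the dummy frames and the prompt string are illustrative only, so check the model's HuggingFace repo for the exact prompt format:

```python
import numpy as np

from vllm import LLM

llm = LLM("Qwen/Qwen2-VL-2B-Instruct")

# Refer to the HuggingFace repo for the correct prompt format; this follows
# Qwen2-VL's video placeholder tokens.
prompt = ("<|im_start|>user\n<|vision_start|><|video_pad|><|vision_end|>"
          "Describe this video.<|im_end|>\n<|im_start|>assistant\n")

# Dummy RGB frames for illustration; in practice, decode a real video
# (e.g. with OpenCV) into arrays of shape (height, width, 3).
video_frames = [np.zeros((360, 640, 3), dtype=np.uint8) for _ in range(4)]

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"video": video_frames},
})

for o in outputs:
    print(o.outputs[0].text)
```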

Full example: <gh-file:examples/offline_inference/vision_language.py>

### Audio Inputs

You can pass a tuple `(array, sampling_rate)` to the `'audio'` field of the multi-modal dictionary.
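
A minimal sketch under these assumptions: it reuses the Ultravox model and the built-in `AudioAsset` helper that appear in the online serving example below, and builds the prompt from the model's own chat template via `transformers`; treat the exact prompt handling as illustrative:

```python
from transformers import AutoTokenizer

from vllm import LLM
from vllm.assets.audio import AudioAsset

model_name = "fixie-ai/ultravox-v0_5-llama-3_2-1b"
llm = LLM(model=model_name)

# Build the prompt with the model's chat template; `<|audio|>` marks where the
# audio is inserted (refer to the HuggingFace repo for the exact format).
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "<|audio|>\nWhat is in this audio?"}],
    tokenize=False,
    add_generation_prompt=True,
)

# Any (array, sampling_rate) pair works; here we use a built-in vLLM audio asset.
audio, sampling_rate = AudioAsset("winning_call").audio_and_sample_rate

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"audio": (audio, sampling_rate)},
})

for o in outputs:
    print(o.outputs[0].text)
```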

Full example: <gh-file:examples/offline_inference/audio_language.py>

### Embedding Inputs

To input pre-computed embeddings belonging to a data type (i.e. image, video, or audio) directly to the language model,
pass a tensor of shape `(num_items, feature_size, hidden_size of LM)` to the corresponding field of the multi-modal dictionary.

```python
import torch

from vllm import LLM

# Inference with image embeddings as input
llm = LLM(model="llava-hf/llava-1.5-7b-hf")

# Refer to the HuggingFace repo for the correct format to use
prompt = "USER: <image>\nWhat is the content of this image?\nASSISTANT:"

# Embeddings for single image
# torch.Tensor of shape (1, image_feature_size, hidden_size of LM)
image_embeds = torch.load(...)

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"image": image_embeds},
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)
```

For Qwen2-VL and MiniCPM-V, we accept additional parameters alongside the embeddings:

```python
# Construct the prompt based on your model
prompt = ...

# Embeddings for multiple images
# torch.Tensor of shape (num_images, image_feature_size, hidden_size of LM)
image_embeds = torch.load(...)

# Qwen2-VL
llm = LLM("Qwen/Qwen2-VL-2B-Instruct", limit_mm_per_prompt={"image": 4})
mm_data = {
    "image": {
        "image_embeds": image_embeds,
        # image_grid_thw is needed to calculate positional encoding.
        "image_grid_thw": torch.load(...),  # torch.Tensor of shape (1, 3),
    }
}

# MiniCPM-V
llm = LLM("openbmb/MiniCPM-V-2_6", trust_remote_code=True, limit_mm_per_prompt={"image": 4})
mm_data = {
    "image": {
        "image_embeds": image_embeds,
        # image_sizes is needed to calculate details of the sliced image.
        "image_sizes": [image.size for image in images],  # list of image sizes
    }
}

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": mm_data,
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)
```

## Online Serving

Our OpenAI-compatible server accepts multi-modal data via the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat).

:::{important}
A chat template is **required** to use the Chat Completions API.
Although most models come with a chat template, for others you have to define one yourself.
The chat template can be inferred based on the documentation on the model's HuggingFace repo.
For example, LLaVA-1.5 (`llava-hf/llava-1.5-7b-hf`) requires a chat template that can be found here: <gh-file:examples/template_llava.jinja>
:::

### Image Inputs

Image input is supported according to [OpenAI Vision API](https://platform.openai.com/docs/guides/vision).
Here is a simple example using Phi-3.5-Vision.

First, launch the OpenAI-compatible server:

```bash
vllm serve microsoft/Phi-3.5-vision-instruct --task generate \
  --trust-remote-code --max-model-len 4096 --limit-mm-per-prompt '{"image":2}'
```

Then, you can use the OpenAI client as follows:

```python
from openai import OpenAI

openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# Single-image input inference
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"

chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            # NOTE: The prompt formatting with the image token `<image>` is not needed
            # since the prompt will be processed automatically by the API server.
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)

# Multi-image input inference
image_url_duck = "https://upload.wikimedia.org/wikipedia/commons/d/da/2015_Kaczka_krzy%C5%BCowka_w_wodzie_%28samiec%29.jpg"
image_url_lion = "https://upload.wikimedia.org/wikipedia/commons/7/77/002_The_lion_king_Snyggve_in_the_Serengeti_National_Park_Photo_by_Giles_Laurent.jpg"

chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What are the animals in these images?"},
            {"type": "image_url", "image_url": {"url": image_url_duck}},
            {"type": "image_url", "image_url": {"url": image_url_lion}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)
```

Full example: <gh-file:examples/online_serving/openai_chat_completion_client_for_multimodal.py>
2024-12-23 17:35:38 -05:00
2025-01-29 03:38:29 +00:00
:::{tip}
2024-12-23 17:35:38 -05:00
Loading from local file paths is also supported in vLLM: you can specify the allowed local media path via `--allowed-local-media-path` when launching the API server/engine,
and pass the file path as `url` in the API request.
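
For instance, a minimal sketch reusing the `client` from above (the `file://` URL form and path are illustrative; the file must live under the directory passed to `--allowed-local-media-path`):

```python
chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            # Placeholder local path; it must be inside --allowed-local-media-path
            {"type": "image_url", "image_url": {"url": "file:///path/to/image.jpg"}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)
```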
:::

:::{tip}
There is no need to place image placeholders in the text content of the API request - they are already represented by the image content.
In fact, you can place images in the middle of the text by interleaving text and image content, as in the sketch below.
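
For example, text and images can be interleaved freely (reusing the `client` and image URLs defined earlier):

```python
chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Compare this image:"},
            {"type": "image_url", "image_url": {"url": image_url_duck}},
            {"type": "text", "text": "with this one:"},
            {"type": "image_url", "image_url": {"url": image_url_lion}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)
```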
:::

:::{note}
By default, the timeout for fetching images through HTTP URL is `5` seconds.
You can override this by setting the environment variable:
```console
export VLLM_IMAGE_FETCH_TIMEOUT=<timeout>
```
:::

### Video Inputs

Instead of `image_url`, you can pass a video file via `video_url`. Here is a simple example using [LLaVA-OneVision](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf).

First, launch the OpenAI-compatible server:

```bash
vllm serve llava-hf/llava-onevision-qwen2-0.5b-ov-hf --task generate --max-model-len 8192
```

Then, you can use the OpenAI client as follows:

```python
from openai import OpenAI

openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"
video_url = "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerFun.mp4"

## Use video url in the payload
chat_completion_from_url = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What's in this video?"
            },
            {
                "type": "video_url",
                "video_url": {
                    "url": video_url
                },
            },
        ],
    }],
    model=model,
    max_completion_tokens=64,
)

result = chat_completion_from_url.choices[0].message.content
print("Chat completion output from video url:", result)
```

Full example: <gh-file:examples/online_serving/openai_chat_completion_client_for_multimodal.py>

:::{note}
By default, the timeout for fetching videos through HTTP URL is `30` seconds.
You can override this by setting the environment variable:
```console
export VLLM_VIDEO_FETCH_TIMEOUT=<timeout>
```
:::
2024-12-23 17:35:38 -05:00
2025-02-28 15:12:04 +08:00
### Audio Inputs
2024-12-23 17:35:38 -05:00
Audio input is supported according to [OpenAI Audio API](https://platform.openai.com/docs/guides/audio?audio-generation-quickstart-example=audio-in).
Here is a simple example using Ultravox-v0.5-1B.

First, launch the OpenAI-compatible server:

```bash
vllm serve fixie-ai/ultravox-v0_5-llama-3_2-1b
```

Then, you can use the OpenAI client as follows:

```python
import base64

import requests
from openai import OpenAI

from vllm.assets.audio import AudioAsset


def encode_base64_content_from_url(content_url: str) -> str:
    """Encode a content retrieved from a remote url to base64 format."""
    with requests.get(content_url) as response:
        response.raise_for_status()
        result = base64.b64encode(response.content).decode('utf-8')
    return result


openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = "fixie-ai/ultravox-v0_5-llama-3_2-1b"

# Any format supported by librosa is supported
audio_url = AudioAsset("winning_call").url
audio_base64 = encode_base64_content_from_url(audio_url)

chat_completion_from_base64 = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What's in this audio?"
            },
            {
                "type": "input_audio",
                "input_audio": {
                    "data": audio_base64,
                    "format": "wav"
                },
            },
        ],
    }],
    model=model,
    max_completion_tokens=64,
)

result = chat_completion_from_base64.choices[0].message.content
print("Chat completion output from input audio:", result)
```

Alternatively, you can pass `audio_url`, which is the audio counterpart of `image_url` for image input:

```python
chat_completion_from_url = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What's in this audio?"
            },
            {
                "type": "audio_url",
                "audio_url": {
                    "url": audio_url
                },
            },
        ],
    }],
    model=model,
    max_completion_tokens=64,
)

result = chat_completion_from_url.choices[0].message.content
print("Chat completion output from audio url:", result)
```

Full example: <gh-file:examples/online_serving/openai_chat_completion_client_for_multimodal.py>

:::{note}
By default, the timeout for fetching audios through HTTP URL is `10` seconds.
You can override this by setting the environment variable:
```console
export VLLM_AUDIO_FETCH_TIMEOUT=<timeout>
```
:::

### Embedding Inputs

To input pre-computed embeddings belonging to a data type (i.e. image, video, or audio) directly to the language model,
pass a tensor of shape `(num_items, feature_size, hidden_size of LM)` to the corresponding field of the multi-modal dictionary.

#### Image Embedding Inputs

For image embeddings, you can pass the base64-encoded tensor to the `image_embeds` field.
The following example demonstrates how to pass image embeddings to the OpenAI server:

```python
import base64
import io

import torch
from openai import OpenAI

openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

image_embedding = torch.load(...)
grid_thw = torch.load(...)  # Required by Qwen/Qwen2-VL-2B-Instruct

buffer = io.BytesIO()
torch.save(image_embedding, buffer)
buffer.seek(0)
binary_data = buffer.read()
base64_image_embedding = base64.b64encode(binary_data).decode('utf-8')

client = OpenAI(
    # defaults to os.environ.get("OPENAI_API_KEY")
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# Basic usage - this is equivalent to the LLaVA example for offline inference
model = "llava-hf/llava-1.5-7b-hf"
embeds = {
    "type": "image_embeds",
    "image_embeds": f"{base64_image_embedding}"
}

# Pass additional parameters (available to Qwen2-VL and MiniCPM-V)
# NOTE: base64_image_grid_thw and base64_image_sizes below are the extra tensors
# (image_grid_thw / image_sizes) serialized and base64-encoded in the same way
# as base64_image_embedding above.
model = "Qwen/Qwen2-VL-2B-Instruct"
embeds = {
    "type": "image_embeds",
    "image_embeds": {
        "image_embeds": f"{base64_image_embedding}",  # Required
        "image_grid_thw": f"{base64_image_grid_thw}"  # Required by Qwen/Qwen2-VL-2B-Instruct
    },
}

model = "openbmb/MiniCPM-V-2_6"
embeds = {
    "type": "image_embeds",
    "image_embeds": {
        "image_embeds": f"{base64_image_embedding}",  # Required
        "image_sizes": f"{base64_image_sizes}"  # Required by openbmb/MiniCPM-V-2_6
    },
}

chat_completion = client.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": [
            {
                "type": "text",
                "text": "What's in this image?",
            },
            embeds,
        ],
        },
    ],
    model=model,
)
```
:::{note}
Only one message can contain `{"type": "image_embeds"}`.
If used with a model that requires additional parameters, you must also provide a tensor for each of them, e.g. `image_grid_thw`, `image_sizes`, etc.
:::