(multi-modality)=

# Multi-Modality

vLLM provides experimental support for multi-modal models through the {mod}`vllm.multimodal` package.

Multi-modal inputs can be passed alongside text and token prompts to supported models via the `multi_modal_data` field in {class}`vllm.inputs.PromptType`.
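For example, an image can be attached to a text prompt by populating `multi_modal_data`. The following is a minimal sketch, assuming a LLaVA-1.5 checkpoint (`llava-hf/llava-1.5-7b-hf`) and a local file `example.jpg`; the exact prompt format (here, LLaVA's `<image>` placeholder) varies by model.

```python
# Minimal sketch: passing an image via `multi_modal_data`.
# Assumes the model `llava-hf/llava-1.5-7b-hf` and a local file
# `example.jpg` are available; adjust both for your setup.
from PIL import Image

from vllm import LLM

llm = LLM(model="llava-hf/llava-1.5-7b-hf")

image = Image.open("example.jpg")
outputs = llm.generate({
    "prompt": "USER: <image>\nWhat is in this image? ASSISTANT:",
    "multi_modal_data": {"image": image},
})

for o in outputs:
    print(o.outputs[0].text)
```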

Looking to add your own multi-modal model? Please follow the instructions listed here.

## Module Contents

```{eval-rst}
.. autodata:: vllm.multimodal.MULTIMODAL_REGISTRY
```

## Submodules

```{toctree}
:maxdepth: 1

inputs
parse
processing
profiling
registry
```