(multi-modality)=

# Multi-Modality
vLLM provides experimental support for multi-modal models through the {mod}`vllm.multimodal` package.

Multi-modal inputs can be passed alongside text and token prompts to [supported models](#supported-mm-models)
via the `multi_modal_data` field in {class}`vllm.inputs.PromptType`.

Looking to add your own multi-modal model? Please follow the instructions listed [here](#supports-multimodal).
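Concretely, a multi-modal prompt is an ordinary text prompt plus a `multi_modal_data` dictionary whose keys name the modalities being passed. The sketch below shows the shape of such a prompt; the chat template and the use of a NumPy array as stand-in image data are illustrative assumptions, not fixed by this page:

```python
import numpy as np

# Stand-in image data; in practice this is typically a PIL image.
# (Illustrative assumption -- the accepted types depend on the model.)
image = np.zeros((224, 224, 3), dtype=np.uint8)

# A prompt carrying multi-modal data: the key under ``multi_modal_data``
# ("image" here) selects the modality the data is interpreted as.
prompt = {
    "prompt": "USER: <image>\nWhat is in this image?\nASSISTANT:",
    "multi_modal_data": {"image": image},
}

# The dict would then be passed to ``LLM.generate(prompt)``.
```

Only models with multi-modal support accept such prompts; see the supported-models list linked above.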
|
2024-07-02 00:57:09 -07:00
|
|
|
|
2024-12-23 17:35:38 -05:00
|
|
|
## Module Contents
|
2024-06-03 13:56:41 +08:00
|
|
|
|
2024-12-23 17:35:38 -05:00
|
|
|
```{eval-rst}
|
2024-06-28 20:09:56 +08:00
|
|
|
.. autodata:: vllm.multimodal.MULTIMODAL_REGISTRY
|
2024-12-23 17:35:38 -05:00
|
|
|
```
## Submodules

:::{toctree}
:maxdepth: 1

inputs
parse
processing
profiling
registry
:::