
.. _multi_modality:

Multi-Modality
==============

.. currentmodule:: vllm.multimodal

vLLM provides experimental support for multi-modal models through the :mod:`vllm.multimodal` package.

Multi-modal inputs can be passed alongside text and token prompts to :ref:`supported models <supported_vlms>`
via the ``multi_modal_data`` field in :class:`vllm.inputs.PromptStrictInputs`.
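
For example, an image can be attached to a text prompt roughly as follows. This is a minimal sketch:
the model name, prompt template, and any additional image-related engine arguments are illustrative
assumptions and may differ between models and vLLM versions.

.. code-block:: python

    from PIL import Image

    from vllm import LLM, SamplingParams

    # Model name and prompt template are illustrative; depending on the vLLM
    # version, extra image-related engine arguments may be required here.
    llm = LLM(model="llava-hf/llava-1.5-7b-hf")

    image = Image.open("example.jpg")

    outputs = llm.generate(
        {
            # Text prompt following the model's expected image placeholder format.
            "prompt": "USER: <image>\nWhat is shown in this picture?\nASSISTANT:",
            # The "image" key corresponds to the built-in image modality
            # (see MultiModalDataBuiltins below).
            "multi_modal_data": {"image": image},
        },
        sampling_params=SamplingParams(max_tokens=64),
    )

    print(outputs[0].outputs[0].text)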

Currently, vLLM only has built-in support for image data. You can extend vLLM to process additional modalities
by following :ref:`this guide <adding_multimodal_plugin>`.

Looking to add your own multi-modal model? Please follow the instructions listed :ref:`here <enabling_multimodal_inputs>`.
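
To give a rough idea of what extending vLLM with a new modality involves, below is a hypothetical sketch
of a custom plugin. The method names and the registry call are assumptions based on
:class:`~vllm.multimodal.MultiModalPlugin` and :data:`~vllm.multimodal.MULTIMODAL_REGISTRY`; refer to
:ref:`the plugin guide <adding_multimodal_plugin>` for the authoritative steps.

.. code-block:: python

    from vllm.multimodal import (MULTIMODAL_REGISTRY, MultiModalInputs,
                                 MultiModalPlugin)


    class AudioPlugin(MultiModalPlugin):
        """Hypothetical plugin for an audio modality (illustration only)."""

        def get_data_key(self) -> str:
            # The key under which this modality appears in ``multi_modal_data``.
            return "audio"

        def _default_input_mapper(self, ctx, data) -> MultiModalInputs:
            # Convert the raw audio object into the keyword arguments (tensors)
            # consumed by the model's forward pass.
            raise NotImplementedError("Model-specific preprocessing goes here")


    # Register the plugin so that models can accept the new modality.
    MULTIMODAL_REGISTRY.register_plugin(AudioPlugin())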

Guides
++++++

.. toctree::
    :maxdepth: 1

    adding_multimodal_plugin

Module Contents
+++++++++++++++

.. automodule:: vllm.multimodal

Registry
--------

.. autodata:: vllm.multimodal.MULTIMODAL_REGISTRY

.. autoclass:: vllm.multimodal.MultiModalRegistry
    :members:
    :show-inheritance:

Base Classes
------------

.. autodata:: vllm.multimodal.BatchedTensors

.. autoclass:: vllm.multimodal.MultiModalDataBuiltins
    :members:
    :show-inheritance:

.. autodata:: vllm.multimodal.MultiModalDataDict

.. autoclass:: vllm.multimodal.MultiModalInputs
    :members:
    :show-inheritance:

.. autoclass:: vllm.multimodal.MultiModalPlugin
    :members:
    :show-inheritance:

Image Classes
-------------

.. automodule:: vllm.multimodal.image
    :members:
    :show-inheritance: