[Doc][3/N] Reorganize Serving section (#11766)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Cyrus Leung 2025-01-07 11:20:01 +08:00 committed by GitHub
parent d93d2d74fd
commit 8ceffbf315
40 changed files with 248 additions and 133 deletions


@ -77,7 +77,7 @@ pip install vllm
Visit our [documentation](https://vllm.readthedocs.io/en/latest/) to learn more.
- [Installation](https://vllm.readthedocs.io/en/latest/getting_started/installation.html)
- [Quickstart](https://vllm.readthedocs.io/en/latest/getting_started/quickstart.html)
- [Supported Models](https://vllm.readthedocs.io/en/latest/models/supported_models.html)
- [List of Supported Models](https://vllm.readthedocs.io/en/latest/models/supported_models.html)
## Contributing


(Binary image file: 968 KiB before, 968 KiB after.)


@ -1,7 +1,7 @@
# Dockerfile
We provide a <gh-file:Dockerfile> to construct the image for running an OpenAI compatible server with vLLM.
More information about deploying with Docker can be found [here](../../serving/deploying_with_docker.md).
More information about deploying with Docker can be found [here](#deployment-docker).
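For reference, a hedged sketch of building the image from the repository root (assuming `vllm-openai` is the final serving stage of the Dockerfile):

```console
$ DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag vllm/vllm-openai
```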
Below is a visual representation of the multi-stage Dockerfile. The build graph contains the following nodes:


@ -3,7 +3,7 @@
# Model Registration
vLLM relies on a model registry to determine how to run each model.
A list of pre-registered architectures can be found on the [Supported Models](#supported-models) page.
A list of pre-registered architectures can be found [here](#supported-models).
If your model is not on this list, you must register it to vLLM.
This page provides detailed instructions on how to do so.
@ -16,7 +16,7 @@ This gives you the ability to modify the codebase and test your model.
After you have implemented your model (see [tutorial](#new-model-basic)), put it into the <gh-dir:vllm/model_executor/models> directory.
Then, add your model class to `_VLLM_MODELS` in <gh-file:vllm/model_executor/models/registry.py> so that it is automatically registered upon importing vLLM.
You should also include an example HuggingFace repository for this model in <gh-file:tests/models/registry.py> to run the unit tests.
Finally, update the [Supported Models](#supported-models) documentation page to promote your model!
Finally, update our [list of supported models](#supported-models) to promote your model!
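For illustration, a registry entry maps an architecture name to the module and class that implement it (a hypothetical sketch; follow the existing entries in <gh-file:vllm/model_executor/models/registry.py> for the exact format):

```python
# Hypothetical entry in vllm/model_executor/models/registry.py
_VLLM_MODELS = {
    # ... existing entries ...
    "MyLlamaForCausalLM": ("my_llama", "MyLlamaForCausalLM"),  # (module name, class name)
}
```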
```{important}
The list of models in each section should be maintained in alphabetical order.


@ -1,6 +1,6 @@
(deploying-with-docker)=
(deployment-docker)=
# Deploying with Docker
# Using Docker
## Use vLLM's Official Docker Image
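For example, a representative invocation (a sketch; the model is illustrative, and gated models may additionally require passing a HuggingFace token via `--env`):

```console
$ docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model mistralai/Mistral-7B-v0.1
```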


@ -1,6 +1,6 @@
(deploying-with-bentoml)=
(deployment-bentoml)=
# Deploying with BentoML
# BentoML
[BentoML](https://github.com/bentoml/BentoML) allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-compliant image and deploy it on Kubernetes.


@ -1,6 +1,6 @@
(deploying-with-cerebrium)=
(deployment-cerebrium)=
# Deploying with Cerebrium
# Cerebrium
```{raw} html
<p align="center">


@ -1,6 +1,6 @@
(deploying-with-dstack)=
(deployment-dstack)=
# Deploying with dstack
# dstack
```{raw} html
<p align="center">


@ -1,6 +1,6 @@
(deploying-with-helm)=
(deployment-helm)=
# Deploying with Helm
# Helm
A Helm chart to deploy vLLM for Kubernetes
@ -38,7 +38,7 @@ chart **including persistent volumes** and deletes the release.
## Architecture
```{image} architecture_helm_deployment.png
```{image} /assets/deployment/architecture_helm_deployment.png
```
## Values


@ -0,0 +1,13 @@
# Using other frameworks
```{toctree}
:maxdepth: 1
bentoml
cerebrium
dstack
helm
lws
skypilot
triton
```


@ -1,6 +1,6 @@
(deploying-with-lws)=
(deployment-lws)=
# Deploying with LWS
# LWS
LeaderWorkerSet (LWS) is a Kubernetes API that aims to address common deployment patterns of AI/ML inference workloads.
A major use case is for multi-host/multi-node distributed inference.


@ -1,6 +1,6 @@
(on-cloud)=
(deployment-skypilot)=
# Deploying and scaling up with SkyPilot
# SkyPilot
```{raw} html
<p align="center">
@ -12,9 +12,9 @@ vLLM can be **run and scaled to multiple service replicas on clouds and Kubernet
## Prerequisites
- Go to the [HuggingFace model page](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and request access to the model {code}`meta-llama/Meta-Llama-3-8B-Instruct`.
- Go to the [HuggingFace model page](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and request access to the model `meta-llama/Meta-Llama-3-8B-Instruct`.
- Check that you have installed SkyPilot ([docs](https://skypilot.readthedocs.io/en/latest/getting-started/installation.html)).
- Check that {code}`sky check` shows clouds or Kubernetes are enabled.
- Check that `sky check` shows clouds or Kubernetes are enabled.
```console
pip install skypilot-nightly


@ -1,5 +1,5 @@
(deploying-with-triton)=
(deployment-triton)=
# Deploying with NVIDIA Triton
# NVIDIA Triton
The [Triton Inference Server](https://github.com/triton-inference-server) hosts a tutorial demonstrating how to quickly deploy a simple [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) model using vLLM. Please see [Deploying a vLLM model in Triton](https://github.com/triton-inference-server/tutorials/blob/main/Quick_Deploy/vLLM/README.md#deploying-a-vllm-model-in-triton) for more details.


@ -0,0 +1,9 @@
# External Integrations
```{toctree}
:maxdepth: 1
kserve
kubeai
llamastack
```


@ -1,6 +1,6 @@
(deploying-with-kserve)=
(deployment-kserve)=
# Deploying with KServe
# KServe
vLLM can be deployed with [KServe](https://github.com/kserve/kserve) on Kubernetes for highly scalable distributed model serving.


@ -1,6 +1,6 @@
(deploying-with-kubeai)=
(deployment-kubeai)=
# Deploying with KubeAI
# KubeAI
[KubeAI](https://github.com/substratusai/kubeai) is a Kubernetes operator that enables you to deploy and manage AI models on Kubernetes. It provides a simple and scalable way to deploy vLLM in production. Functionality such as scale-from-zero, load based autoscaling, model caching, and much more is provided out of the box with zero external dependencies.


@ -1,6 +1,6 @@
(run-on-llamastack)=
(deployment-llamastack)=
# Serving with Llama Stack
# Llama Stack
vLLM is also available via [Llama Stack](https://github.com/meta-llama/llama-stack).


@ -1,6 +1,6 @@
(deploying-with-k8s)=
(deployment-k8s)=
# Deploying with Kubernetes
# Using Kubernetes
Using Kubernetes to deploy vLLM is a scalable and efficient way to serve machine learning models. This guide will walk you through the process of deploying vLLM with Kubernetes, including the necessary prerequisites, steps for deployment, and testing.


@ -1,6 +1,6 @@
(nginxloadbalancer)=
# Deploying with Nginx Loadbalancer
# Using Nginx
This document shows how to launch multiple vLLM serving containers and use Nginx to act as a load balancer between the servers.


@ -57,7 +57,7 @@ More API details can be found in the {doc}`Offline Inference
The code for the `LLM` class can be found in <gh-file:vllm/entrypoints/llm.py>.
### OpenAI-compatible API server
### OpenAI-Compatible API Server
The second primary interface to vLLM is via its OpenAI-compatible API server.
This server can be started using the `vllm serve` command.
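For example (the model name is illustrative):

```console
$ vllm serve facebook/opt-125m
```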


@ -1,8 +1,12 @@
(disagg-prefill)=
# Disaggregated prefilling (experimental)
# Disaggregated Prefilling (experimental)
This page introduces you to the disaggregated prefilling feature in vLLM. This feature is experimental and subject to change.
This page introduces you to the disaggregated prefilling feature in vLLM.
```{note}
This feature is experimental and subject to change.
```
## Why disaggregated prefilling?


@ -1,6 +1,6 @@
(spec-decode)=
# Speculative decoding
# Speculative Decoding
```{warning}
Please note that speculative decoding in vLLM is not yet optimized and does


@ -148,7 +148,7 @@ $ export PYTORCH_ROCM_ARCH="gfx90a;gfx942"
$ python3 setup.py develop
```
This may take 5-10 minutes. Currently, {code}`pip install .` does not work for ROCm installation.
This may take 5-10 minutes. Currently, `pip install .` does not work for ROCm installation.
```{tip}
- Triton flash attention is used by default. For benchmarking purposes, it is recommended to run a warm up step before collecting perf numbers.


@ -82,7 +82,7 @@ $ python setup.py develop
## Supported Features
- [Offline batched inference](#offline-batched-inference)
- [Offline inference](#offline-inference)
- Online inference via [OpenAI-Compatible Server](#openai-compatible-server)
- HPU autodetection - no need to manually select device within vLLM
- Paged KV cache with algorithms enabled for Intel Gaudi accelerators


@ -2,20 +2,20 @@
# Quickstart
This guide will help you quickly get started with vLLM to:
This guide will help you quickly get started with vLLM to perform:
- [Run offline batched inference](#offline-batched-inference)
- [Run OpenAI-compatible inference](#openai-compatible-server)
- [Offline batched inference](#quickstart-offline)
- [Online inference using OpenAI-compatible server](#quickstart-online)
## Prerequisites
- OS: Linux
- Python: 3.9 -- 3.12
- GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, H100, etc.)
## Installation
You can install vLLM using pip. It's recommended to use [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to create and manage Python environments.
If you are using NVIDIA GPUs, you can install vLLM using [pip](https://pypi.org/project/vllm/) directly.
It's recommended to use [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to create and manage Python environments.
```console
$ conda create -n myenv python=3.10 -y
@ -23,9 +23,11 @@ $ conda activate myenv
$ pip install vllm
```
Please refer to the [installation documentation](#installation-index) for more details on installing vLLM.
```{note}
For non-CUDA platforms, please refer [here](#installation-index) for specific instructions on how to install vLLM.
```
(offline-batched-inference)=
(quickstart-offline)=
## Offline Batched Inference
@ -73,7 +75,7 @@ for output in outputs:
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
(openai-compatible-server)=
(quickstart-online)=
## OpenAI-Compatible Server


@ -65,32 +65,14 @@ getting_started/troubleshooting
getting_started/faq
```
```{toctree}
:caption: Serving
:maxdepth: 1
serving/openai_compatible_server
serving/deploying_with_docker
serving/deploying_with_k8s
serving/deploying_with_helm
serving/deploying_with_nginx
serving/distributed_serving
serving/metrics
serving/integrations
serving/tensorizer
serving/runai_model_streamer
serving/engine_args
serving/env_vars
serving/usage_stats
```
```{toctree}
:caption: Models
:maxdepth: 1
models/supported_models
models/generative_models
models/pooling_models
models/supported_models
models/extensions/index
```
```{toctree}
@ -99,7 +81,6 @@ models/pooling_models
features/quantization/index
features/lora
features/multimodal_inputs
features/tool_calling
features/structured_outputs
features/automatic_prefix_caching
@ -108,6 +89,32 @@ features/spec_decode
features/compatibility_matrix
```
```{toctree}
:caption: Inference and Serving
:maxdepth: 1
serving/offline_inference
serving/openai_compatible_server
serving/multimodal_inputs
serving/distributed_serving
serving/metrics
serving/engine_args
serving/env_vars
serving/usage_stats
serving/integrations/index
```
```{toctree}
:caption: Deployment
:maxdepth: 1
deployment/docker
deployment/k8s
deployment/nginx
deployment/frameworks/index
deployment/integrations/index
```
```{toctree}
:caption: Performance
:maxdepth: 1


@ -0,0 +1,8 @@
# Built-in Extensions
```{toctree}
:maxdepth: 1
runai_model_streamer
tensorizer
```


@ -1,6 +1,6 @@
(runai-model-streamer)=
# Loading Models with Run:ai Model Streamer
# Loading models with Run:ai Model Streamer
Run:ai Model Streamer is a library for reading tensors concurrently while streaming them to GPU memory.
Further reading can be found in [Run:ai Model Streamer Documentation](https://github.com/run-ai/runai-model-streamer/blob/master/docs/README.md).


@ -1,6 +1,6 @@
(tensorizer)=
# Loading Models with CoreWeave's Tensorizer
# Loading models with CoreWeave's Tensorizer
vLLM supports loading models with [CoreWeave's Tensorizer](https://docs.coreweave.com/coreweave-machine-learning-and-ai/inference/tensorizer).
vLLM model tensors that have been serialized to disk, an HTTP/HTTPS endpoint, or S3 endpoint can be deserialized


@ -1,9 +1,9 @@
(supported-models)=
# Supported Models
# List of Supported Models
vLLM supports generative and pooling models across various tasks.
If a model supports more than one task, you can set the task via the {code}`--task` argument.
If a model supports more than one task, you can set the task via the `--task` argument.
For each task, we list the model architectures that have been implemented in vLLM.
Alongside each architecture, we include some popular models that use it.
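For example, a hedged sketch of serving a model as an embedding model rather than a generative one (the model name is illustrative):

```console
$ vllm serve intfloat/e5-mistral-7b-instruct --task embed
```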
@ -14,8 +14,8 @@ Alongside each architecture, we include some popular models that use it.
By default, vLLM loads models from [HuggingFace (HF) Hub](https://huggingface.co/models).
To determine whether a given model is supported, you can check the {code}`config.json` file inside the HF repository.
If the {code}`"architectures"` field contains a model architecture listed below, then it should be supported in theory.
To determine whether a given model is supported, you can check the `config.json` file inside the HF repository.
If the `"architectures"` field contains a model architecture listed below, then it should be supported in theory.
````{tip}
The easiest way to check if your model is really supported at runtime is to run the program below:
@ -48,7 +48,7 @@ To use models from [ModelScope](https://www.modelscope.cn) instead of HuggingFac
$ export VLLM_USE_MODELSCOPE=True
```
And use with {code}`trust_remote_code=True`.
And use with `trust_remote_code=True`.
```python
from vllm import LLM
@ -420,15 +420,15 @@ you should explicitly specify the task type to ensure that the model is used in
```
```{note}
{code}`ssmits/Qwen2-7B-Instruct-embed-base` has an improperly defined Sentence Transformers config.
You should manually set mean pooling by passing {code}`--override-pooler-config '{"pooling_type": "MEAN"}'`.
`ssmits/Qwen2-7B-Instruct-embed-base` has an improperly defined Sentence Transformers config.
You should manually set mean pooling by passing `--override-pooler-config '{"pooling_type": "MEAN"}'`.
```
```{note}
Unlike base Qwen2, {code}`Alibaba-NLP/gte-Qwen2-7B-instruct` uses bi-directional attention.
You can set {code}`--hf-overrides '{"is_causal": false}'` to change the attention mask accordingly.
Unlike base Qwen2, `Alibaba-NLP/gte-Qwen2-7B-instruct` uses bi-directional attention.
You can set `--hf-overrides '{"is_causal": false}'` to change the attention mask accordingly.
On the other hand, its 1.5B variant ({code}`Alibaba-NLP/gte-Qwen2-1.5B-instruct`) uses causal attention
On the other hand, its 1.5B variant (`Alibaba-NLP/gte-Qwen2-1.5B-instruct`) uses causal attention
despite being described otherwise on its model card.
```
@ -468,8 +468,8 @@ If your model is not in the above list, we will try to automatically convert the
{func}`vllm.model_executor.models.adapters.as_reward_model`. By default, we return the hidden states of each token directly.
```{important}
For process-supervised reward models such as {code}`peiyi9979/math-shepherd-mistral-7b-prm`, the pooling config should be set explicitly,
e.g.: {code}`--override-pooler-config '{"pooling_type": "STEP", "step_tag_id": 123, "returned_token_ids": [456, 789]}'`.
For process-supervised reward models such as `peiyi9979/math-shepherd-mistral-7b-prm`, the pooling config should be set explicitly,
e.g.: `--override-pooler-config '{"pooling_type": "STEP", "step_tag_id": 123, "returned_token_ids": [456, 789]}'`.
```
#### Classification (`--task classify`)
@ -537,13 +537,13 @@ The following modalities are supported depending on the model:
- **V**ideo
- **A**udio
Any combination of modalities joined by {code}`+` are supported.
Any combination of modalities joined by `+` are supported.
- e.g.: {code}`T + I` means that the model supports text-only, image-only, and text-with-image inputs.
- e.g.: `T + I` means that the model supports text-only, image-only, and text-with-image inputs.
On the other hand, modalities separated by {code}`/` are mutually exclusive.
On the other hand, modalities separated by `/` are mutually exclusive.
- e.g.: {code}`T / I` means that the model supports text-only and image-only inputs, but not text-with-image inputs.
- e.g.: `T / I` means that the model supports text-only and image-only inputs, but not text-with-image inputs.
See [this page](#multimodal-inputs) on how to pass multi-modal inputs to the model.
@ -731,8 +731,8 @@ See [this page](#generative-models) for more information on how to use generativ
<sup>+</sup> Multiple items can be inputted per text prompt for this modality.
````{important}
To enable multiple multi-modal items per text prompt, you have to set {code}`limit_mm_per_prompt` (offline inference)
or {code}`--limit-mm-per-prompt` (online inference). For example, to enable passing up to 4 images per text prompt:
To enable multiple multi-modal items per text prompt, you have to set `limit_mm_per_prompt` (offline inference)
or `--limit-mm-per-prompt` (online inference). For example, to enable passing up to 4 images per text prompt:
```python
llm = LLM(
@ -751,11 +751,11 @@ vLLM currently only supports adding LoRA to the language backbone of multimodal
```
```{note}
To use {code}`TIGER-Lab/Mantis-8B-siglip-llama3`, you have to pass {code}`--hf_overrides '{"architectures": ["MantisForConditionalGeneration"]}'` when running vLLM.
To use `TIGER-Lab/Mantis-8B-siglip-llama3`, you have to pass `--hf_overrides '{"architectures": ["MantisForConditionalGeneration"]}'` when running vLLM.
```
```{note}
The official {code}`openbmb/MiniCPM-V-2` doesn't work yet, so we need to use a fork ({code}`HwwwH/MiniCPM-V-2`) for now.
The official `openbmb/MiniCPM-V-2` doesn't work yet, so we need to use a fork (`HwwwH/MiniCPM-V-2`) for now.
For more details, please see: <gh-pr:4087#issuecomment-2250397630>
```
@ -770,7 +770,7 @@ you should explicitly specify the task type to ensure that the model is used in
#### Text Embedding (`--task embed`)
Any text generation model can be converted into an embedding model by passing {code}`--task embed`.
Any text generation model can be converted into an embedding model by passing `--task embed`.
```{note}
To get the best results, you should use pooling models that are specifically trained as such.
@ -818,7 +818,7 @@ At vLLM, we are committed to facilitating the integration and support of third-p
2. **Best-Effort Consistency**: While we aim to maintain a level of consistency between the models implemented in vLLM and other frameworks like transformers, complete alignment is not always feasible. Factors like acceleration techniques and the use of low-precision computations can introduce discrepancies. Our commitment is to ensure that the implemented models are functional and produce sensible results.
```{tip}
When comparing the output of {code}`model.generate` from HuggingFace Transformers with the output of {code}`llm.generate` from vLLM, note that the former reads the model's generation config file (i.e., [generation_config.json](https://github.com/huggingface/transformers/blob/19dabe96362803fb0a9ae7073d03533966598b17/src/transformers/generation/utils.py#L1945)) and applies the default parameters for generation, while the latter only uses the parameters passed to the function. Ensure all sampling parameters are identical when comparing outputs.
When comparing the output of `model.generate` from HuggingFace Transformers with the output of `llm.generate` from vLLM, note that the former reads the model's generation config file (i.e., [generation_config.json](https://github.com/huggingface/transformers/blob/19dabe96362803fb0a9ae7073d03533966598b17/src/transformers/generation/utils.py#L1945)) and applies the default parameters for generation, while the latter only uses the parameters passed to the function. Ensure all sampling parameters are identical when comparing outputs.
```
3. **Issue Resolution and Model Updates**: Users are encouraged to report any bugs or issues they encounter with third-party models. Proposed fixes should be submitted via PRs, with a clear explanation of the problem and the rationale behind the proposed solution. If a fix for one model impacts another, we rely on the community to highlight and address these cross-model dependencies. Note: for bugfix PRs, it is good etiquette to inform the original author to seek their feedback.


@ -18,13 +18,13 @@ After adding enough GPUs and nodes to hold the model, you can run vLLM first, wh
There is one edge case: if the model fits in a single node with multiple GPUs, but the number of GPUs cannot divide the model size evenly, you can use pipeline parallelism, which splits the model along layers and supports uneven splits. In this case, the tensor parallel size should be 1 and the pipeline parallel size should be the number of GPUs.
```
## Details for Distributed Inference and Serving
## Running vLLM on a single node
vLLM supports distributed tensor-parallel and pipeline-parallel inference and serving. Currently, we support [Megatron-LM's tensor parallel algorithm](https://arxiv.org/pdf/1909.08053.pdf). We manage the distributed runtime with either [Ray](https://github.com/ray-project/ray) or Python native multiprocessing. Multiprocessing can be used when deploying on a single node; multi-node inferencing currently requires Ray.
Multiprocessing will be used by default when not running in a Ray placement group and if there are sufficient GPUs available on the same node for the configured {code}`tensor_parallel_size`, otherwise Ray will be used. This default can be overridden via the {code}`LLM` class {code}`distributed_executor_backend` argument or {code}`--distributed-executor-backend` API server argument. Set it to {code}`mp` for multiprocessing or {code}`ray` for Ray. It's not required for Ray to be installed for the multiprocessing case.
Multiprocessing will be used by default when not running in a Ray placement group and if there are sufficient GPUs available on the same node for the configured `tensor_parallel_size`, otherwise Ray will be used. This default can be overridden via the `LLM` class `distributed_executor_backend` argument or `--distributed-executor-backend` API server argument. Set it to `mp` for multiprocessing or `ray` for Ray. It's not required for Ray to be installed for the multiprocessing case.
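For illustration, a hedged sketch of selecting the multiprocessing backend from the `LLM` class (the model name is illustrative):

```python
from vllm import LLM

llm = LLM("facebook/opt-13b",
          tensor_parallel_size=2,
          distributed_executor_backend="mp")
```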
To run multi-GPU inference with the {code}`LLM` class, set the {code}`tensor_parallel_size` argument to the number of GPUs you want to use. For example, to run inference on 4 GPUs:
To run multi-GPU inference with the `LLM` class, set the `tensor_parallel_size` argument to the number of GPUs you want to use. For example, to run inference on 4 GPUs:
```python
from vllm import LLM
@ -32,14 +32,14 @@ llm = LLM("facebook/opt-13b", tensor_parallel_size=4)
output = llm.generate("San Francisco is a")
```
To run multi-GPU serving, pass in the {code}`--tensor-parallel-size` argument when starting the server. For example, to run API server on 4 GPUs:
To run multi-GPU serving, pass in the `--tensor-parallel-size` argument when starting the server. For example, to run API server on 4 GPUs:
```console
$ vllm serve facebook/opt-13b \
$ --tensor-parallel-size 4
```
You can also additionally specify {code}`--pipeline-parallel-size` to enable pipeline parallelism. For example, to run API server on 8 GPUs with pipeline parallelism and tensor parallelism:
You can also additionally specify `--pipeline-parallel-size` to enable pipeline parallelism. For example, to run API server on 8 GPUs with pipeline parallelism and tensor parallelism:
```console
$ vllm serve gpt2 \
@ -47,7 +47,7 @@ $ --tensor-parallel-size 4 \
$ --pipeline-parallel-size 2
```
## Multi-Node Inference and Serving
## Running vLLM on multiple nodes
If a single node does not have enough GPUs to hold the model, you can run the model using multiple nodes. It is important to make sure the execution environment is the same on all nodes, including the model path and the Python environment. The recommended way is to use Docker images to ensure the same environment, and to hide the heterogeneity of the host machines by mapping them into the same Docker configuration.


@ -1,17 +0,0 @@
# Integrations
```{toctree}
:maxdepth: 1
run_on_sky
deploying_with_kserve
deploying_with_kubeai
deploying_with_triton
deploying_with_bentoml
deploying_with_cerebrium
deploying_with_lws
deploying_with_dstack
serving_with_langchain
serving_with_llamaindex
serving_with_llamastack
```


@ -0,0 +1,8 @@
# External Integrations
```{toctree}
:maxdepth: 1
langchain
llamaindex
```


@ -1,10 +1,10 @@
(run-on-langchain)=
(serving-langchain)=
# Serving with Langchain
# LangChain
vLLM is also available via [Langchain](https://github.com/langchain-ai/langchain).
vLLM is also available via [LangChain](https://github.com/langchain-ai/langchain).
To install langchain, run
To install LangChain, run
```console
$ pip install langchain langchain_community -q


@ -1,10 +1,10 @@
(run-on-llamaindex)=
(serving-llamaindex)=
# Serving with llama_index
# LlamaIndex
vLLM is also available via [llama_index](https://github.com/run-llama/llama_index).
vLLM is also available via [LlamaIndex](https://github.com/run-llama/llama_index).
To install llamaindex, run
To install LlamaIndex, run
```console
$ pip install llama-index-llms-vllm -q


@ -4,7 +4,7 @@ vLLM exposes a number of metrics that can be used to monitor the health of the
system. These metrics are exposed via the `/metrics` endpoint on the vLLM
OpenAI compatible API server.
You can start the server using Python, or using [Docker](deploying_with_docker.md):
You can start the server using Python, or using [Docker](#deployment-docker):
```console
$ vllm serve unsloth/Llama-3.2-1B-Instruct
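$ # Once the server is running, the exposed metrics can be fetched from the
$ # /metrics endpoint (a sketch assuming the default port 8000):
$ curl http://localhost:8000/metrics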


@ -18,7 +18,7 @@ To input multi-modal data, follow this schema in {class}`vllm.inputs.PromptType`
### Image
You can pass a single image to the {code}`'image'` field of the multi-modal dictionary, as shown in the following examples:
You can pass a single image to the `'image'` field of the multi-modal dictionary, as shown in the following examples:
```python
llm = LLM(model="llava-hf/llava-1.5-7b-hf")
@ -122,21 +122,21 @@ for o in outputs:
### Video
You can pass a list of NumPy arrays directly to the {code}`'video'` field of the multi-modal dictionary
You can pass a list of NumPy arrays directly to the `'video'` field of the multi-modal dictionary
instead of using multi-image input.
Full example: <gh-file:examples/offline_inference_vision_language.py>
### Audio
You can pass a tuple {code}`(array, sampling_rate)` to the {code}`'audio'` field of the multi-modal dictionary.
You can pass a tuple `(array, sampling_rate)` to the `'audio'` field of the multi-modal dictionary.
Full example: <gh-file:examples/offline_inference_audio_language.py>
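A minimal sketch of this pattern (the model name and prompt are illustrative; the audio placeholder token is model-specific, so consult the linked example and the model card for the exact prompt format):

```python
import librosa
from vllm import LLM

llm = LLM(model="fixie-ai/ultravox-v0_3")  # illustrative audio-capable model

# The 'audio' field expects an (array, sampling_rate) tuple
audio, sampling_rate = librosa.load("speech.wav", sr=None)

outputs = llm.generate({
    # Placeholder prompt; replace with the model's documented audio prompt template
    "prompt": "<|audio|>\nWhat is being said in this clip?",
    "multi_modal_data": {"audio": (audio, sampling_rate)},
})
print(outputs[0].outputs[0].text)
```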
### Embedding
To input pre-computed embeddings belonging to a data type (i.e. image, video, or audio) directly to the language model,
pass a tensor of shape {code}`(num_items, feature_size, hidden_size of LM)` to the corresponding field of the multi-modal dictionary.
pass a tensor of shape `(num_items, feature_size, hidden_size of LM)` to the corresponding field of the multi-modal dictionary.
```python
# Inference with image embeddings as input
@ -294,7 +294,7 @@ $ export VLLM_IMAGE_FETCH_TIMEOUT=<timeout>
### Video
Instead of {code}`image_url`, you can pass a video file via {code}`video_url`. Here is a simple example using [LLaVA-OneVision](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf).
Instead of `image_url`, you can pass a video file via `video_url`. Here is a simple example using [LLaVA-OneVision](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf).
First, launch the OpenAI-compatible server:
@ -418,7 +418,7 @@ result = chat_completion_from_base64.choices[0].message.content
print("Chat completion output from input audio:", result)
```
Alternatively, you can pass {code}`audio_url`, which is the audio counterpart of {code}`image_url` for image input:
Alternatively, you can pass `audio_url`, which is the audio counterpart of `image_url` for image input:
```python
chat_completion_from_url = client.chat.completions.create(


@ -0,0 +1,79 @@
(offline-inference)=
# Offline Inference
You can run vLLM in your own code on a list of prompts.
The offline API is based on the {class}`~vllm.LLM` class.
To initialize the vLLM engine, create a new instance of `LLM` and specify the model to run.
For example, the following code downloads the [`facebook/opt-125m`](https://huggingface.co/facebook/opt-125m) model from HuggingFace
and runs it in vLLM using the default configuration.
```python
from vllm import LLM

llm = LLM(model="facebook/opt-125m")
```
After initializing the `LLM` instance, you can perform model inference using various APIs.
The available APIs depend on the type of model that is being run:
- [Generative models](#generative-models) output logprobs which are sampled from to obtain the final output text.
- [Pooling models](#pooling-models) output their hidden states directly.
Please refer to the above pages for more details about each API.
```{seealso}
[API Reference](/dev/offline_inference/offline_index)
```
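For instance, a minimal generation sketch using the `llm` instance created above (the prompt and sampling values are illustrative):

```python
from vllm import SamplingParams

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")
```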
## Configuration Options
This section lists the most common options for running the vLLM engine.
For a full list, refer to the [Engine Arguments](#engine-args) page.
### Reducing memory usage
Large models might cause your machine to run out of memory (OOM). Here are some options that help alleviate this problem.
#### Tensor Parallelism (TP)
Tensor parallelism (`tensor_parallel_size` option) can be used to split the model across multiple GPUs.
The following code splits the model across 2 GPUs.
```python
llm = LLM(model="ibm-granite/granite-3.1-8b-instruct",
tensor_parallel_size=2)
```
```{important}
To ensure that vLLM initializes CUDA correctly, you should avoid calling related functions (e.g. {func}`torch.cuda.set_device`)
before initializing vLLM. Otherwise, you may run into an error like `RuntimeError: Cannot re-initialize CUDA in forked subprocess`.
To control which devices are used, please instead set the `CUDA_VISIBLE_DEVICES` environment variable.
```
#### Quantization
Quantized models take less memory at the cost of lower precision.
Statically quantized models can be downloaded from HF Hub (some popular ones are available at [Neural Magic](https://huggingface.co/neuralmagic))
and used directly without extra configuration.
Dynamic quantization is also supported via the `quantization` option -- see [here](#quantization-index) for more details.
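For example, a hedged sketch of online FP8 quantization (assumes a GPU with FP8 support; the model name is illustrative):

```python
from vllm import LLM

llm = LLM(model="ibm-granite/granite-3.1-8b-instruct",
          quantization="fp8")
```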
#### Context length and batch size
You can further reduce memory usage by limiting the context length of the model (`max_model_len` option)
and the maximum batch size (`max_num_seqs` option).
```python
llm = LLM(model="adept/fuyu-8b",
max_model_len=2048,
max_num_seqs=2)
```
### Performance optimization and tuning
You can potentially improve the performance of vLLM by finetuning various options.
Please refer to [this guide](#optimization-and-tuning) for more details.


@ -1,8 +1,10 @@
# OpenAI Compatible Server
(openai-compatible-server)=
vLLM provides an HTTP server that implements OpenAI's [Completions](https://platform.openai.com/docs/api-reference/completions) and [Chat](https://platform.openai.com/docs/api-reference/chat) API, and more!
# OpenAI-Compatible Server
You can start the server via the [`vllm serve`](#vllm-serve) command, or through [Docker](deploying_with_docker.md):
vLLM provides an HTTP server that implements OpenAI's [Completions API](https://platform.openai.com/docs/api-reference/completions), [Chat API](https://platform.openai.com/docs/api-reference/chat), and more!
You can start the server via the [`vllm serve`](#vllm-serve) command, or through [Docker](#deployment-docker):
```bash
vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123
```
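Once the server is up, it can be queried with the official OpenAI Python client (a sketch assuming the `openai` package is installed and the server above is running locally on the default port):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```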


@ -45,7 +45,7 @@ You can preview the collected data by running the following command:
tail ~/.config/vllm/usage_stats.json
```
## Opt-out of Usage Stats Collection
## Opting out
You can opt-out of usage stats collection by setting the `VLLM_NO_USAGE_STATS` or `DO_NOT_TRACK` environment variable, or by creating a `~/.config/vllm/do_not_track` file:
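```console
$ # For example, any one of the following (a sketch of the options listed above):
$ export VLLM_NO_USAGE_STATS=1
$ export DO_NOT_TRACK=1
$ mkdir -p ~/.config/vllm && touch ~/.config/vllm/do_not_track
```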