# Welcome to vLLM
```{figure} ./assets/logos/vllm-logo-text-light.png
:align: center
:alt: vLLM
:class: no-scaled-link
:width: 60%
```
```{raw} html
<p style="text-align:center">
<strong>Easy, fast, and cheap LLM serving for everyone</strong>
</p>
<p style="text-align:center">
<script async defer src="https://buttons.github.io/buttons.js"></script>
<a class="github-button" href="https://github.com/vllm-project/vllm" data-show-count="true" data-size="large" aria-label="Star">Star</a>
<a class="github-button" href="https://github.com/vllm-project/vllm/subscription" data-icon="octicon-eye" data-size="large" aria-label="Watch">Watch</a>
<a class="github-button" href="https://github.com/vllm-project/vllm/fork" data-icon="octicon-repo-forked" data-size="large" aria-label="Fork">Fork</a>
</p>
```
vLLM is a fast and easy-to-use library for LLM inference and serving.
Originally developed in the [Sky Computing Lab](https://sky.cs.berkeley.edu) at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
vLLM is fast with:
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with [**PagedAttention**](https://blog.vllm.ai/2023/06/20/vllm.html)
- Continuous batching of incoming requests
- Fast model execution with CUDA/HIP graph
- Quantization: [GPTQ](https://arxiv.org/abs/2210.17323), [AWQ](https://arxiv.org/abs/2306.00978), INT4, INT8, and FP8 (see the configuration sketch after this list)
- Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
- Speculative decoding
- Chunked prefill
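Most of these performance features are switched on through engine arguments. The snippet below is only a rough sketch, assuming an AWQ-quantized checkpoint is available; the model name is a placeholder and exact argument names and defaults may differ across vLLM versions (see the engine arguments documentation).

```python
from vllm import LLM

# Configuration sketch (not a definitive recipe): load an AWQ-quantized
# checkpoint and enable chunked prefill. The model name is a placeholder,
# and argument names/defaults may vary between vLLM releases.
llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",  # placeholder quantized model
    quantization="awq",               # use the AWQ weight format
    enable_chunked_prefill=True,      # split long prefills into smaller chunks
)
```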
vLLM is flexible and easy to use with:
- Seamless integration with popular HuggingFace models (see the offline inference sketch after this list)
- High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
- Tensor parallelism and pipeline parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server
- Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, Gaudi® accelerators and GPUs, PowerPC CPUs, TPUs, and AWS Trainium and Inferentia accelerators
- Prefix caching support
- Multi-LoRA support
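As a quick illustration of the offline API (the model name is only an example; see the Quickstart for the full walkthrough):

```python
from vllm import LLM, SamplingParams

# Load a generative model from the HuggingFace Hub (example model shown).
llm = LLM(model="facebook/opt-125m")

# Sampling parameters control decoding: temperature, nucleus sampling, length.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# A single call batches all prompts; continuous batching handles scheduling.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

The same models can be exposed over HTTP with the OpenAI-compatible server (`vllm serve <model>`), covered under Inference and Serving.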
For more information, check out the following:
- [vLLM announcing blog post](https://vllm.ai) (intro to PagedAttention)
- [vLLM paper](https://arxiv.org/abs/2309.06180) (SOSP 2023)
- [How continuous batching enables 23x throughput in LLM inference while reducing p50 latency](https://www.anyscale.com/blog/continuous-batching-llm-inference) by Cade Daniel et al.
- [vLLM Meetups](#meetups)
## Documentation
% How to start using vLLM?
```{toctree}
:caption: Getting Started
:maxdepth: 1
getting_started/installation/index
getting_started/quickstart
getting_started/examples/examples_index
getting_started/troubleshooting
getting_started/faq
```
% What does vLLM support?
```{toctree}
:caption: Models
:maxdepth: 1
models/generative_models
models/pooling_models
models/supported_models
models/extensions/index
```
% Additional capabilities
```{toctree}
:caption: Features
:maxdepth: 1
features/quantization/index
features/lora
features/tool_calling
features/structured_outputs
features/automatic_prefix_caching
features/disagg_prefill
features/spec_decode
features/compatibility_matrix
```
% Details about running vLLM
```{toctree}
:caption: Inference and Serving
:maxdepth: 1
serving/offline_inference
serving/openai_compatible_server
serving/multimodal_inputs
serving/distributed_serving
serving/metrics
serving/engine_args
serving/env_vars
serving/usage_stats
serving/integrations/index
```
% Scaling up vLLM for production
```{toctree}
:caption: Deployment
:maxdepth: 1
deployment/docker
deployment/k8s
deployment/nginx
deployment/frameworks/index
deployment/integrations/index
```
% Making the most out of vLLM
```{toctree}
:caption: Performance
:maxdepth: 1
performance/optimization
performance/benchmarks
```
% Explanation of vLLM internals
```{toctree}
:caption: Design Documents
:maxdepth: 2
design/arch_overview
design/huggingface_integration
design/plugin_system
design/kernel/paged_attention
design/mm_processing
design/automatic_prefix_caching
design/multiprocessing
```
% How to contribute to the vLLM project
```{toctree}
:caption: Developer Guide
:maxdepth: 2
contributing/overview
contributing/profiling/profiling_index
contributing/dockerfile/dockerfile
contributing/model/index
contributing/vulnerability_management
```
% Technical API specifications
```{toctree}
:caption: API Reference
:maxdepth: 2
api/offline_inference/index
api/engine/index
api/inference_params
api/multimodal/index
api/model/index
```
% Latest news and acknowledgements
```{toctree}
:caption: Community
:maxdepth: 1
community/meetups
community/sponsors
```
## Indices and tables
- {ref}`genindex`
- {ref}`modindex`