Welcome to vLLM!
================

.. figure:: ./assets/logos/vllm-logo-text-light.png
   :width: 60%
   :align: center
   :alt: vLLM
   :class: no-scaled-link

.. raw:: html

   <p style="text-align:center">
   <strong>Easy, fast, and cheap LLM serving for everyone
   </strong>
   </p>

   <p style="text-align:center">
   <script async defer src="https://buttons.github.io/buttons.js"></script>
   <a class="github-button" href="https://github.com/vllm-project/vllm" data-show-count="true" data-size="large" aria-label="Star">Star</a>
   <a class="github-button" href="https://github.com/vllm-project/vllm/subscription" data-icon="octicon-eye" data-size="large" aria-label="Watch">Watch</a>
   <a class="github-button" href="https://github.com/vllm-project/vllm/fork" data-icon="octicon-repo-forked" data-size="large" aria-label="Fork">Fork</a>
   </p>

vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

* State-of-the-art serving throughput
* Efficient management of attention key and value memory with **PagedAttention**
* Continuous batching of incoming requests
* Fast model execution with CUDA/HIP graph
* Quantization: `GPTQ <https://arxiv.org/abs/2210.17323>`_, `AWQ <https://arxiv.org/abs/2306.00978>`_, `SqueezeLLM <https://arxiv.org/abs/2306.07629>`_, FP8 KV Cache
* Optimized CUDA kernels

vLLM is flexible and easy to use with:

* Seamless integration with popular HuggingFace models (see the sketch below)
* High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
* Tensor parallelism and pipeline parallelism support for distributed inference
* Streaming outputs
* OpenAI-compatible API server
* Support for NVIDIA and AMD GPUs
* (Experimental) Prefix caching support
* (Experimental) Multi-LoRA support
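
As a minimal sketch of what this looks like in practice, the snippet below runs offline batched inference with the Python API. The model name and sampling settings are only illustrative; any supported HuggingFace model works:

.. code-block:: python

   from vllm import LLM, SamplingParams

   # Load any supported HuggingFace model; "facebook/opt-125m" is just a small example.
   llm = LLM(model="facebook/opt-125m")

   # Illustrative sampling settings, not recommendations.
   sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

   # Generate completions for a batch of prompts in one call.
   outputs = llm.generate(["Hello, my name is", "The capital of France is"], sampling_params)
   for output in outputs:
       print(output.prompt, "->", output.outputs[0].text)

See the Quickstart guide for the full walkthrough, including how to launch the OpenAI-compatible server.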

For more information, check out the following:

* `vLLM announcing blog post <https://vllm.ai>`_ (intro to PagedAttention)
* `vLLM paper <https://arxiv.org/abs/2309.06180>`_ (SOSP 2023)
* `How continuous batching enables 23x throughput in LLM inference while reducing p50 latency <https://www.anyscale.com/blog/continuous-batching-llm-inference>`_ by Cade Daniel et al.
* :ref:`vLLM Meetups <meetups>`.

Documentation
-------------

.. toctree::
   :maxdepth: 1
   :caption: Getting Started

   getting_started/installation
   getting_started/amd-installation
   getting_started/openvino-installation
   getting_started/cpu-installation
   getting_started/neuron-installation
   getting_started/tpu-installation
   getting_started/xpu-installation
   getting_started/quickstart
   getting_started/debugging
   getting_started/examples/examples_index

.. toctree::
   :maxdepth: 1
   :caption: Serving

   serving/openai_compatible_server
   serving/deploying_with_docker
   serving/distributed_serving
   serving/metrics
   serving/env_vars
   serving/usage_stats
   serving/integrations
   serving/tensorizer
   serving/faq

.. toctree::
   :maxdepth: 1
   :caption: Models

   models/supported_models
   models/adding_model
   models/enabling_multimodal_inputs
   models/engine_args
   models/lora
   models/vlm
   models/spec_decode
   models/performance

.. toctree::
   :maxdepth: 1
   :caption: Quantization

   quantization/supported_hardware
   quantization/auto_awq
   quantization/fp8
   quantization/fp8_e5m2_kvcache
   quantization/fp8_e4m3_kvcache

.. toctree::
   :maxdepth: 1
   :caption: Automatic Prefix Caching

   automatic_prefix_caching/apc
   automatic_prefix_caching/details

.. toctree::
   :maxdepth: 2
   :caption: Developer Documentation

   dev/sampling_params
   dev/offline_inference/offline_index
   dev/engine/engine_index
   dev/kernel/paged_attention
   dev/input_processing/model_inputs_index
   dev/multimodal/multimodal_index
   dev/dockerfile/dockerfile

.. toctree::
   :maxdepth: 1
   :caption: Community

   community/meetups
   community/sponsors

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`