
Welcome to vLLM!
================

.. figure:: ./assets/logos/vllm-logo-text-light.png
   :width: 60%
   :align: center
   :alt: vLLM
   :class: no-scaled-link

.. raw:: html

   <p style="text-align:center">
   <strong>Easy, fast, and cheap LLM serving for everyone</strong>
   </p>

   <p style="text-align:center">
   <script async defer src="https://buttons.github.io/buttons.js"></script>
   <a class="github-button" href="https://github.com/vllm-project/vllm" data-show-count="true" data-size="large" aria-label="Star">Star</a>
   <a class="github-button" href="https://github.com/vllm-project/vllm/subscription" data-icon="octicon-eye" data-size="large" aria-label="Watch">Watch</a>
   <a class="github-button" href="https://github.com/vllm-project/vllm/fork" data-icon="octicon-repo-forked" data-size="large" aria-label="Fork">Fork</a>
   </p>

vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

* State-of-the-art serving throughput
* Efficient management of attention key and value memory with **PagedAttention**
* Continuous batching of incoming requests
* Fast model execution with CUDA/HIP graphs
* Quantization: `GPTQ <https://arxiv.org/abs/2210.17323>`_, `AWQ <https://arxiv.org/abs/2306.00978>`_, INT4, INT8, and FP8
* Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
* Speculative decoding
* Chunked prefill

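As a quick taste of the offline API, here is a minimal sketch of batched generation with the ``LLM`` class; the model name, prompts, and sampling values are illustrative only, and the Quickstart covers the full workflow:

.. code-block:: python

   from vllm import LLM, SamplingParams

   # Example prompts and sampling settings; adjust to your use case.
   prompts = ["Hello, my name is", "The future of AI is"]
   sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

   # Any HuggingFace model supported by vLLM can be used here.
   llm = LLM(model="facebook/opt-125m")

   # Continuous batching and PagedAttention are applied automatically.
   outputs = llm.generate(prompts, sampling_params)
   for output in outputs:
       print(output.prompt, output.outputs[0].text)
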
vLLM is flexible and easy to use with:

* Seamless integration with popular HuggingFace models
* High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
* Tensor parallelism and pipeline parallelism support for distributed inference
* Streaming outputs
* OpenAI-compatible API server
* Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, Gaudi® accelerators, PowerPC CPUs, TPUs, and AWS Trainium and Inferentia accelerators
* Prefix caching support
* Multi-LoRA support

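For online serving, the OpenAI-compatible server exposes the familiar Completions and Chat APIs, so existing OpenAI clients work unchanged. The snippet below is a small sketch of querying it with the official ``openai`` Python client, assuming a server is already running locally on the default port (for example via ``vllm serve facebook/opt-125m``); the model name, port, and prompt are placeholders:

.. code-block:: python

   # Sketch only: assumes a local vLLM server is already running, e.g.
   #   vllm serve facebook/opt-125m
   # (model name and port are illustrative defaults).
   from openai import OpenAI

   client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
   completion = client.completions.create(
       model="facebook/opt-125m",
       prompt="San Francisco is a",
       max_tokens=32,
   )
   print(completion.choices[0].text)

Streaming responses and the chat endpoint use the same client interface; see the Serving section below for details.
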
For more information, check out the following:

* `vLLM announcing blog post <https://vllm.ai>`_ (intro to PagedAttention)
* `vLLM paper <https://arxiv.org/abs/2309.06180>`_ (SOSP 2023)
* `How continuous batching enables 23x throughput in LLM inference while reducing p50 latency <https://www.anyscale.com/blog/continuous-batching-llm-inference>`_ by Cade Daniel et al.
* :ref:`vLLM Meetups <meetups>`

Documentation
-------------

.. toctree::
   :maxdepth: 1
   :caption: Getting Started

   getting_started/installation
   getting_started/amd-installation
   getting_started/openvino-installation
   getting_started/cpu-installation
   getting_started/gaudi-installation
   getting_started/neuron-installation
   getting_started/tpu-installation
   getting_started/xpu-installation
   getting_started/quickstart
   getting_started/debugging
   getting_started/examples/examples_index

.. toctree::
   :maxdepth: 1
   :caption: Serving

   serving/openai_compatible_server
   serving/deploying_with_docker
   serving/deploying_with_k8s
   serving/deploying_with_nginx
   serving/distributed_serving
   serving/metrics
   serving/env_vars
   serving/usage_stats
   serving/integrations
   serving/tensorizer
   serving/compatibility_matrix
   serving/faq

.. toctree::
   :maxdepth: 1
   :caption: Models

   models/supported_models
   models/adding_model
   models/enabling_multimodal_inputs
   models/engine_args
   models/lora
   models/vlm
   models/spec_decode
   models/performance

.. toctree::
   :maxdepth: 1
   :caption: Quantization

   quantization/supported_hardware
   quantization/auto_awq
   quantization/bnb
   quantization/gguf
   quantization/int8
   quantization/fp8
   quantization/fp8_e5m2_kvcache
   quantization/fp8_e4m3_kvcache

.. toctree::
   :maxdepth: 1
   :caption: Automatic Prefix Caching

   automatic_prefix_caching/apc
   automatic_prefix_caching/details

.. toctree::
   :maxdepth: 1
   :caption: Performance benchmarks

   performance_benchmark/benchmarks

.. toctree::
   :maxdepth: 2
   :caption: Developer Documentation

   dev/sampling_params
   dev/pooling_params
   dev/offline_inference/offline_index
   dev/engine/engine_index
   dev/kernel/paged_attention
   dev/input_processing/model_inputs_index
   dev/multimodal/multimodal_index
   dev/dockerfile/dockerfile
   dev/profiling/profiling_index

.. toctree::
   :maxdepth: 1
   :caption: Community

   community/meetups
   community/sponsors

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`