Welcome to vLLM!
================

.. figure:: ./assets/logos/vllm-logo-text-light.png
  :width: 60%
  :align: center
  :alt: vLLM
  :class: no-scaled-link

.. raw:: html

  <p style="text-align:center">
  <strong>Easy, fast, and cheap LLM serving for everyone</strong>
  </p>

  <p style="text-align:center">
  <script async defer src="https://buttons.github.io/buttons.js"></script>
  <a class="github-button" href="https://github.com/vllm-project/vllm" data-show-count="true" data-size="large" aria-label="Star">Star</a>
  <a class="github-button" href="https://github.com/vllm-project/vllm/subscription" data-icon="octicon-eye" data-size="large" aria-label="Watch">Watch</a>
  <a class="github-button" href="https://github.com/vllm-project/vllm/fork" data-icon="octicon-repo-forked" data-size="large" aria-label="Fork">Fork</a>
  </p>

vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

* State-of-the-art serving throughput
* Efficient management of attention key and value memory with **PagedAttention**
* Continuous batching of incoming requests
* Optimized CUDA kernels

vLLM is flexible and easy to use with:

* Seamless integration with popular HuggingFace models
* High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
* Tensor parallelism support for distributed inference
* Streaming outputs
* OpenAI-compatible API server (see the query sketch below)
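
As a quick taste of the API, here is a minimal sketch of offline batched inference. The model name is only an example; any supported HuggingFace model can be passed, and ``n=2`` requests two sampled completions per prompt (parallel sampling):

.. code-block:: python

  from vllm import LLM, SamplingParams

  # Two sampled completions per prompt (parallel sampling).
  sampling_params = SamplingParams(n=2, temperature=0.8, top_p=0.95)

  # The model is downloaded from the HuggingFace Hub on first use.
  llm = LLM(model="facebook/opt-125m")

  outputs = llm.generate(["Hello, my name is", "The future of AI is"], sampling_params)
  for output in outputs:
      print(output.prompt)
      for completion in output.outputs:
          print("  " + completion.text)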

For more information, check out the following:

* `vLLM announcing blog post <https://vllm.ai>`_ (intro to PagedAttention)
* `How continuous batching enables 23x throughput in LLM inference while reducing p50 latency <https://www.anyscale.com/blog/continuous-batching-llm-inference>`_ by Cade Daniel et al.
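
To give a feel for the OpenAI-compatible server, here is a minimal sketch of a completion request. It assumes a server started with ``python -m vllm.entrypoints.openai.api_server --model facebook/opt-125m`` and running locally on the default port; the host, port, and model name here are illustrative:

.. code-block:: python

  import requests

  # Query the vLLM server through the OpenAI completions API.
  # Assumes a local server on the default port (8000).
  response = requests.post(
      "http://localhost:8000/v1/completions",
      json={
          "model": "facebook/opt-125m",
          "prompt": "San Francisco is a",
          "max_tokens": 16,
      },
  )
  print(response.json()["choices"][0]["text"])

Because the interface mirrors the OpenAI API, existing OpenAI client code can be pointed at the server by changing only the base URL.
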
Documentation
-------------

.. toctree::
  :maxdepth: 1
  :caption: Getting Started

  getting_started/installation
  getting_started/quickstart

.. toctree::
  :maxdepth: 1
  :caption: Serving

  serving/distributed_serving

.. toctree::
  :maxdepth: 1
  :caption: Models

  models/supported_models
  models/adding_model