diff --git a/README.md b/README.md
index 4a09e3af..d9cc6d26 100644
--- a/README.md
+++ b/README.md
@@ -35,6 +35,7 @@ vLLM is fast with:
 - State-of-the-art serving throughput
 - Efficient management of attention key and value memory with **PagedAttention**
 - Continuous batching of incoming requests
+- Fast model execution with CUDA/HIP graph
 - Quantization: [GPTQ](https://arxiv.org/abs/2210.17323), [AWQ](https://arxiv.org/abs/2306.00978), [SqueezeLLM](https://arxiv.org/abs/2306.07629)
 - Optimized CUDA kernels
 
@@ -45,7 +46,7 @@ vLLM is flexible and easy to use with:
 - Tensor parallelism support for distributed inference
 - Streaming outputs
 - OpenAI-compatible API server
-- Support NVIDIA GPUs and AMD GPUs.
+- Support NVIDIA GPUs and AMD GPUs
 
 vLLM seamlessly supports many Hugging Face models, including the following architectures:
 
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 46620261..816f4f7e 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -30,6 +30,7 @@ vLLM is fast with:
 * State-of-the-art serving throughput
 * Efficient management of attention key and value memory with **PagedAttention**
 * Continuous batching of incoming requests
+* Fast model execution with CUDA/HIP graph
 * Quantization: `GPTQ <https://arxiv.org/abs/2210.17323>`_, `AWQ <https://arxiv.org/abs/2306.00978>`_, `SqueezeLLM <https://arxiv.org/abs/2306.07629>`_
 * Optimized CUDA kernels
 
@@ -40,7 +41,7 @@ vLLM is flexible and easy to use with:
 * Tensor parallelism support for distributed inference
 * Streaming outputs
 * OpenAI-compatible API server
-* Support NVIDIA GPUs and AMD GPUs.
+* Support NVIDIA GPUs and AMD GPUs
 
 For more information, check out the following:
 