(quantization-supported-hardware)=
# Supported Hardware
The table below shows the compatibility of various quantization implementations with different hardware platforms in vLLM:
:::{list-table}
:header-rows: 1
:widths: 20 8 8 8 8 8 8 8 8 8 8
- * Implementation
* Volta
* Turing
* Ampere
* Ada
* Hopper
* AMD GPU
* Intel GPU
* x86 CPU
* AWS Inferentia
* Google TPU
- * AWQ
* ❌
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ❌
* ✅︎
* ✅︎
* ❌
* ❌
- * GPTQ
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ❌
* ✅︎
* ✅︎
* ❌
* ❌
- * Marlin (GPTQ/AWQ/FP8)
* ❌
* ❌
* ✅︎
* ✅︎
* ✅︎
* ❌
* ❌
* ❌
* ❌
* ❌
- * INT8 (W8A8)
* ❌
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ❌
* ❌
* ✅︎
* ❌
* ❌
- * FP8 (W8A8)
* ❌
* ❌
* ❌
* ✅︎
* ✅︎
* ✅︎
* ❌
* ❌
* ❌
* ❌
- * AQLM
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ❌
* ❌
* ❌
* ❌
* ❌
- * bitsandbytes
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ❌
* ❌
* ❌
* ❌
* ❌
- * DeepSpeedFP
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ❌
* ❌
* ❌
* ❌
* ❌
- * GGUF
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ✅︎
* ❌
* ❌
* ❌
* ❌
:::
- Volta refers to SM 7.0, Turing to SM 7.5, Ampere to SM 8.0/8.6, Ada to SM 8.9, and Hopper to SM 9.0.
- ✅︎ indicates that the quantization method is supported on the specified hardware.
- ❌ indicates that the quantization method is not supported on the specified hardware.
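The compatibility table above can also be checked programmatically. The sketch below is a hypothetical helper (not part of vLLM's API) that encodes the table as a lookup, using illustrative method and platform names:

```python
# Hypothetical lookup encoding the compatibility table above.
# Method keys and platform names are illustrative, not vLLM identifiers.
SUPPORTED_PLATFORMS = {
    "awq": {"turing", "ampere", "ada", "hopper", "intel_gpu", "x86_cpu"},
    "gptq": {"volta", "turing", "ampere", "ada", "hopper", "intel_gpu", "x86_cpu"},
    "marlin": {"ampere", "ada", "hopper"},
    "int8_w8a8": {"turing", "ampere", "ada", "hopper", "x86_cpu"},
    "fp8_w8a8": {"ada", "hopper", "amd_gpu"},
    "aqlm": {"volta", "turing", "ampere", "ada", "hopper"},
    "bitsandbytes": {"volta", "turing", "ampere", "ada", "hopper"},
    "deepspeedfp": {"volta", "turing", "ampere", "ada", "hopper"},
    "gguf": {"volta", "turing", "ampere", "ada", "hopper", "amd_gpu"},
}

def is_supported(method: str, platform: str) -> bool:
    """Return True if the table marks `method` as supported on `platform`."""
    return platform in SUPPORTED_PLATFORMS.get(method, set())

# Example: FP8 (W8A8) requires Ada or newer on NVIDIA, or an AMD GPU.
print(is_supported("fp8_w8a8", "hopper"))  # True
print(is_supported("fp8_w8a8", "turing"))  # False
```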
:::{note}
This compatibility chart is subject to change as vLLM continues to evolve and expand its support for different hardware platforms and quantization methods.
For the most up-to-date information on hardware support and quantization methods, please refer to <gh-dir:vllm/model_executor/layers/quantization> or consult the vLLM development team.
:::