(quantization-index)=

# Quantization

Quantization trades off model precision for smaller memory footprint, allowing large models to be run on a wider range of devices.
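To make the trade-off concrete, here is a minimal, self-contained sketch (not vLLM's implementation) of symmetric per-tensor INT8 quantization: casting float32 values to int8 cuts memory 4x while introducing a bounded rounding error proportional to the scale.

```python
import numpy as np

# Toy illustration of the precision/memory trade-off, not vLLM's actual
# quantization code: symmetric per-tensor INT8 quantization.
rng = np.random.default_rng(0)
weights = rng.normal(size=4096).astype(np.float32)

# Map the range [-max|w|, +max|w|] onto the int8 range [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale

memory_ratio = weights.nbytes / q.nbytes        # float32 -> int8 is 4x smaller
max_error = np.abs(weights - dequant).max()     # rounding error <= scale / 2
```

The pages below cover the production-quality schemes vLLM actually supports (AWQ, GGUF, INT8, FP8, and others), which use more sophisticated calibration than this per-tensor sketch.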

```{toctree}
:caption: Contents
:maxdepth: 1

supported_hardware
auto_awq
bnb
gguf
int8
fp8
quantized_kvcache
```