
(quantization-index)=
# Quantization
Quantization trades off model precision for smaller memory footprint, allowing large models to be run on a wider range of devices.
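To make the trade-off concrete, here is a minimal, illustrative sketch of symmetric int8 quantization in plain Python. This is not the implementation used by any of the methods listed below (AWQ, GGUF, FP8, etc. are considerably more sophisticated); it only shows the basic mechanism of mapping floats to a narrower integer type via a scale factor.

```python
def quantize_int8(values):
    """Map floats onto int8 range [-127, 127] using a single scale factor."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # guard against all-zero input
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize_int8(quantized, scale):
    """Recover approximate floats; some precision is lost, memory is saved."""
    return [q * scale for q in quantized]

# Each int8 value occupies 1 byte instead of 4 for fp32: a 4x memory
# saving, at the cost of small rounding errors in the recovered weights.
weights = [0.02, -1.27, 0.63, 0.001]
q, s = quantize_int8(weights)
approx = dequantize_int8(q, s)
```

Real schemes refine this idea with per-channel or per-group scales, calibration data, and hardware-friendly formats, which is what the pages below cover.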
```{toctree}
:caption: Contents
:maxdepth: 1

supported_hardware
auto_awq
bnb
gguf
int8
fp8
quantized_kvcache
```