(quantization-index)=

# Quantization

Quantization trades model precision for a smaller memory footprint, allowing large models to run on a wider range of devices.
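As a minimal sketch, loading a quantized model with vLLM typically only requires pointing at a quantized checkpoint; the `quantization` argument can also be passed explicitly. The model name below is illustrative, not a recommendation:

```python
from vllm import LLM

# Illustrative AWQ-quantized checkpoint; vLLM can usually infer the
# quantization method from the model's config, but it may be set explicitly.
llm = LLM(model="TheBloke/Llama-2-7B-AWQ", quantization="awq")

outputs = llm.generate("What is quantization?")
print(outputs[0].outputs[0].text)
```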

:::{toctree}
:caption: Contents
:maxdepth: 1

supported_hardware
auto_awq
bnb
gguf
gptqmodel
int4
int8
fp8
quantized_kvcache
:::