(quantization-index)=
# Quantization
Quantization trades off model precision for a smaller memory footprint, allowing large models to run on a wider range of devices.
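A back-of-envelope calculation illustrates the tradeoff. The sketch below is illustrative only: the 7B parameter count and the precision names are assumptions, not tied to any specific model or quantization backend, and it counts weight storage only (activations and KV cache are ignored).

```python
def weight_memory_gib(num_params: int, bits_per_param: float) -> float:
    """Approximate weight storage in GiB at a given precision."""
    return num_params * bits_per_param / 8 / 1024**3

# A hypothetical 7B-parameter model at common precisions.
params = 7_000_000_000
for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: {weight_memory_gib(params, bits):.1f} GiB")
```

Halving the bit width halves the weight footprint, which is why an 8- or 4-bit quantized model can fit on a GPU that cannot hold the fp16 original.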
:::{toctree}
:caption: Contents
:maxdepth: 1

supported_hardware
auto_awq
bnb
gguf
int8
fp8
quantized_kvcache
:::