vllm/docs/source/quantization
Latest commit: 15859f2357 by Jee Jee Li, "[Misc] Upgrade bitsandbytes to the latest version 0.45.0" (#11201), 2024-12-15 03:03:06 +00:00
| File | Last commit | Date |
| --- | --- | --- |
| auto_awq.rst | [Doc] fix the autoAWQ example (#7937) | 2024-08-28 12:12:32 +00:00 |
| bnb.rst | [Misc] Upgrade bitsandbytes to the latest version 0.45.0 (#11201) | 2024-12-15 03:03:06 +00:00 |
| fp8_e4m3_kvcache.rst | [Core/Bugfix] Add FP8 K/V Scale and dtype conversion for prefix/prefill Triton Kernel (#7208) | 2024-08-12 22:47:41 +00:00 |
| fp8_e5m2_kvcache.rst | Super tiny little typo fix (#10633) | 2024-11-25 13:08:30 +00:00 |
| fp8.rst | [Doc] Installed version of llmcompressor for int8/fp8 quantization (#11103) | 2024-12-11 15:43:24 +00:00 |
| gguf.rst | [Doc] Add documentation for GGUF quantization (#8618) | 2024-09-19 13:15:55 -06:00 |
| int8.rst | [Doc] Installed version of llmcompressor for int8/fp8 quantization (#11103) | 2024-12-11 15:43:24 +00:00 |
| supported_hardware.rst | [Hardware][XPU] AWQ/GPTQ support for xpu backend (#10107) | 2024-11-18 11:18:05 -07:00 |