vllm / tests / quantization
Latest commit 50b8d08dbd by jon-chuang: [Misc/Testing] Use torch.testing.assert_close (#7324), 2024-08-16 04:24:04 +00:00
File                         Last commit                                                        Date
__init__.py                  [CI/Build] Move test_utils.py to tests/utils.py (#4425)            2024-05-13 23:50:09 +09:00
test_bitsandbytes.py         [bitsandbytes]: support read bnb pre-quantized model (#5753)       2024-07-23 23:45:09 +00:00
test_compressed_tensors.py   [Misc] Revert compressed-tensors code reuse (#7521)                2024-08-14 15:07:37 -07:00
test_configs.py              [Kernel][Core] Add AWQ support to the Marlin kernel (#6612)        2024-07-21 19:41:42 -04:00
test_cpu_offload.py          [CI] Move quantization cpu offload tests out of fastcheck (#7574)  2024-08-15 21:16:20 -07:00
test_fp8.py                  [Misc/Testing] Use torch.testing.assert_close (#7324)              2024-08-16 04:24:04 +00:00
test_lm_head.py              [Core] Support loading GGUF model (#5191)                          2024-08-05 17:54:23 -06:00
utils.py                     [hardware][misc] introduce platform abstraction (#6080)            2024-07-02 20:12:22 -07:00
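The latest commit above (#7324) standardizes these tests on `torch.testing.assert_close`. As a minimal sketch of what that API does (the tensors and tolerances here are illustrative, not taken from the test files):

```python
import torch

# torch.testing.assert_close passes silently when two tensors agree
# within dtype-dependent default tolerances, and raises AssertionError
# with a detailed mismatch report otherwise.
a = torch.tensor([1.0, 2.0, 3.0])
b = a + 1e-7  # well within the default float32 tolerances
torch.testing.assert_close(a, b)  # no error raised

# A large difference fails the check.
mismatch = False
try:
    torch.testing.assert_close(a, a + 1.0)
except AssertionError:
    mismatch = True
```

Tolerances can also be set explicitly via the `rtol` and `atol` keyword arguments when the defaults are too strict for a given kernel's numerics.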