vllm/csrc/quantization
Latest commit: 8c32b08a86 by Jinzhen Lin, "[Kernel] Fix awq error when n is not divisable by 128 (#13227)", 2025-02-13 20:07:05 -08:00
Name               | Last commit                                                                         | Date
aqlm               | [Kernel] fix types used in aqlm and ggml kernels to support dynamo (#7596)         | 2024-08-16 14:00:11 -07:00
awq                | [Kernel] Fix awq error when n is not divisable by 128 (#13227)                      | 2025-02-13 20:07:05 -08:00
compressed_tensors | [MISC] Replace c10::optional with std::optional (#11730)                            | 2025-01-05 10:20:34 +09:00
cutlass_w8a8       | [Kernel][Bugfix] Refactor and Fix CUTLASS 2:4 Sparse Kernels (#13198)               | 2025-02-14 00:01:14 +00:00
fp4                | [NVIDIA] Support nvfp4 quantization (#12784)                                        | 2025-02-12 19:51:51 -08:00
fp8                | [torch.compile] Dynamic fp8 + rms_norm fusion (#10906)                              | 2024-12-13 03:19:23 +00:00
fused_kernels      | [torch.compile] Dynamic fp8 + rms_norm fusion (#10906)                              | 2024-12-13 03:19:23 +00:00
gguf               | [AMD] Add support for GGUF quantization on ROCm (#10254)                            | 2024-11-22 21:14:49 -08:00
gptq               | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047)  | 2024-06-09 16:23:30 -04:00
gptq_marlin        | Update pre-commit hooks (#12475)                                                    | 2025-01-27 17:23:08 -07:00
machete            | [CI/Build] Auto-fix Markdown files (#12941)                                         | 2025-02-08 04:25:15 -08:00
marlin             | Update pre-commit hooks (#12475)                                                    | 2025-01-27 17:23:08 -07:00
vectorization.cuh  | [torch.compile] Dynamic fp8 + rms_norm fusion (#10906)                              | 2024-12-13 03:19:23 +00:00
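The directories above hold the CUDA/C++ sources for vLLM's quantization backends (AWQ, GPTQ, Marlin, FP8, GGUF, and others). As a minimal sketch of how one of these backends gets exercised from Python, the snippet below loads an AWQ-quantized checkpoint through vLLM's LLM entry point; the checkpoint name is an assumed example, not part of this repository.

```python
from vllm import LLM, SamplingParams

# Loading an AWQ-quantized checkpoint routes quantized matrix multiplies
# through the kernels under csrc/quantization/awq. The model name below is
# an assumed example; substitute any AWQ-quantized checkpoint.
llm = LLM(model="TheBloke/Llama-2-7B-AWQ", quantization="awq")

params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Quantization reduces GPU memory usage by"], params)
print(outputs[0].outputs[0].text)
```

Passing quantization="awq" selects the AWQ path explicitly; when the argument is omitted, vLLM generally infers the method from the checkpoint's quantization config.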