vllm/csrc/attention

Latest commit: e4a28e5316 by Douglas Lehr, 2024-03-10 15:27:45 -07:00
[ROCM] Fix blockReduceSum to use correct warp counts for ROCm and CUDA (#3262)
attention_dtypes.h    | Support FP8-E5M2 KV Cache (#2279)                                               | 2024-01-28 16:43:54 -08:00
attention_generic.cuh | Change the name to vLLM (#150)                                                  | 2023-06-17 03:07:40 -07:00
attention_kernels.cu  | [ROCM] Fix blockReduceSum to use correct warp counts for ROCm and CUDA (#3262) | 2024-03-10 15:27:45 -07:00
attention_utils.cuh   | Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                              | 2023-12-07 23:16:52 -08:00
dtype_bfloat16.cuh    | Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                              | 2023-12-07 23:16:52 -08:00
dtype_float16.cuh     | Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                              | 2023-12-07 23:16:52 -08:00
dtype_float32.cuh     | [BugFix] Fix NaN errors in paged attention kernel (#936)                        | 2023-09-04 09:20:06 +09:00
dtype_fp8_e5m2.cuh    | Support FP8-E5M2 KV Cache (#2279)                                               | 2024-01-28 16:43:54 -08:00