vllm/csrc
Latest commit: 130d5fd8c7 "Fix a bug in attention kernel (#68)" by Woosuk Kwon, 2023-05-04 02:56:09 -07:00
Name                      Last commit message                                                    Last commit date
attention/                Fix a bug in attention kernel (#68)                                    2023-05-04 02:56:09 -07:00
activation_kernels.cu     Support bfloat16 data type (#54)                                       2023-05-03 14:09:44 -07:00
activation.cpp            Optimize data movement (#20)                                           2023-04-02 00:30:17 -07:00
attention.cpp             Support various block sizes & Change default block size to 16 (#38)   2023-04-15 09:03:24 -07:00
cache_kernels.cu          Support bfloat16 data type (#54)                                       2023-05-03 14:09:44 -07:00
cache.cpp                 Memcpy kernel for flash attention (#29)                                2023-04-10 18:22:49 -07:00
layernorm_kernels.cu      Support bfloat16 data type (#54)                                       2023-05-03 14:09:44 -07:00
layernorm.cpp             Add custom kernel for RMS normalization (#16)                          2023-04-01 00:51:22 +08:00
pos_encoding_kernels.cu   Support bfloat16 data type (#54)                                       2023-05-03 14:09:44 -07:00
pos_encoding.cpp          Add support for GPT-NeoX (Pythia) (#50)                                2023-04-28 00:32:10 -07:00
reduction_utils.cuh       Refactor attention kernels (#53)                                       2023-05-03 13:40:13 -07:00