133 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Woosuk Kwon | 89988ec8c2 | Add Apache-2.0 license (#102) | 2023-05-14 18:05:19 -07:00 |
| Woosuk Kwon | 6208d622ca | Minor code cleaning for SamplingParams (#99) | 2023-05-12 18:07:09 -07:00 |
| Woosuk Kwon | 42f1042e1c | Enhance SamplingParams (#96) | 2023-05-11 15:45:30 -07:00 |
| Woosuk Kwon | 55f8b0a5de | Implement presence and frequency penalties (#95) | 2023-05-10 23:39:12 -07:00 |
| Woosuk Kwon | 9f88db35da | Support top-k sampling (#94) | 2023-05-10 12:51:36 -07:00 |
| Woosuk Kwon | ae356774ab | Avoid sorting waiting queue & Minor code cleaning (#93) | 2023-05-10 01:57:07 -07:00 |
| Woosuk Kwon | e331957784 | Log system stats (#90) | 2023-05-10 01:06:53 -07:00 |
| Woosuk Kwon | 8d66a7b6d7 | Rename variables and methods (#91) | 2023-05-10 00:58:31 -07:00 |
| Woosuk Kwon | ce26e57fd3 | Update sample prompts in simple_server.py (#89) | 2023-05-09 16:47:39 -07:00 |
| Woosuk Kwon | 85eb631839 | Use slow tokenizer for LLaMA (#84) | 2023-05-09 16:03:44 -07:00 |
| Woosuk Kwon | add055e151 | Enhance model loader (#83) | 2023-05-09 15:46:42 -07:00 |
| Woosuk Kwon | 7c041ab578 | Refactor system architecture (#82) | 2023-05-09 15:30:12 -07:00 |
| Woosuk Kwon | 8917782af6 | Add a system logger (#85) | 2023-05-08 23:03:35 -07:00 |
| Woosuk Kwon | 7addca5935 | Specify python package dependencies in requirements.txt (#78) | 2023-05-07 16:30:43 -07:00 |
| Woosuk Kwon | c84e924287 | [Minor] Fix a dtype bug (#79) | 2023-05-06 02:12:12 -07:00 |
| Woosuk Kwon | c9d5b6d4a8 | Replace FlashAttention with xformers (#70) | 2023-05-05 02:01:08 -07:00 |
| Woosuk Kwon | 189ae23133 | Use dtype from model config & Add Dolly V2 (#63) | 2023-05-04 03:05:37 -07:00 |
| Woosuk Kwon | e548c1488a | Add support for GPT-2 (#60) | 2023-05-04 02:59:56 -07:00 |
| Woosuk Kwon | 130d5fd8c7 | Fix a bug in attention kernel (#68) | 2023-05-04 02:56:09 -07:00 |
| Woosuk Kwon | e070829ae8 | Support bfloat16 data type (#54) | 2023-05-03 14:09:44 -07:00 |
| Woosuk Kwon | 436e523bf1 | Refactor attention kernels (#53) | 2023-05-03 13:40:13 -07:00 |
| Zhuohan Li | 27f1410d06 | New weight loader without np copy (#52) | 2023-05-03 15:32:04 +08:00 |
| Zhuohan Li | 4858f3bb45 | Add an option to launch cacheflow without ray (#51) | 2023-04-30 15:42:17 +08:00 |
| Woosuk Kwon | a96d63c21d | Add support for GPT-NeoX (Pythia) (#50) | 2023-04-28 00:32:10 -07:00 |
| Woosuk Kwon | aa50b17ca7 | Change plotting script | 2023-04-17 04:49:14 +00:00 |
| Woosuk Kwon | 0f4b32199e | Support various block sizes & Change default block size to 16 (#38) | 2023-04-15 09:03:24 -07:00 |
| Woosuk Kwon | 84eee24e20 | Collect system stats in scheduler & Add scripts for experiments (#30) | 2023-04-12 15:03:49 -07:00 |
| Siyuan (Ryans) Zhuang | e3cec88aa5 | Memcpy kernel for flash attention (#29) (body: optimize; add benchmark; add assert; add test) | 2023-04-10 18:22:49 -07:00 |
| Woosuk Kwon | b9926f7f66 | Support block size 32 (#35) | 2023-04-09 23:07:18 -07:00 |
| Woosuk Kwon | ee88a7e5f3 | Add an option to use dummy model weights (#33) | 2023-04-08 23:36:12 -07:00 |
| Woosuk Kwon | c267b1a02c | Add query stride to multi_query_cached_kv_attention & Add kernel benchmark script (#27) | 2023-04-08 13:36:09 -07:00 |
| Woosuk Kwon | 0f40557af6 | Implement block copy kernel to optimize beam search (#32) | 2023-04-07 17:45:07 -07:00 |
| Zhuohan Li | a490aafa36 | Fix potential bugs in FastAPI frontend and add comments (#28) | 2023-04-06 13:44:24 +08:00 |
| Woosuk Kwon | 12659a0bd7 | Add CUDA graph-based all reduce launcher (#26) | 2023-04-05 11:16:57 -07:00 |
| Siyuan (Ryans) Zhuang | 21b3671bbc | Basic attention kernel that supports cached KV + (multi-)prompts (#24) | 2023-04-04 20:34:46 -07:00 |
| Woosuk Kwon | 897cb2ae28 | Optimize data movement (#20) | 2023-04-02 00:30:17 -07:00 |
| Zhuohan Li | 1f01a18d39 | Merge QKV into one linear layer (#15) | 2023-04-02 00:23:29 -07:00 |
| Woosuk Kwon | 2c5cd0defe | Add ninja to dependency (#21) | 2023-04-01 19:00:20 -07:00 |
| Woosuk Kwon | a90c97d727 | Use FP32 for log probabilities (#19) | 2023-03-31 23:33:43 -07:00 |
| Zhuohan Li | e3f00d191e | Modify README to include info on loading LLaMA (#18) | 2023-04-01 01:07:57 +08:00 |
| Woosuk Kwon | 09e9245478 | Add custom kernel for RMS normalization (#16) | 2023-04-01 00:51:22 +08:00 |
| Zhuohan Li | c45f3c3ab6 | Optimize tensor parallel execution speed (#17) | 2023-04-01 00:51:08 +08:00 |
| Woosuk Kwon | 7a7929abe8 | Implement preemption via recomputation & Refactor scheduling logic (#12) | 2023-03-30 14:51:46 -07:00 |
| Woosuk Kwon | 88c0268a18 | Implement custom kernel for LLaMA rotary embedding (#14) | 2023-03-30 11:04:21 -07:00 |
| Woosuk Kwon | 80a2f812f1 | Implement LLaMA (#9) (co-authored-by: Zhuohan Li <zhuohan123@gmail.com>) | 2023-03-30 12:25:32 +08:00 |
| Woosuk Kwon | a1b3de86cd | Refactor the test code for attention kernels (#13) | 2023-03-29 18:59:27 -07:00 |
| Woosuk Kwon | 64e0e38314 | Add cache watermark to avoid frequent cache eviction (#11) | 2023-03-29 16:38:48 -07:00 |
| Zhuohan Li | 721fa3df15 | FastAPI-based working frontend (#10) | 2023-03-29 14:48:56 +08:00 |
| Woosuk Kwon | d359cda5fa | Minor | 2023-03-26 08:00:39 +00:00 |
| Zhuohan Li | 2f49f15585 | Support tensor parallel (#2) | 2023-03-21 13:45:42 -07:00 |
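
Commits #94 and #95 above add top-k sampling and presence/frequency penalties to the sampling path (exposed through SamplingParams, per #96 and #99). As a rough, hypothetical sketch of these standard decoding controls only, and not the repository's actual implementation, the two steps might look like:

```python
# Hypothetical sketch of the decoding controls named in commits #94 and #95
# (top-k sampling, presence and frequency penalties). Illustration only;
# the function names and signatures here are NOT from this repository.
import torch

def apply_penalties(logits: torch.Tensor,
                    output_token_ids: list[int],
                    presence_penalty: float,
                    frequency_penalty: float) -> torch.Tensor:
    """Penalize tokens that already appear in the generated output.

    `logits` is the 1-D vector of next-token logits for one sequence.
    """
    # Count how often each vocabulary token has been emitted so far.
    counts = torch.zeros_like(logits)
    for token_id in output_token_ids:
        counts[token_id] += 1
    # The frequency penalty grows with the repeat count; the presence
    # penalty is a flat offset applied once a token has appeared at all.
    logits = logits - frequency_penalty * counts
    logits = logits - presence_penalty * (counts > 0).to(logits.dtype)
    return logits

def sample_top_k(logits: torch.Tensor, k: int) -> int:
    """Sample the next token from the k highest-scoring tokens only."""
    topk_logits, topk_ids = torch.topk(logits, k)
    probs = torch.softmax(topk_logits, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return int(topk_ids[choice])
```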