100 Commits

Author SHA1 Message Date
Woosuk Kwon
c84e924287
[Minor] Fix a dtype bug (#79) 2023-05-06 02:12:12 -07:00
Woosuk Kwon
c9d5b6d4a8
Replace FlashAttention with xformers (#70) 2023-05-05 02:01:08 -07:00
Woosuk Kwon
189ae23133
Use dtype from model config & Add Dolly V2 (#63) 2023-05-04 03:05:37 -07:00
Woosuk Kwon
e548c1488a
Add support for GPT-2 (#60) 2023-05-04 02:59:56 -07:00
Woosuk Kwon
e070829ae8
Support bfloat16 data type (#54) 2023-05-03 14:09:44 -07:00
Zhuohan Li
27f1410d06
New weight loader without np copy (#52) 2023-05-03 15:32:04 +08:00
Zhuohan Li
4858f3bb45
Add an option to launch cacheflow without ray (#51) 2023-04-30 15:42:17 +08:00
Woosuk Kwon
a96d63c21d
Add support for GPT-NeoX (Pythia) (#50) 2023-04-28 00:32:10 -07:00
Woosuk Kwon
0f4b32199e
Support various block sizes & Change default block size to 16 (#38) 2023-04-15 09:03:24 -07:00
Woosuk Kwon
84eee24e20
Collect system stats in scheduler & Add scripts for experiments (#30) 2023-04-12 15:03:49 -07:00
Woosuk Kwon
b9926f7f66
Support block size 32 (#35) 2023-04-09 23:07:18 -07:00
Woosuk Kwon
ee88a7e5f3
Add an option to use dummy model weights (#33) 2023-04-08 23:36:12 -07:00
Woosuk Kwon
0f40557af6
Implement block copy kernel to optimize beam search (#32) 2023-04-07 17:45:07 -07:00
Zhuohan Li
a490aafa36
Fix potential bugs in FastAPI frontend and add comments (#28) 2023-04-06 13:44:24 +08:00
Woosuk Kwon
12659a0bd7
Add CUDA graph-based all reduce launcher (#26) 2023-04-05 11:16:57 -07:00
Woosuk Kwon
897cb2ae28
Optimize data movement (#20) 2023-04-02 00:30:17 -07:00
Zhuohan Li
1f01a18d39
Merge QKV into one linear layer (#15) 2023-04-02 00:23:29 -07:00
Woosuk Kwon
a90c97d727
Use FP32 for log probabilities (#19) 2023-03-31 23:33:43 -07:00
Woosuk Kwon
09e9245478
Add custom kernel for RMS normalization (#16) 2023-04-01 00:51:22 +08:00
Zhuohan Li
c45f3c3ab6
Optimize tensor parallel execution speed (#17) 2023-04-01 00:51:08 +08:00
Woosuk Kwon
7a7929abe8
Implement preemption via recomputation & Refactor scheduling logic (#12) 2023-03-30 14:51:46 -07:00
Woosuk Kwon
88c0268a18
Implement custom kernel for LLaMA rotary embedding (#14) 2023-03-30 11:04:21 -07:00
Woosuk Kwon
80a2f812f1
Implement LLaMA (#9) 2023-03-30 12:25:32 +08:00
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Woosuk Kwon
64e0e38314
Add cache watermark to avoid frequent cache eviction (#11) 2023-03-29 16:38:48 -07:00
Zhuohan Li
721fa3df15
FastAPI-based working frontend (#10) 2023-03-29 14:48:56 +08:00
Woosuk Kwon
d359cda5fa
Minor 2023-03-26 08:00:39 +00:00
Zhuohan Li
2f49f15585
Support tensor parallel (#2) 2023-03-21 13:45:42 -07:00
Woosuk Kwon
cfae35b861
Add miscellaneous updates (#8) 2023-03-13 13:48:38 -07:00
Woosuk Kwon
e9d3f2ff77
Add memory analyzer & automatically configure KV cache size (#6) 2023-03-11 23:23:14 -08:00
Woosuk Kwon
1a7eb7da61
Support beam search & parallel generation (#7) 2023-03-10 09:58:21 -08:00
Woosuk Kwon
04e5acc08e
Fix a bug in 1D input shape (#5) 2023-03-06 10:05:27 -08:00
Woosuk Kwon
3e9f991d6a
Use FlashAttention for multi_query_kv_attention (#4) 2023-03-01 21:13:08 -08:00
Woosuk Kwon
0deacbce6e
Implement single_query_cached_kv_attention kernel (#3) 2023-03-01 15:02:19 -08:00
Woosuk Kwon
cbf8779afa
Fix a bug in tying OPT embeddings (#1) 2023-02-24 16:29:36 -08:00
Woosuk Kwon
6aef2278f4
[Minor] Fix printing format 2023-02-24 11:56:06 +00:00
Woosuk Kwon
1132fae0ca
Add Frontend 2023-02-24 11:46:43 +00:00
Woosuk Kwon
46ce1356f7
Add max_num_steps to SamplingParams 2023-02-24 11:44:40 +00:00
Woosuk Kwon
b39f149a08
Add is_finished 2023-02-24 11:44:21 +00:00
Woosuk Kwon
ef6098ec51
Merge pre_step and step 2023-02-24 10:36:08 +00:00
Woosuk Kwon
53f70e7334
Reduce the number of states in scheduler 2023-02-24 10:22:39 +00:00
Woosuk Kwon
762fd1c3fa
Refactor and annotate types for attention 2023-02-24 08:58:46 +00:00
Woosuk Kwon
7f22f90e8c
Remove xformers 2023-02-24 08:36:16 +00:00
Woosuk Kwon
afdbe5d373
[WIP] Add server script 2023-02-24 01:33:37 +00:00
Woosuk Kwon
932844f1cd
Fix attention 2023-02-23 23:02:25 +00:00
Woosuk Kwon
ba84b8728a
Fix attention 2023-02-23 22:29:46 +00:00
Woosuk Kwon
87e0bcd426
Fix attention 2023-02-23 21:32:02 +00:00
Woosuk Kwon
1ce1333573
Set default dtype to half 2023-02-23 21:31:39 +00:00
Woosuk Kwon
de0fabbc5c
Fix sampler 2023-02-23 20:30:12 +00:00
Woosuk Kwon
fdd0f2f472
Minor 2023-02-23 20:23:47 +00:00
Woosuk Kwon
7f985166f7
Consider empty tensor 2023-02-23 20:20:33 +00:00