84 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Zhuohan Li | 1f01a18d39 | Merge QKV into one linear layer (#15) | 2023-04-02 00:23:29 -07:00 |
| Woosuk Kwon | a90c97d727 | Use FP32 for log probabilities (#19) | 2023-03-31 23:33:43 -07:00 |
| Woosuk Kwon | 09e9245478 | Add custom kernel for RMS normalization (#16) | 2023-04-01 00:51:22 +08:00 |
| Zhuohan Li | c45f3c3ab6 | Optimize tensor parallel execution speed (#17) | 2023-04-01 00:51:08 +08:00 |
| Woosuk Kwon | 7a7929abe8 | Implement preemption via recomputation & Refactor scheduling logic (#12) | 2023-03-30 14:51:46 -07:00 |
| Woosuk Kwon | 88c0268a18 | Implement custom kernel for LLaMA rotary embedding (#14) | 2023-03-30 11:04:21 -07:00 |
| Woosuk Kwon | 80a2f812f1 | Implement LLaMA (#9) (Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>) | 2023-03-30 12:25:32 +08:00 |
| Woosuk Kwon | 64e0e38314 | Add cache watermark to avoid frequent cache eviction (#11) | 2023-03-29 16:38:48 -07:00 |
| Zhuohan Li | 721fa3df15 | FastAPI-based working frontend (#10) | 2023-03-29 14:48:56 +08:00 |
| Woosuk Kwon | d359cda5fa | Minor | 2023-03-26 08:00:39 +00:00 |
| Zhuohan Li | 2f49f15585 | Support tensor parallel (#2) | 2023-03-21 13:45:42 -07:00 |
| Woosuk Kwon | cfae35b861 | Add miscellaneous updates (#8) | 2023-03-13 13:48:38 -07:00 |
| Woosuk Kwon | e9d3f2ff77 | Add memory analyzer & Automatically configure KV cache size (#6) | 2023-03-11 23:23:14 -08:00 |
| Woosuk Kwon | 1a7eb7da61 | Support beam search & parallel generation (#7) | 2023-03-10 09:58:21 -08:00 |
| Woosuk Kwon | 04e5acc08e | Fix a bug in 1D input shape (#5) | 2023-03-06 10:05:27 -08:00 |
| Woosuk Kwon | 3e9f991d6a | Use FlashAttention for multi_query_kv_attention (#4) | 2023-03-01 21:13:08 -08:00 |
| Woosuk Kwon | 0deacbce6e | Implement single_query_cached_kv_attention kernel (#3) | 2023-03-01 15:02:19 -08:00 |
| Woosuk Kwon | cbf8779afa | Fix a bug in tying OPT embeddings (#1) | 2023-02-24 16:29:36 -08:00 |
| Woosuk Kwon | 6aef2278f4 | [Minor] Fix printing format | 2023-02-24 11:56:06 +00:00 |
| Woosuk Kwon | 1132fae0ca | Add Frontend | 2023-02-24 11:46:43 +00:00 |
| Woosuk Kwon | 46ce1356f7 | Add max_num_steps to SamplingParams | 2023-02-24 11:44:40 +00:00 |
| Woosuk Kwon | b39f149a08 | Add is_finished | 2023-02-24 11:44:21 +00:00 |
| Woosuk Kwon | ef6098ec51 | Merge pre_step and step | 2023-02-24 10:36:08 +00:00 |
| Woosuk Kwon | 53f70e7334 | Reduce the number of states in scheduler | 2023-02-24 10:22:39 +00:00 |
| Woosuk Kwon | 762fd1c3fa | Refactor and annotate types for attention | 2023-02-24 08:58:46 +00:00 |
| Woosuk Kwon | 7f22f90e8c | Remove xformers | 2023-02-24 08:36:16 +00:00 |
| Woosuk Kwon | afdbe5d373 | [WIP] Add server script | 2023-02-24 01:33:37 +00:00 |
| Woosuk Kwon | 932844f1cd | Fix attention | 2023-02-23 23:02:25 +00:00 |
| Woosuk Kwon | ba84b8728a | Fix attention | 2023-02-23 22:29:46 +00:00 |
| Woosuk Kwon | 87e0bcd426 | Fix attention | 2023-02-23 21:32:02 +00:00 |
| Woosuk Kwon | 1ce1333573 | Set default dtype to half | 2023-02-23 21:31:39 +00:00 |
| Woosuk Kwon | de0fabbc5c | Fix sampler | 2023-02-23 20:30:12 +00:00 |
| Woosuk Kwon | fdd0f2f472 | Minor | 2023-02-23 20:23:47 +00:00 |
| Woosuk Kwon | 7f985166f7 | Consider empty tensor | 2023-02-23 20:20:33 +00:00 |
| Woosuk Kwon | 86f9eb6d39 | Fix typo | 2023-02-23 20:19:24 +00:00 |
| Woosuk Kwon | 1f6c7ef437 | Add controller | 2023-02-23 09:32:19 +00:00 |
| Woosuk Kwon | d4bc1a4d24 | Add unoptimized OPT Attention | 2023-02-23 09:31:55 +00:00 |
| Woosuk Kwon | b56b6ca0d6 | Add greedy sampler | 2023-02-23 09:26:09 +00:00 |
| Woosuk Kwon | 343cea3dbc | Add seq_ids to input metadata | 2023-02-23 09:25:01 +00:00 |
| Woosuk Kwon | 4f6f4967f6 | Add get_block_table | 2023-02-23 07:55:14 +00:00 |
| Woosuk Kwon | 331fa0b042 | Implement scheduler.step & Add a threshold for batch size | 2023-02-23 07:54:20 +00:00 |
| Woosuk Kwon | 501c4bd0cd | decoding.py -> sampling_params.py | 2023-02-23 07:39:20 +00:00 |
| Woosuk Kwon | 86c682cd32 | DecodingParams -> SamplingParams | 2023-02-23 07:38:43 +00:00 |
| Woosuk Kwon | af16c05074 | Add get_len | 2023-02-23 05:58:04 +00:00 |
| Woosuk Kwon | d094512296 | Move max_context_len | 2023-02-23 04:57:46 +00:00 |
| Woosuk Kwon | 4b1ac23f53 | Fix slot mapping | 2023-02-23 00:10:07 +00:00 |
| Woosuk Kwon | 8290fce47d | Add Worker class | 2023-02-22 19:01:38 +00:00 |
| Woosuk Kwon | 7b6844e590 | Add input metadata | 2023-02-22 19:01:20 +00:00 |
| Woosuk Kwon | 608f74ffe5 | Minor | 2023-02-22 18:08:25 +00:00 |
| Woosuk Kwon | 709a69176e | Move worker/models -> models | 2023-02-22 18:03:48 +00:00 |