5902 Commits

Author | SHA1 | Message | Date
Woosuk Kwon | 56b7f0efa4 | Add a doc for installation (#128) | 2023-05-27 01:13:06 -07:00
Woosuk Kwon | d721168449 | Improve setup script & Add a guard for bfloat16 kernels (#130) | 2023-05-27 00:59:32 -07:00
Woosuk Kwon | 4a151dd453 | Add activation registry (#126) | 2023-05-25 00:09:07 -07:00
Zhuohan Li | 057daef778 | OpenAI Compatible Frontend (#116) | 2023-05-23 21:39:50 -07:00
Woosuk Kwon | e86717833d | Incrementally decode output tokens (#121) | 2023-05-23 20:46:32 -07:00
Woosuk Kwon | aedba6d5ec | Print warnings/errors for large swap space (#123) | 2023-05-23 18:22:26 -07:00
Woosuk Kwon | a283ec2eec | Add contributing guideline and mypy config (#122) | 2023-05-23 17:58:51 -07:00
Woosuk Kwon | 3f942acfe1 | Fix latency benchmark script (#118) | 2023-05-22 17:03:40 -07:00
Woosuk Kwon | 19d2899439 | Add initial sphinx docs (#120) | 2023-05-22 17:02:44 -07:00
Woosuk Kwon | 655a5e48df | Introduce LLM class for offline inference (#115) | 2023-05-21 17:04:18 -07:00
Woosuk Kwon | f746ced08d | Implement stop strings and best_of (#114) | 2023-05-21 11:18:00 -07:00
Woosuk Kwon | c3442c1f6f | Refactor system architecture (#109) | 2023-05-20 13:06:59 -07:00
Zhuohan Li | 7297fa6f7c | Remove unused parts in Megatron-LM code and add copyright notice (#110) | 2023-05-20 09:11:34 -06:00
Zhuohan Li | b7955ef17b | Fix timeout error in the FastAPI frontend (#34) | 2023-05-19 14:00:46 -06:00
Zhuohan Li | f756799b84 | Use runtime profiling to replace manual memory analyzers (#81) | 2023-05-19 11:35:44 -06:00
Woosuk Kwon | 825d8892b5 | Use pytest format for unit tests (#107) | 2023-05-17 17:11:23 -07:00
Woosuk Kwon | b322fd1607 | Add docstrings to some modules and classes (#100) | 2023-05-14 22:32:38 -07:00
Woosuk Kwon | 667ba3995c | Add copyright headers to source files adapted from FT (#104) | 2023-05-14 22:19:19 -07:00
Woosuk Kwon | 707ec647bb | Add copyright headers for HF models (#103) | 2023-05-14 21:54:32 -07:00
Woosuk Kwon | 89988ec8c2 | Add Apache-2.0 license (#102) | 2023-05-14 18:05:19 -07:00
Woosuk Kwon | 6208d622ca | Minor code cleaning for SamplingParams (#99) | 2023-05-12 18:07:09 -07:00
Woosuk Kwon | 42f1042e1c | Enhance SamplingParams (#96) | 2023-05-11 15:45:30 -07:00
Woosuk Kwon | 55f8b0a5de | Implement presence and frequency penalties (#95) | 2023-05-10 23:39:12 -07:00
Woosuk Kwon | 9f88db35da | Support top-k sampling (#94) | 2023-05-10 12:51:36 -07:00
Woosuk Kwon | ae356774ab | Avoid sorting waiting queue & Minor code cleaning (#93) | 2023-05-10 01:57:07 -07:00
Woosuk Kwon | e331957784 | Log system stats (#90) | 2023-05-10 01:06:53 -07:00
Woosuk Kwon | 8d66a7b6d7 | Rename variables and methods (#91) | 2023-05-10 00:58:31 -07:00
Woosuk Kwon | ce26e57fd3 | Update sample prompts in simple_server.py (#89) | 2023-05-09 16:47:39 -07:00
Woosuk Kwon | 85eb631839 | Use slow tokenizer for LLaMA (#84) | 2023-05-09 16:03:44 -07:00
Woosuk Kwon | add055e151 | Enhance model loader (#83) | 2023-05-09 15:46:42 -07:00
Woosuk Kwon | 7c041ab578 | Refactor system architecture (#82) | 2023-05-09 15:30:12 -07:00
Woosuk Kwon | 8917782af6 | Add a system logger (#85) | 2023-05-08 23:03:35 -07:00
Woosuk Kwon | 7addca5935 | Specify python package dependencies in requirements.txt (#78) | 2023-05-07 16:30:43 -07:00
Woosuk Kwon | c84e924287 | [Minor] Fix a dtype bug (#79) | 2023-05-06 02:12:12 -07:00
Woosuk Kwon | c9d5b6d4a8 | Replace FlashAttention with xformers (#70) | 2023-05-05 02:01:08 -07:00
Woosuk Kwon | 189ae23133 | Use dtype from model config & Add Dolly V2 (#63) | 2023-05-04 03:05:37 -07:00
Woosuk Kwon | e548c1488a | Add support for GPT-2 (#60) | 2023-05-04 02:59:56 -07:00
Woosuk Kwon | 130d5fd8c7 | Fix a bug in attention kernel (#68) | 2023-05-04 02:56:09 -07:00
Woosuk Kwon | e070829ae8 | Support bfloat16 data type (#54) | 2023-05-03 14:09:44 -07:00
Woosuk Kwon | 436e523bf1 | Refactor attention kernels (#53) | 2023-05-03 13:40:13 -07:00
Zhuohan Li | 27f1410d06 | New weight loader without np copy (#52) | 2023-05-03 15:32:04 +08:00
Zhuohan Li | 4858f3bb45 | Add an option to launch cacheflow without ray (#51) | 2023-04-30 15:42:17 +08:00
Woosuk Kwon | a96d63c21d | Add support for GPT-NeoX (Pythia) (#50) | 2023-04-28 00:32:10 -07:00
Woosuk Kwon | aa50b17ca7 | Change plotting script | 2023-04-17 04:49:14 +00:00
Woosuk Kwon | 0f4b32199e | Support various block sizes & Change default block size to 16 (#38) | 2023-04-15 09:03:24 -07:00
Woosuk Kwon | 84eee24e20 | Collect system stats in scheduler & Add scripts for experiments (#30) | 2023-04-12 15:03:49 -07:00
Siyuan (Ryans) Zhuang | e3cec88aa5 | Memcpy kernel for flash attention (#29): optimize; add benchmark; add assert; add test | 2023-04-10 18:22:49 -07:00
Woosuk Kwon | b9926f7f66 | Support block size 32 (#35) | 2023-04-09 23:07:18 -07:00
Woosuk Kwon | ee88a7e5f3 | Add an option to use dummy model weights (#33) | 2023-04-08 23:36:12 -07:00
Woosuk Kwon | c267b1a02c | Add query stride to multi_query_cached_kv_attention & Add kernel benchmark script (#27) | 2023-04-08 13:36:09 -07:00
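
Several entries above concern the offline-inference API: "Introduce LLM class for offline inference" (#115), "Enhance SamplingParams" (#96), "Implement stop strings and best_of" (#114), "Support top-k sampling" (#94), and "Implement presence and frequency penalties" (#95). As a rough illustration only, the sketch below shows how that entry point is used in present-day vLLM; at the time of these commits the project was still named cacheflow, so the import path and exact parameter names here are assumptions based on the later API, not the code as it stood then.

```python
# Minimal sketch of the offline-inference entry point (cf. #115, #96, #114).
# Written against the present-day vLLM API; the cacheflow-era interface may
# have differed in import path and parameter names.
from vllm import LLM, SamplingParams

# Sampling controls referenced in the log: top-k (#94),
# presence/frequency penalties (#95), stop strings (#114).
sampling_params = SamplingParams(
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    presence_penalty=0.1,
    frequency_penalty=0.1,
    stop=["\n\n"],
)

llm = LLM(model="facebook/opt-125m")  # model name is illustrative
outputs = llm.generate(["Hello, my name is"], sampling_params)
for out in outputs:
    print(out.prompt, out.outputs[0].text)
```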