Allen.Dou
a37415c31b
Allow user to choose which vLLM metrics to display in Grafana ( #3393 )
2024-03-14 06:35:13 +00:00
DAIZHENWEI
654865e21d
Support Mistral Model Inference with transformers-neuronx ( #3153 )
2024-03-11 13:19:51 -07:00
Sage Moore
ce4f5a29fb
Add Automatic Prefix Caching ( #2762 )
...
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
2024-03-02 00:50:01 -08:00
Liangfu Chen
3b7178cfa4
[Neuron] Support inference with transformers-neuronx ( #2569 )
2024-02-28 09:34:34 -08:00
jvmncs
8f36444c4f
multi-LoRA as extra models in OpenAI server ( #2775 )
...
How to serve the LoRAs (mimicking the [multilora inference example](https://github.com/vllm-project/vllm/blob/main/examples/multilora_inference.py)):
```terminal
$ export LORA_PATH=~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/
$ python -m vllm.entrypoints.openai.api_server \
--model meta-llama/Llama-2-7b-hf \
--enable-lora \
--lora-modules sql-lora=$LORA_PATH sql-lora2=$LORA_PATH
```
The above server will list 3 separate values when the user queries `/models`: one for the base served model, and one for each of the specified LoRA modules. In this case `sql-lora` and `sql-lora2` point to the same underlying LoRA, but this need not be the case. LoRA config values take the same values they do in `EngineArgs`.
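As a minimal sketch, the listing can be checked with a plain HTTP call (assuming the server above is reachable at the default `localhost:8000`; the comment describes the expected result, not verbatim output):
```terminal
$ curl http://localhost:8000/v1/models
# should return a model list with three entries:
# meta-llama/Llama-2-7b-hf, sql-lora, and sql-lora2
```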
No work has been done here to scope client permissions to specific models.
2024-02-17 12:00:48 -08:00
Cheng Su
4abf6336ec
Add one example to run batch inference distributed on Ray ( #2696 )
2024-02-02 15:41:42 -08:00
Robert Shaw
93b38bea5d
Refactor Prometheus and Add Request Level Metrics ( #2316 )
2024-01-31 14:58:07 -08:00
Simon Mo
1e4277d2d1
lint: format all Python files instead of just source code ( #2567 )
2024-01-23 15:53:06 -08:00
Antoni Baum
9b945daaf1
[Experimental] Add multi-LoRA support ( #1804 )
...
Co-authored-by: Chen Shen <scv119@gmail.com>
Co-authored-by: Shreyas Krishnaswamy <shrekris@anyscale.com>
Co-authored-by: Avnish Narayan <avnish@anyscale.com>
2024-01-23 15:26:37 -08:00
Jason Zhu
5d80a9178b
Minor fix in prefill cache example ( #2494 )
2024-01-18 09:40:34 -08:00
shiyi.c_98
d10f8e1d43
[Experimental] Prefix Caching Support ( #1669 )
...
Co-authored-by: DouHappy <2278958187@qq.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
2024-01-17 16:32:10 -08:00
arkohut
97460585d9
Add Gradio chatbot for OpenAI webserver ( #2307 )
2024-01-11 19:45:56 -08:00
KKY
74cd5abdd1
Add Baichuan chat template Jinja file ( #2390 )
2024-01-09 09:13:02 -08:00
Ronen Schaffer
1066cbd152
Remove deprecated parameter: concurrency_count ( #2315 )
2024-01-03 09:56:21 -08:00
Massimiliano Pronesti
c07a442854
chore(examples-docs): upgrade to OpenAI V1 ( #1785 )
2023-12-03 01:11:22 -08:00
Adam Brusselback
66785cc05c
Support chat template and echo for chat API ( #1756 )
2023-11-30 16:43:13 -08:00
iongpt
ac8d36f3e5
Refactor LLMEngine demo script for clarity and modularity ( #1413 )
...
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
2023-10-30 09:14:37 -07:00
Zhuohan Li
9d9072a069
Implement prompt logprobs & batched top-k for computing logprobs ( #1328 )
...
Co-authored-by: Yunmo Chen <16273544+wanmok@users.noreply.github.com>
2023-10-16 10:56:50 -07:00
Yunfeng Bai
09ff7f106a
API server: support IPv4 / IPv6 dualstack ( #1288 )
...
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
2023-10-07 15:15:54 -07:00
Woosuk Kwon
55fe8a81ec
Refactor scheduler ( #658 )
2023-08-02 16:42:01 -07:00
Zhuohan Li
1b0bd0fe8a
Add Falcon support (new) ( #592 )
2023-08-02 14:04:39 -07:00
Zhuohan Li
82ad323dee
[Fix] Add chat completion example and simplify dependencies ( #576 )
2023-07-25 23:45:48 -07:00
Zhuohan Li
d6fa1be3a8
[Quality] Add code formatter and linter ( #326 )
2023-07-03 11:31:55 -07:00
Woosuk Kwon
14f0b39cda
[Bugfix] Fix a bug in RequestOutput.finished ( #202 )
2023-06-22 00:17:24 -07:00
Woosuk Kwon
0b98ba15c7
Change the name to vLLM ( #150 )
2023-06-17 03:07:40 -07:00
Zhuohan Li
e5464ee484
Rename servers to engines ( #152 )
2023-06-17 17:25:21 +08:00
Zhuohan Li
eedb46bf03
Rename servers and change port numbers to reduce confusion ( #149 )
2023-06-17 00:13:02 +08:00
Woosuk Kwon
311490a720
Add script for benchmarking serving throughput ( #145 )
2023-06-14 19:55:38 -07:00
Zhuohan Li
5020e1e80c
Non-streaming simple fastapi server ( #144 )
2023-06-10 10:43:07 -07:00
Zhuohan Li
4298374265
Add docstrings for LLMServer and related classes and examples ( #142 )
2023-06-07 18:25:20 +08:00
Woosuk Kwon
211318d44a
Add throughput benchmarking script ( #133 )
2023-05-28 03:20:05 -07:00
Zhuohan Li
057daef778
OpenAI Compatible Frontend ( #116 )
2023-05-23 21:39:50 -07:00
Woosuk Kwon
655a5e48df
Introduce LLM class for offline inference ( #115 )
2023-05-21 17:04:18 -07:00
Woosuk Kwon
f746ced08d
Implement stop strings and best_of ( #114 )
2023-05-21 11:18:00 -07:00
Woosuk Kwon
c3442c1f6f
Refactor system architecture ( #109 )
2023-05-20 13:06:59 -07:00