vllm / examples
History

Latest commit: 9d9072a069 by Zhuohan Li, 2023-10-16 10:56:50 -07:00
Implement prompt logprobs & Batched topk for computing logprobs (#1328)
Co-authored-by: Yunmo Chen <16273544+wanmok@users.noreply.github.com>
api_client.py                      [Quality] Add code formatter and linter (#326)                             2023-07-03 11:31:55 -07:00
gradio_webserver.py                API server support ipv4 / ipv6 dualstack (#1288)                           2023-10-07 15:15:54 -07:00
llm_engine_example.py              Implement prompt logprobs & Batched topk for computing logprobs (#1328)    2023-10-16 10:56:50 -07:00
offline_inference.py               [Quality] Add code formatter and linter (#326)                             2023-07-03 11:31:55 -07:00
openai_chatcompletion_client.py    [Fix] Add chat completion Example and simplify dependencies (#576)         2023-07-25 23:45:48 -07:00
openai_completion_client.py        [Fix] Add chat completion Example and simplify dependencies (#576)         2023-07-25 23:45:48 -07:00