vllm/tests/async_engine

Latest commit: e2fbaee725 by Nick Hill, "[BugFix][Frontend] Use LoRA tokenizer in OpenAI APIs (#6227)" (2024-07-18 15:13:30 +08:00)
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| api_server_async_engine.py | [Frontend] Add FlexibleArgumentParser to support both underscore and dash in names (#5718) | 2024-06-20 17:00:13 -06:00 |
| test_api_server.py | [Core] Fix engine-use-ray broken (#4105) | 2024-04-16 05:24:53 +00:00 |
| test_async_llm_engine.py | [Core] Pipeline Parallel Support (#4412) | 2024-07-02 10:58:08 -07:00 |
| test_chat_template.py | [BugFix][Frontend] Use LoRA tokenizer in OpenAI APIs (#6227) | 2024-07-18 15:13:30 +08:00 |
| test_openapi_server_ray.py | [Doc][CI/Build] Update docs and tests to use vllm serve (#6431) | 2024-07-17 07:43:21 +00:00 |
| test_request_tracker.py | Add health check, make async Engine more robust (#3015) | 2024-03-04 22:01:40 +00:00 |
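These files are ordinary pytest suites, so a minimal sketch of running the whole directory programmatically might look like the following (an assumption, not a documented vLLM entry point: it presumes pytest is installed and the script is launched from the repository root):

```python
# Minimal sketch: run the async_engine test suite via pytest's public API.
# Assumption: pytest is installed and this runs from the vllm repository root.
import sys

import pytest

if __name__ == "__main__":
    # pytest.main takes a list of CLI-style arguments and returns an exit
    # code (0 means every collected test passed); -q keeps the output terse.
    sys.exit(pytest.main(["tests/async_engine", "-q"]))
```

Equivalently, the same run can be invoked from a shell as `pytest tests/async_engine -q`.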