(deployment-kserve)=
# KServe
vLLM can be deployed with [KServe](https://github.com/kserve/kserve) on Kubernetes for highly scalable distributed model serving.
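KServe's Hugging Face serving runtime can run vLLM as its backend for supported generative tasks, so a deployment typically amounts to applying an `InferenceService` manifest. The following is a minimal sketch, not a complete reference: the service name, model ID, and resource values are illustrative placeholders, and the guide linked below documents the full set of options.

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: huggingface-llama3        # illustrative service name
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface          # Hugging Face runtime; vLLM backend for generative tasks
      args:
        - --model_name=llama3
        - --model_id=meta-llama/meta-llama-3-8b-instruct  # example model ID
      resources:
        requests:
          cpu: "6"
          memory: 24Gi
          nvidia.com/gpu: "1"      # adjust to your cluster's GPU resources
        limits:
          cpu: "6"
          memory: 24Gi
          nvidia.com/gpu: "1"
```

After applying the manifest with `kubectl apply -f`, KServe manages scaling and routing for the service; consult the guide below for request examples and the complete list of runtime arguments.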
Please see [this guide](https://kserve.github.io/website/latest/modelserving/v1beta1/llm/huggingface/) for more details on using vLLM with KServe.