(deploying-with-kserve)=
# Deploying with KServe
vLLM can be deployed with [KServe](https://github.com/kserve/kserve) on Kubernetes for highly scalable distributed model serving.
Please see [this guide](https://kserve.github.io/website/latest/modelserving/v1beta1/llm/huggingface/) for more details on using vLLM with KServe.
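
As a starting point, the manifest below is a minimal sketch of a KServe `InferenceService` that serves a Hugging Face model through KServe's Hugging Face runtime, which uses vLLM as its backend where supported. The service name, model ID, and resource values are illustrative assumptions; adjust them for your cluster and model.

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: huggingface-llm            # illustrative name
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface          # KServe's Hugging Face runtime (vLLM backend where supported)
      args:
        - --model_name=llama3                              # name exposed by the endpoint (assumed)
        - --model_id=meta-llama/Meta-Llama-3-8B-Instruct   # any Hugging Face model ID; illustrative
      resources:
        limits:
          cpu: "6"
          memory: 24Gi
          nvidia.com/gpu: "1"      # adjust GPU count to your hardware
        requests:
          cpu: "6"
          memory: 24Gi
          nvidia.com/gpu: "1"
```

Apply the manifest with `kubectl apply -f <file>.yaml`, then query the resulting endpoint as described in the guide linked above.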