(deployment-modal)=
# Modal
vLLM can be run on cloud GPUs with [Modal](https://modal.com), a serverless computing platform designed for fast auto-scaling.
For details on how to deploy vLLM on Modal, see [this tutorial in the Modal documentation](https://modal.com/docs/examples/vllm_inference).