
(deployment-modal)=

# Modal

vLLM can be run on cloud GPUs with [Modal](https://modal.com), a serverless computing platform designed for fast auto-scaling.

For details on how to deploy vLLM on Modal, see this tutorial in the Modal documentation.
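The tutorial covers the full setup; as a rough, non-authoritative sketch of what such a deployment can look like, a vLLM OpenAI-compatible server can be wrapped in a Modal web endpoint roughly as follows. The app name, model, GPU type, port, and timeouts below are illustrative assumptions, not values prescribed by vLLM or the Modal tutorial.

```python
import subprocess

import modal

# Illustrative sketch only: app name, model, GPU type, and port are assumptions.
app = modal.App("vllm-openai-server")

# Container image with vLLM installed.
image = modal.Image.debian_slim(python_version="3.12").pip_install("vllm")

@app.function(image=image, gpu="A100", timeout=60 * 60)
@modal.web_server(port=8000, startup_timeout=5 * 60)
def serve():
    # Start the OpenAI-compatible vLLM server; Modal proxies the exposed port.
    subprocess.Popen(
        "vllm serve facebook/opt-125m --host 0.0.0.0 --port 8000",
        shell=True,
    )
```

With a file like this, `modal deploy` would publish the endpoint and Modal would scale GPU containers up and down with traffic; consult the Modal tutorial for the recommended configuration.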