# Modal

vLLM can be run on cloud GPUs with [Modal](https://modal.com), a serverless computing platform designed for fast auto-scaling.

For details on how to deploy vLLM on Modal, see [this tutorial in the Modal documentation](https://modal.com/docs/examples/vllm_inference).
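As a rough illustration of what such a deployment looks like, the sketch below defines a Modal app that installs vLLM into a container image and exposes a GPU function that runs offline inference. The model name, GPU type, and function layout here are illustrative assumptions, not part of this page; follow the linked tutorial for a production-ready setup.

```python
# Hypothetical sketch of serving vLLM on Modal; deploying it requires a
# Modal account (`pip install modal && modal setup`). Model and GPU choice
# are placeholder assumptions.
import modal

# Container image with vLLM installed.
image = modal.Image.debian_slim(python_version="3.12").pip_install("vllm")

app = modal.App("vllm-sketch", image=image)


@app.function(gpu="A100")
def generate(prompt: str) -> str:
    # Import inside the function so it resolves in the remote container.
    from vllm import LLM

    llm = LLM(model="facebook/opt-125m")  # placeholder model
    outputs = llm.generate([prompt])
    return outputs[0].outputs[0].text


@app.local_entrypoint()
def main():
    # Runs `generate` remotely on Modal when invoked with `modal run`.
    print(generate.remote("Hello, my name is"))
```

With the Modal CLI configured, `modal run` executes the entrypoint on cloud GPUs and `modal deploy` keeps the app available for auto-scaled invocation.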