# Using vLLM

First, vLLM must be [installed](../getting_started/installation/README.md) for your chosen device in either a Python or Docker environment.

Once installed, vLLM supports the following usage patterns:

- [Inference and Serving](../serving/offline_inference.md): Run a single instance of a model.
- [Deployment](../deployment/docker.md): Scale up model instances for production.
- [Training](../training/rlhf.md): Train or fine-tune a model.