# vLLM

## Build from source

```bash
pip install -r requirements.txt
pip install -e .  # This may take several minutes.
```
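
To quickly sanity-check the build, try importing the package (assuming the editable install above registers it as `vllm`, as the module paths later in this README suggest):
```bash
python -c "import vllm"
```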

## Test simple server

```bash
# Single-GPU inference.
python examples/simple_server.py  # --model <your_model>

# Multi-GPU inference (e.g., 2 GPUs).
ray start --head
python examples/simple_server.py -tp 2  # --model <your_model>
```
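
In the multi-GPU command, `ray start --head` launches a local Ray cluster that the run attaches to (`-tp 2` presumably sets the parallel degree; see the `--help` output below for the exact meaning). When you are done, you can shut the cluster down with the standard Ray command:
```bash
# Stop the local Ray cluster started above.
ray stop
```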

The detailed arguments for `simple_server.py` can be found by running:
```bash
python examples/simple_server.py --help
```

## FastAPI server

To start the server:
```bash
ray start --head
python -m vllm.entrypoints.fastapi_server  # --model <your_model>
```

To test the server:
```bash
python test_cli_client.py
```
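
If you would rather exercise the HTTP API directly, a request along the following lines may work. The port, route, and JSON fields below are assumptions for illustration only; check the server source and `test_cli_client.py` for the actual interface:
```bash
# Hypothetical request: the /generate route, port 8000, and payload fields
# are assumptions, so verify them against the server implementation.
curl -X POST http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Hello, my name is", "max_tokens": 64}'
```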

## Gradio web server

Install the following additional dependencies:
```bash
pip install gradio
```

Start the servers:
```bash
python -m vllm.http_frontend.fastapi_frontend
# In another terminal
python -m vllm.http_frontend.gradio_webserver
```
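
Once both processes are up, open the Gradio UI in a browser. Gradio listens on http://localhost:7860 by default, though `gradio_webserver` may override the port.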

## Load LLaMA weights

Since the LLaMA weights are not fully public, they cannot be downloaded directly from Hugging Face. Instead, follow the steps below to load the LLaMA weights.

1. Convert the LLaMA weights to Hugging Face format with [this script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py):
    ```bash
    python src/transformers/models/llama/convert_llama_weights_to_hf.py \
        --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path/llama-7b
    ```
2. In any of the commands above, pass `--model /output/path/llama-7b` to load the converted model. For example:
    ```bash
    python examples/simple_server.py --model /output/path/llama-7b
    python -m vllm.http_frontend.fastapi_frontend --model /output/path/llama-7b
    ```
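
As a quick sanity check after the conversion in step 1, the output directory should contain a standard Hugging Face checkpoint layout. The exact file names vary with your `transformers` version:
```bash
ls /output/path/llama-7b
# Typically includes config.json, tokenizer.model, tokenizer_config.json,
# and one or more pytorch_model*.bin shards plus an index file.
```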