# CacheFlow

## Installation

```bash
pip install psutil numpy ray torch
pip install git+https://github.com/huggingface/transformers  # Required for LLaMA.
pip install sentencepiece  # Required for LlamaTokenizer.
pip install ninja  # To parallelize the compilation of flash-attn.
pip install flash-attn  # This may take up to 10 mins.
pip install -e .
```
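
As an optional sanity check, you can verify that the key packages import and that CUDA is visible. This is not part of the official setup; it assumes flash-attn exposes its Python module as `flash_attn`:
```bash
# Optional smoke test; `flash_attn` as the module name is an assumption.
python -c "import torch, flash_attn, cacheflow; print(torch.cuda.is_available())"
```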

## Test simple server

```bash
ray start --head
python simple_server.py
```

The detailed arguments for `simple_server.py` can be listed with:
```bash
python simple_server.py --help
```
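
When you are done experimenting, the local Ray cluster started above can be shut down with:
```bash
ray stop
```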

## FastAPI server

Install the following additional dependencies:
```bash
pip install fastapi uvicorn
```

To start the server:
```bash
ray start --head
python -m cacheflow.http_frontend.fastapi_frontend
```

To test the server:
```bash
python -m cacheflow.http_frontend.test_cli_client
```
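
Alternatively, you can poke the server with a raw HTTP request. The route, port, and JSON payload below are illustrative guesses, not the frontend's documented API; check `cacheflow/http_frontend/fastapi_frontend.py` for the actual ones:
```bash
# Hypothetical request: the /generate path, port 8000, and JSON fields are assumptions.
curl -X POST http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Hello, my name is"}'
```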

## Gradio web server

Install the following additional dependencies:
```bash
pip install gradio
```

Start the server:
```bash
python -m cacheflow.http_frontend.fastapi_frontend
# In another terminal
python -m cacheflow.http_frontend.gradio_webserver
```
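
Gradio prints a local URL when it starts (typically http://localhost:7860, Gradio's default port); open it in a browser to interact with the model.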

## Load LLaMA weights

Since the LLaMA weights are not fully public, they cannot be downloaded directly from Hugging Face. Follow the steps below to load them.

1. Convert the LLaMA weights to Hugging Face format with [this script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py).
    ```bash
    python src/transformers/models/llama/convert_llama_weights_to_hf.py \
        --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path/llama-7b
    ```
    Please make sure that `llama` is included in the output directory name; a quick load check for the converted checkpoint is sketched after this list.
2. For any of the commands above, pass `--model /output/path/llama-7b` to load the converted model. For example:
    ```bash
    python simple_server.py --model /output/path/llama-7b
    python -m cacheflow.http_frontend.fastapi_frontend --model /output/path/llama-7b
    ```
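
Before starting a server, you can optionally confirm the conversion worked by loading the checkpoint with transformers. This assumes your installed transformers build exposes `LlamaForCausalLM` (the class name changed across early versions):
```bash
# Optional check; LlamaForCausalLM is assumed to exist in the installed transformers.
python -c "from transformers import LlamaForCausalLM; LlamaForCausalLM.from_pretrained('/output/path/llama-7b')"
```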