## Flashinfer Mode

[flashinfer](https://github.com/flashinfer-ai/flashinfer) is a kernel library for LLM serving.
It can be used in the SGLang runtime to accelerate attention computation.

### Install flashinfer

For CUDA 12.1, you can install flashinfer via pip as follows.

```bash
pip install flashinfer -i https://flashinfer.ai/whl/cu121/
```

Wheels for other CUDA versions are listed at https://github.com/flashinfer-ai/flashinfer?tab=readme-ov-file#installation. If there is no prebuilt version for your environment,
please build flashinfer from source (the compilation takes a long time); see the sketch below.

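If you do need a source build, the steps are roughly the ones below. This is a minimal sketch, assuming the pip-based build under the repository's `python/` directory; check the flashinfer README for the exact steps for your version.

```bash
# Minimal sketch of a from-source build; assumes the Python package
# lives under python/ and builds via pip (see the flashinfer README).
git clone --recursive https://github.com/flashinfer-ai/flashinfer.git
cd flashinfer/python
pip install -e .  # compiles the CUDA kernels; expect a long build
```
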
### Run a Server With Flashinfer Mode

Add the `--model-mode flashinfer` argument to enable flashinfer when launching a server.

Example:

```bash
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --model-mode flashinfer
```
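
Once the server is up, you can send it a request to confirm that flashinfer-backed generation works end to end. A minimal sketch using SGLang's `/generate` endpoint (the prompt and sampling parameters here are just placeholders):

```bash
# Send a small test prompt to the running server on port 30000.
curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Once upon a time,",
    "sampling_params": {"max_new_tokens": 16, "temperature": 0}
  }'
```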