## Flashinfer Mode

[flashinfer](https://github.com/flashinfer-ai/flashinfer) is a kernel library for LLM serving.
It can be used in the SGLang runtime to accelerate attention computation.

### Install flashinfer

See https://docs.flashinfer.ai/installation.html.
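
If prebuilt wheels are available for your platform, installation is usually a single `pip install` against flashinfer's wheel index. The index URL below (CUDA 12.1, PyTorch 2.3) is only an illustrative assumption; pick the tags that match your CUDA and PyTorch versions from the installation page above.

```bash
# Example only: substitute the cu/torch tags that match your environment.
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.3/
```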

### Run a Server With Flashinfer Mode

Add the `--enable-flashinfer` argument to enable flashinfer when launching a server.

Example:

```bash
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --enable-flashinfer
```
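
Once the server is up, you can sanity-check it with a simple request to the SGLang runtime's `/generate` endpoint. The prompt and sampling parameters below are illustrative assumptions, not part of the original example.

```bash
# Send a short completion request to the local server started above.
curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "Say hello in one sentence.", "sampling_params": {"max_new_tokens": 32, "temperature": 0}}'
```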