## Run evaluation

### Evaluate sglang

Host the VLM:

```
python -m sglang.launch_server --model-path Qwen/Qwen2-VL-7B-Instruct --chat-template qwen2-vl --port 30000
```
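Once the server is up, it exposes an OpenAI-compatible endpoint at `http://localhost:30000/v1`. As a quick sanity check before running the full benchmark, the sketch below only builds a chat-completions payload in that format (it does not send it); the prompt and image URL are placeholders, and the schema assumed is the standard OpenAI multimodal message format, not anything specific to this repo.

```python
import json

def build_vlm_request(prompt: str, image_url: str,
                      model: str = "Qwen/Qwen2-VL-7B-Instruct") -> dict:
    """Build an OpenAI-style chat payload with one image and one text part."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": prompt},
                ],
            }
        ],
        "max_tokens": 64,
    }

# Placeholder prompt and image URL, for illustration only.
payload = build_vlm_request("Describe this image.", "https://example.com/cat.png")
print(json.dumps(payload, indent=2))
```

POSTing this payload to `http://localhost:30000/v1/chat/completions` should return a normal chat completion if the server started correctly.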

Benchmark:

```
python benchmark/mmmu/bench_sglang.py --port 30000
```

To reduce memory usage, append a flag such as `--mem-fraction-static 0.6` to the launch command above.

### Evaluate hf

```
python benchmark/mmmu/bench_hf.py --model-path Qwen/Qwen2-VL-7B-Instruct
```
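The scores reported below are plain accuracies (correct answers divided by total questions), as is standard for MMMU. A minimal sketch of that computation, with made-up prediction and answer lists:

```python
def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Fraction of predictions that exactly match the reference answers."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Made-up multiple-choice outputs; real runs come from the bench_*.py scripts.
print(accuracy(["A", "C", "B", "D"], ["A", "B", "B", "D"]))  # 0.75
```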

MMMU scores for some popular models:

1. Qwen/Qwen2-VL-2B-Instruct: 0.241
2. Qwen/Qwen2-VL-7B-Instruct: 0.255
3. Qwen/Qwen2.5-VL-3B-Instruct: 0.245
4. Qwen/Qwen2.5-VL-7B-Instruct: 0.242