## Run evaluation

### Evaluate sglang

Host the VLM:

```
python -m sglang.launch_server --model-path Qwen/Qwen2-VL-7B-Instruct --port 30000
```

To reduce memory usage, it is recommended to append something like `--mem-fraction-static 0.6` to the command above.
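For example, the launch command with the reduced static memory fraction applied (the `0.6` value is a starting point; tune it for your GPU):

```
python -m sglang.launch_server --model-path Qwen/Qwen2-VL-7B-Instruct --port 30000 --mem-fraction-static 0.6
```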

Benchmark:

```
python benchmark/mmmu/bench_sglang.py --port 30000 --concurrency 16
```

You can adjust `--concurrency` to control the number of concurrent OpenAI API calls.
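A concurrency cap like this typically follows the standard bounded-concurrency pattern. Below is a minimal sketch of that pattern (not the benchmark's actual code) using `asyncio.Semaphore`; the hypothetical `fake_call` stands in for a real OpenAI request to the local server:

```python
import asyncio


async def run_bounded(n_requests: int, concurrency: int):
    """Issue n_requests coroutines with at most `concurrency` in flight at once."""
    sem = asyncio.Semaphore(concurrency)
    in_flight = 0
    peak = 0  # highest number of simultaneous in-flight calls observed

    async def fake_call(i: int) -> int:
        # Stand-in for one OpenAI chat-completion request to the server.
        nonlocal in_flight, peak
        async with sem:
            in_flight += 1
            peak = max(peak, in_flight)
            await asyncio.sleep(0)  # yield control, as a real HTTP call would
            in_flight -= 1
            return i

    results = await asyncio.gather(*(fake_call(i) for i in range(n_requests)))
    return results, peak


results, peak = asyncio.run(run_bounded(64, 16))
# `peak` never exceeds the semaphore limit of 16.
```

Higher concurrency finishes the benchmark faster but puts more simultaneous load on the server; lower it if the server starts timing out or running out of memory.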

You can use `--lora-path` to specify the LoRA adapter to apply during benchmarking. For example:
```
# Launch server with LoRA enabled
python -m sglang.launch_server --model-path microsoft/Phi-4-multimodal-instruct --port 30000 --trust-remote-code --disable-radix-cache --lora-paths vision=<LoRA path>

# Apply the LoRA adapter during inference
python benchmark/mmmu/bench_sglang.py --concurrency 8 --lora-path vision
```

### Evaluate hf

```
python benchmark/mmmu/bench_hf.py --model-path Qwen/Qwen2-VL-7B-Instruct
```