ModelZoo / LLaMA_vllm · Commit 243aa730
Authored May 14, 2024 by zhuwenwen
add api benchmark
Parent: 156319b7
Showing 1 changed file with 14 additions and 0 deletions:
README.md (+14 / -0)
...
@@ -65,6 +65,7 @@ python offline_inference.py
Here, `prompts` is the list of prompts; `temperature` controls sampling randomness: lower values make generation more deterministic, higher values more random, 0 means greedy sampling, and the default is 1; `max_tokens=16` is the generation length, default 16; `model` is the model path; `tensor_parallel_size=1` is the number of GPUs to use, default 1; `dtype="float16"` is the inference data type: if the model weights are bfloat16, change this to float16 for inference; `quantization="gptq"` runs inference with GPTQ quantization and requires downloading the GPTQ model above.
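The parameters above map onto vLLM's offline Python API roughly as in the sketch below. The model path is a placeholder, and actually running `run_offline` requires a GPU-enabled vLLM install; the helper only collects the sampling parameters discussed above.

```python
def sampling_kwargs(temperature: float = 1.0, max_tokens: int = 16) -> dict:
    """Collect the sampling parameters described above.

    temperature=0 means greedy sampling; max_tokens bounds the number
    of generated tokens (vLLM's default is 16).
    """
    return {"temperature": temperature, "max_tokens": max_tokens}


def run_offline(model_path: str, prompts: list[str]) -> list[str]:
    # Requires vLLM and a GPU; shown for illustration, not executed here.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model=model_path,          # model path
        tensor_parallel_size=1,    # number of GPUs
        dtype="float16",           # use float16 if the weights are bfloat16
    )
    outputs = llm.generate(prompts, SamplingParams(**sampling_kwargs()))
    return [out.outputs[0].text for out in outputs]
```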
### Offline batch inference benchmark
1. Specify input and output lengths:
```bash
...
@@ -84,6 +85,19 @@ python benchmark_throughput.py -num-prompts 1 --model meta-llama/Llama-2-7b-chat
Here, `--num-prompts` is the number of prompts (the batch size), `--model` is the model path, `--dataset` is the dataset to use, `-tp` is the number of GPUs, and `dtype="float16"` is the inference data type: if the model weights are bfloat16, change this to float16 for inference.
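`benchmark_throughput.py` reports results as requests/s and tokens/s. A sketch of the underlying arithmetic (the function and numbers here are illustrative, not vLLM's own code):

```python
def throughput(num_prompts: int, total_tokens: int, elapsed_s: float) -> tuple[float, float]:
    """Return (requests per second, tokens per second).

    total_tokens counts prompt plus generated tokens across all requests,
    matching how the throughput benchmark aggregates its totals.
    """
    return num_prompts / elapsed_s, total_tokens / elapsed_s


# e.g. 1 prompt with 512 total tokens processed in 4 seconds:
req_s, tok_s = throughput(1, 512, 4.0)
```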
### API server inference benchmark
1. Start the server:
```bash
python -m vllm.entrypoints.api_server \
    --model meta-llama/Llama-2-7b-chat-hf \
    --dtype float16 \
    --enforce-eager \
    -tp 1
```
2. Start the client:
```bash
python vllm/benchmarks/benchmark_serving.py \
    --model meta-llama/Llama-2-7b-chat-hf \
    --dataset ShareGPT_V3_unfiltered_cleaned_split.json \
    --num-prompts 1 \
    --trust-remote-code
```
The parameters are the same as in the dataset-based offline batch benchmark above; see [vllm/benchmarks/benchmark_serving.py] for details.
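Besides the benchmark script, the server started in step 1 can be queried directly. The sketch below posts to the `/generate` endpoint of vLLM's demo API server; the payload fields (`prompt`, `max_tokens`, `temperature`) follow that demo server's convention and may differ across vLLM versions, so treat them as assumptions. `generate` is not executed here, since it needs the server running on localhost:8000.

```python
import json
from urllib import request


def build_payload(prompt: str, max_tokens: int = 16, temperature: float = 0.0) -> bytes:
    # Field names assumed from vLLM's demo api_server; sampling params
    # are passed alongside the prompt in the JSON body.
    body = {"prompt": prompt, "max_tokens": max_tokens, "temperature": temperature}
    return json.dumps(body).encode("utf-8")


def generate(prompt: str, url: str = "http://localhost:8000/generate") -> dict:
    # Requires the server from step 1 to be running; illustration only.
    req = request.Request(
        url,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```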
### OpenAI-compatible server
Start the server:
```bash
...