Commit b7d2607e authored by huangwb

format readme

parent e6288ec7
@@ -105,7 +105,7 @@ python run.py configs/lmdeploy/eval_llama2_lmdeploy.py
Running any of the OpenCompass tests requires starting the TGI service first. For the TGI service environment and usage, refer to: [https://developer.hpccube.com/codes/OpenDAS/text-generation-inference](https://developer.hpccube.com/codes/OpenDAS/text-generation-inference)
**Evaluating a base model**
Example of starting the service:
```shell
HIP_VISIBLE_DEVICES=3 text-generation-launcher --dtype=float16 --model-id /data/models/Llama-2-7b-chat-hf --port 3001
```
@@ -114,7 +114,7 @@ HIP_VISIBLE_DEVICES=3 text-generation-launcher --dtype=float16 --model-id /data/
```shell
python run.py configs/tgi/eval_llama2_tgi.py --debug
```
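Before launching the evaluation, it can help to confirm that the TGI service started above is actually reachable. This is an optional check, not part of the original workflow; it assumes the service listens on localhost:3001 as in the launch command above and exposes TGI's standard `/health` endpoint.
```python
# Optional sanity check: verify the TGI service is up before running the
# evaluation. Host and port are assumptions taken from the launch example.
import urllib.error
import urllib.request

TGI_HEALTH_URL = "http://localhost:3001/health"  # adjust to your --port value

try:
    with urllib.request.urlopen(TGI_HEALTH_URL, timeout=5) as resp:
        print("TGI service is up, HTTP status:", resp.status)
except urllib.error.URLError as exc:
    print("TGI service is not reachable yet:", exc)
```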
**Evaluating a chat model**
To evaluate a chat model, a `chat_template` must be provided in the `tokenizer_config.json` configuration file under the model path. Reference `chat_template`s for different models are available at [https://github.com/chujiezheng/chat_templates/tree/main/chat_templates](https://github.com/chujiezheng/chat_templates/tree/main/chat_templates)
In practice, copy `tokenizer_config.json` from the model path to another directory, for example as `tokenizer_config_llama2_7b_chat.json`, add the `chat_template` to that file, and then point the `--tokenizer-config-path` parameter to the modified file when starting the service. Taking llama as an example, the modified config can be found in [tokenizer_config_llama2_7b_chat.json](./configs/tgi/tokenizer_config_llama2_7b_chat.json).
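The following is a minimal sketch (not part of the repository) of how the copied config could be edited. The destination path is a placeholder and the simplified Llama-2-style template is for illustration only; take the real template for your model from the chat_templates repository linked above or from the example file in `./configs/tgi/`.
```python
# Hypothetical helper: copy tokenizer_config.json and add a chat_template.
# The destination path and the template below are placeholders; use the
# template that matches your model.
import json
import shutil

src = "/data/models/Llama-2-7b-chat-hf/tokenizer_config.json"  # original config
dst = "/path/to/tokenizer_config_llama2_7b_chat.json"          # modified copy

shutil.copyfile(src, dst)

with open(dst, "r", encoding="utf-8") as f:
    config = json.load(f)

# Simplified Llama-2-style Jinja template, for illustration only; the full
# template also handles system prompts and multi-turn history.
config["chat_template"] = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "{{ '<s>[INST] ' + message['content'] + ' [/INST]' }}"
    "{% elif message['role'] == 'assistant' %}"
    "{{ ' ' + message['content'] + ' </s>' }}"
    "{% endif %}"
    "{% endfor %}"
)

with open(dst, "w", encoding="utf-8") as f:
    json.dump(config, f, ensure_ascii=False, indent=2)
```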
@@ -123,6 +123,7 @@ python run.py configs/tgi/eval_llama2_tgi.py --debug
```shell
HIP_VISIBLE_DEVICES=3 text-generation-launcher --dtype=float16 --model-id /data/models/Llama-2-7b-chat-hf --port 3001 --tokenizer-config-path /path/to/tokenizer_config_llama2_7b_chat.json
```
Note: compared with the base model, the extra `--tokenizer-config-path` parameter is required.
Example of running the evaluation:
```shell
python run.py configs/tgi/eval_llama2_7b_chat_tgi.py --debug
```