Commit 3911f6be authored by xiabo

Update README.md

parent 256b9064
@@ -60,7 +60,7 @@ cd .. && python3 setup.py install
# <model_name> name of the model ('llama', 'internlm', 'vicuna', 'internlm-chat-7b', 'internlm-chat', 'internlm-chat-7b-8k', 'internlm-chat-20b', 'internlm-20b', 'baichuan-7b', 'baichuan2-7b', 'llama2', 'qwen-7b', 'qwen-14b')
# <model_path> path to the model
# <model_format> format of the model ('llama', 'hf', 'qwen')
- # <tokenizer_path> path to the tokenizer model (defaults to None, in which case qwen.tiktoken is looked up under model_path)
+ # <tokenizer_path> path to the tokenizer model (defaults to None, in which case tokenizer.model is looked up under model_path)
# <dst_path> destination path for the saved output (defaults to ./workspace)
# <tp> number of GPUs to use for tensor parallelism; should be a power of two (2^n)
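As a hedged illustration of how these parameters fit together (the actual command is not shown in this hunk; the module path `lmdeploy.serve.turbomind.deploy`, the flag spellings, and all paths below are assumptions, not taken from this diff), a conversion call might look like:

# hypothetical invocation: convert an HF-format internlm-chat-7b checkpoint for a single GPU
python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-chat-7b hf \
    --tokenizer_path /path/to/tokenizer.model \
    --dst_path ./workspace \
    --tp 1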