ModelZoo / LLaMA-Factory-Llama3.2_pytorch · Commits

Commit 78b09731, authored Nov 06, 2024 by chenzk

v1.0.4

Parent: 670bcfcb
Showing 2 changed files, with 2 additions and 2 deletions:

README.md  (+2, -2)
whl/vllm-0.6.2+das.opt1.ac9aae1.dtk24042-cp310-cp310-linux_x86_64.whl  (+0, -0)
README.md

````diff
@@ -125,11 +125,11 @@ llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
 # Method 2: vllm inference
 # First install the new version of vllm
-pip install whl/vllm-0.6.2+das.opt1.85def94.dtk24042-cp310-cp310-linux_x86_64.whl
+pip install whl/vllm-0.6.2+das.opt1.ac9aae1.dtk24042-cp310-cp310-linux_x86_64.whl
 pip install whl/flash_attn-2.6.1+das.opt2.08f8827.dtk24042-cp310-cp310-linux_x86_64.whl
-export LM_NN=0
 # Inference
 python infer_vllm.py  # A vllm build with better performance optimization can be downloaded later from the 光合开发者社区 (Photosynthesis Developer Community).
+# If vllm cannot be invoked successfully, enter this command in the terminal: export LM_NN=0
 ```
 ## result
````
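The changed install line points at a prebuilt vllm wheel for DTK 24.04.2 / Python 3.10; the commit only swaps the wheel's build hash (85def94 → ac9aae1). The infer_vllm.py entry point referenced in the README is not part of this diff; below is a minimal sketch of what such a vllm inference script typically looks like, using vllm's standard offline-inference API. The model path and prompt are placeholders, not taken from the repository.

```python
# Hypothetical sketch of a vllm inference entry point; the repository's
# actual infer_vllm.py is not shown in this commit.
from vllm import LLM, SamplingParams

# Placeholder path: a merged model as produced by `llamafactory-cli export`.
llm = LLM(model="models/llama3_lora_sft_merged")

# Modest sampling settings and output budget.
sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

prompts = ["Briefly introduce Llama 3.2."]  # placeholder prompt
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each result carries the prompt and one or more completions.
    print(output.outputs[0].text)
```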
whl/vllm-0.6.2+das.opt1.85def94.dtk24042-cp310-cp310-linux_x86_64.whl → whl/vllm-0.6.2+das.opt1.ac9aae1.dtk24042-cp310-cp310-linux_x86_64.whl

No preview for this file type.