Commit 63aaaabc authored by chenych

Update to vllm-0.9.2

parent 2d237d09
......@@ -19,11 +19,10 @@ Qwen3嵌入模型系列是Qwen3家族最新的专有模型,专门为文本嵌
### Docker(方法一)
```bash
docker pull image.sourcefind.cn:5000/dcu/admin/base/vllm:0.8.5-ubuntu22.04-dtk25.04.1-rc5-das1.6-py3.10-20250711
docker pull image.sourcefind.cn:5000/dcu/admin/base/vllm:0.9.2-ubuntu22.04-dtk25.04.1-rc5-rocblas101839-0811-das1.6-py3.10-20250812-beta
docker run -it --shm-size 200g --network=host --name {docker_name} --privileged --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro {imageID} bash
cd /your_code_path/qwen3-reranker_pytorch
pip install "transformers>=4.51.0"
```
### Dockerfile(方法二)
......@@ -33,7 +32,6 @@ docker build --no-cache -t qwen3-reranker:latest .
docker run -it --shm-size 200g --network=host --name {docker_name} --privileged --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro {imageID} bash
cd /your_code_path/qwen3-reranker_pytorch
pip install "transformers>=4.51.0"
```
### Anaconda(方法三)
......@@ -41,9 +39,9 @@ pip install transformers>=4.51.0
```bash
DTK: 25.04
python: 3.10
vllm: 0.8.5
torch: 2.4.1+das.opt2.dtk2504
deepspeed: 0.14.2+das.opt2.dtk2504
vllm: 0.9.2+das.opt1.beta.dtk25041
torch: 2.5.1+das.opt1.dtk25041
deepspeed: 0.14.2+das.opt1.dtk25041
```
`Tips: the DTK driver, python, torch, and other DCU-related tool versions above must match one another exactly.`
......@@ -60,17 +58,34 @@ pip install transformers>=4.51.0
## Inference
### vLLM inference
vLLM 0.8.5 does not support launching inference in serve mode; for the offline approach, see the project script `infer_vllm.py`.
#### offline
```bash
## The HF_ENDPOINT environment variable must be set
export HF_ENDPOINT=https://hf-mirror.com
export VLLM_USE_NN=0
export ALLREDUCE_STREAM_WITH_COMPUTE=1
## model_name_or_path: path to the model weights
python infer_vllm.py --model_name_or_path /path/your_model_path/
```
#### serve
```bash
export HF_ENDPOINT=https://hf-mirror.com
export VLLM_USE_NN=0
export ALLREDUCE_STREAM_WITH_COMPUTE=1
vllm serve Qwen/Qwen3-Reranker-0.6B --max-model-len 4096 --trust-remote-code --enforce-eager --enable-prefix-caching --served-model-name Qwen3-reranker --task score --disable-log-requests --hf_overrides '{"architectures":["Qwen3ForSequenceClassification"],"classifier_from_token": ["no", "yes"],"is_original_qwen3_reranker": true}'
```
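For readability, here is the `--hf_overrides` value from the command above expanded into multi-line form (identical content to the inline JSON). It remaps the checkpoint to vLLM's sequence-classification architecture and tells vLLM to derive the score from the "no"/"yes" classifier tokens:

```json
{
  "architectures": ["Qwen3ForSequenceClassification"],
  "classifier_from_token": ["no", "yes"],
  "is_original_qwen3_reranker": true
}
```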
Test command:
```bash
curl http://127.0.0.1:8000/score -H 'accept: application/json' -H 'Content-Type: application/json' -d '{
"text_1": "ping",
"text_2": "pong",
"model": "Qwen3-reranker"
}'
```
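The `classifier_from_token: ["no", "yes"]` override means the relevance score returned by `/score` corresponds to the probability of the "yes" token under a softmax over the two classifier logits. A minimal sketch of that math (the logit values below are made-up illustrative inputs, not real model output):

```python
import math

def rerank_score(logit_no: float, logit_yes: float) -> float:
    """P("yes") from a numerically stable softmax over the two logits."""
    m = max(logit_no, logit_yes)
    e_no = math.exp(logit_no - m)
    e_yes = math.exp(logit_yes - m)
    return e_yes / (e_no + e_yes)

# Equal logits give a neutral score of 0.5;
# a higher "yes" logit pushes the score toward 1.
print(rerank_score(0.0, 0.0))             # 0.5
print(round(rerank_score(-1.0, 2.0), 4))  # 0.9526
```

With two classes, this reduces to a sigmoid of the logit difference, which is why reranker scores concentrate near 0 or 1 for clearly irrelevant or relevant pairs.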
## Result
<div align=center>
<img src="./doc/results-dcu.png"/>
......
FROM image.sourcefind.cn:5000/dcu/admin/base/custom:vllm0.8.5-ubuntu22.04-dtk25.04-rc7-das1.5-py3.10-20250612-fixpy-rocblas0611-rc2
\ No newline at end of file
FROM image.sourcefind.cn:5000/dcu/admin/base/vllm:0.9.2-ubuntu22.04-dtk25.04.1-rc5-rocblas101839-0811-das1.6-py3.10-20250812-beta
\ No newline at end of file