ModelZoo / Qwen3-Reranker

Commit 5c8c2c12, authored May 11, 2026 by chenych
Update vllm to 0.11
parent 1802b9a6

6 changed files with 78 additions and 78 deletions
- Contributors.md (+0 −2)
- README.md (+63 −68)
- doc/results-dcu-offline.png (+0 −0)
- doc/results-dcu.png (+0 −0)
- infer_vllm.py (+7 −4)
- model.properties (+8 −4)
Contributors.md (deleted, mode 100644 → 0)

```
# Contributors
None
```
(no newline at end of file)
README.md

@@ -2,112 +2,107 @@
## Paper

[Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models](https://arxiv.org/abs/2506.05176)
## Model Architecture

The series adopts dual-encoder and cross-encoder architectures.

<div align="center">
  <img src="./doc/model.png" />
</div>
## Model Overview

The Qwen3 Embedding model series is the latest proprietary model family in the Qwen3 lineup, designed specifically for text embedding and ranking tasks. The series inherits the strong multilingual capability, long-text understanding, and reasoning skills of its foundation models, and achieves significant advances across a wide range of text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
<div align="center">
  <img src="./doc/methods.png" />
</div>
## Environment Setup

Adjust the `-v` mount paths, `docker_name`, and `imageID` below to match your environment.

### Docker (Method 1)

Recommended image: `harbor.sourcefind.cn:5443/dcu/admin/base/vllm:0.11.0-ubuntu22.04-dtk26.04-py3.10`

```bash
docker pull harbor.sourcefind.cn:5443/dcu/admin/base/vllm:0.11.0-ubuntu22.04-dtk26.04-py3.10
docker run -it \
    --shm-size 256g \
    --network=host \
    --name qwen3-reranker \
    --privileged \
    --device=/dev/kfd \
    --device=/dev/dri \
    --device=/dev/mkfd \
    --group-add video \
    --cap-add=SYS_PTRACE \
    --security-opt seccomp=unconfined \
    -u root \
    -v /opt/hyhal/:/opt/hyhal/:ro \
    -v /path/your_code_data/:/path/your_code_data/ \
    harbor.sourcefind.cn:5443/dcu/admin/base/vllm:0.11.0-ubuntu22.04-dtk26.04-py3.10 bash
cd /your_code_path/qwen3-reranker_pytorch
```

### Environment Dependencies

| Software | Version |
| :------: | :------: |
| DTK | 26.04 |
| Python | 3.10.12 |
| Transformers | 4.57.6 |
| Torch | 2.5.1+das.opt1.dtk2604.20260206.ga29664ea |
| vLLM | 0.11.0+das.opt1.rc4.dtk2604.20260305.g49a30c70 |
### Dockerfile (Method 2)

```bash
cd docker
docker build --no-cache -t qwen3-reranker:latest .
docker run -it \
    --shm-size 200g \
    --network=host \
    --name {docker_name} \
    --privileged \
    --device=/dev/kfd \
    --device=/dev/dri \
    --device=/dev/mkfd \
    --group-add video \
    --cap-add=SYS_PTRACE \
    --security-opt seccomp=unconfined \
    -u root \
    -v /path/your_code_data/:/path/your_code_data/ \
    -v /opt/hyhal/:/opt/hyhal/:ro \
    {imageID} bash
cd /your_code_path/qwen3-reranker_pytorch
```
### Anaconda (Method 3)

The DCU-specific deep learning libraries required by this project can be downloaded from the [Sourcefind (光合)](https://developer.sourcefind.cn/tool/) developer community.

```bash
DTK: 25.04.1
python: 3.10
vllm: 0.9.2+das.opt1.beta.dtk25041
torch: 2.5.1+das.opt1.dtk25041
deepspeed: 0.14.2+das.opt1.dtk25041
```

`Tips: the DTK driver, Python, Torch, and other DCU-related tool versions above must match each other exactly.`
Other (non-deep-learning) dependencies can be installed as follows:

```bash
pip install "transformers>=4.51.0"
```

## Pretrained Weights

**Download the model that matches the `Supported DCU models` column below. FP8 models are supported only on BW1100/BW1101; do not use them on other DCU models!**

| Model | Weight size | Dtype | Supported DCU models | Min. cards | Download |
|:-----:|:----------:|:----------:|:----------:|:---------------------:|:----------:|
| Qwen3-Reranker-0.6B | 0.6B | BF16 | K100AI | 1 | [HuggingFace](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) |
| Qwen3-Reranker-4B | 4B | BF16 | K100AI | 1 | [HuggingFace](https://huggingface.co/Qwen/Qwen3-Reranker-4B) |
| Qwen3-Reranker-8B | 8B | BF16 | K100AI | 1 | [HuggingFace](https://huggingface.co/Qwen/Qwen3-Reranker-8B) |
## Dataset

None.

## Training

None.
## Inference

### vLLM

#### Single-node inference

##### Offline
```bash
# The HF_ENDPOINT environment variable must be set
export HF_ENDPOINT=https://hf-mirror.com
export VLLM_USE_NN=0
export ALLREDUCE_STREAM_WITH_COMPUTE=1
# --model_name_or_path: path to the model weights
python infer_vllm.py --model_name_or_path /path/your_model_path/
```
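As the `classifier_from_token: ["no", "yes"]` override used when serving suggests, the reranker's relevance score is the probability of the "yes" token versus "no". A minimal sketch of that final scoring step in pure Python, assuming the two token logits have already been extracted from the model output:

```python
import math

def rerank_score(yes_logit: float, no_logit: float) -> float:
    """Numerically stable softmax over the 'yes'/'no' token logits.

    Returns P('yes'), i.e. the document's relevance score in [0, 1].
    """
    m = max(yes_logit, no_logit)  # subtract the max to avoid overflow
    e_yes = math.exp(yes_logit - m)
    e_no = math.exp(no_logit - m)
    return e_yes / (e_yes + e_no)
```

Equal logits give a score of 0.5; a strongly positive "yes" logit drives the score toward 1.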
##### Serve

1. Start the service:

```bash
export HF_ENDPOINT=https://hf-mirror.com
export VLLM_USE_NN=0
export ALLREDUCE_STREAM_WITH_COMPUTE=1
vllm serve Qwen/Qwen3-Reranker-0.6B \
    --max-model-len 4096 \
    --block-size 16 \
    --trust-remote-code \
    --enforce-eager \
    --enable-prefix-caching \
    --served-model-name Qwen3-reranker \
    --task score \
    --disable-log-requests \
    --hf_overrides '{"architectures":["Qwen3ForSequenceClassification"],"classifier_from_token": ["no", "yes"],"is_original_qwen3_reranker": true}'
```
2. Test command:

```bash
curl http://127.0.0.1:8000/score \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
    "text_1": "ping",
    "text_2": "pong",
    "model": "Qwen3-reranker"
}'
```
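The same `/score` request can also be issued from Python; a small sketch using only the standard library, where the URL and served model name follow the serve command above:

```python
import json
import urllib.request

def build_score_request(text_1: str, text_2: str,
                        model: str = "Qwen3-reranker") -> dict:
    # Payload shape accepted by vLLM's /score endpoint.
    return {"text_1": text_1, "text_2": text_2, "model": model}

def score(text_1: str, text_2: str,
          url: str = "http://127.0.0.1:8000/score") -> dict:
    # POST the JSON payload and decode the JSON response.
    data = json.dumps(build_score_request(text_1, text_2)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data,
        headers={"Content-Type": "application/json",
                 "accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With the server running, `score("ping", "pong")` returns the parsed JSON response.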
## Results

<div align="center">
  <img src="./doc/results-dcu.png" />
</div>
### Accuracy

DCU accuracy is consistent with GPU; inference framework: vLLM.
## Application Scenarios

### Algorithm Category

Text understanding

### Key Application Industries

Manufacturing, broadcast media, home furnishing, education
## 预训练权重
-
[
Qwen3-Reranker-0.6B
](
https://huggingface.co/Qwen/Qwen3-Reranker-0.6B
)
-
[
Qwen3-Reranker-4B
](
https://huggingface.co/Qwen/Qwen3-Reranker-4B
)
-
[
Qwen3-Reranker-8B
](
https://huggingface.co/Qwen/Qwen3-Reranker-8B
)
`DCU与GPU精度一致,推理框架:vllm。`
## Source Repository and Issue Reporting

- https://developer.sourcefind.cn/codes/modelzoo/qwen3-reranker
## References

- https://github.com/QwenLM/Qwen3-Embedding
doc/results-dcu-offline.png (new file, 89.8 KB)

doc/results-dcu.png (replaced: 28.5 KB → 89.8 KB)
infer_vllm.py

```python
# Requires vllm>=0.8.5
import argparse
import gc
import json
import logging
import math
from typing import Dict, List, Optional

import torch
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import destroy_model_parallel
from vllm.inputs.data import TokensPrompt

# Arguments
parse = argparse.ArgumentParser()
parse.add_argument("--model_name_or_path", type=str, default='Qwen/Qwen3-Reranker-0.6B')
parse.add_argument("--number_of_gpu", type=int, default=1)
args = parse.parse_args()

def format_instruction(instruction, query, doc):
    ...
```
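The diff truncates `format_instruction` here. For reference, the Qwen3-Reranker model card formats the (instruction, query, document) triple as shown below; this is a sketch of that documented template, not necessarily this repository's exact implementation:

```python
def format_instruction(instruction, query, doc):
    # Default task instruction from the Qwen3-Reranker model card.
    if instruction is None:
        instruction = ("Given a web search query, retrieve relevant passages "
                       "that answer the query")
    # The reranker scores the document by predicting "yes"/"no" for this prompt.
    return (f"<Instruct>: {instruction}\n"
            f"<Query>: {query}\n"
            f"<Document>: {doc}")
```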
model.properties
View file @
5c8c2c12
# 模型唯一标识
modelCode
=
1590
# 模型名称
modelName
=
q
wen3-
r
eranker
_pytorch
modelName
=
Q
wen3-
R
eranker
# 模型描述
modelDescription
=
Qwen3嵌入模型系列是Qwen3家族最新的专有模型,专门为文本嵌入和排序任务而设计。
# 应用场景
appScenario
=
推理,文本理解,制造,广媒,家居,教育
# 运行过程
processType
=
推理
# 算法类别
appCategory
=
文本理解
# 框架类型
frameType
=
pytorch
frameType
=
vllm
# 加速卡类型
accelerateType
=
K100AI