---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-32B
tags:
- chat
---
# QwQ-32B
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini.
**This repo contains the QwQ 32B model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
**Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
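If you are unsure which version is installed, a quick check along these lines (an illustrative snippet, not part of the original card) fails early with a clear message instead of raising the `KeyError` at load time:
```python
from packaging import version  # typically available alongside transformers
import transformers

# Qwen2-based checkpoints such as QwQ-32B need transformers >= 4.37.0
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old for QwQ-32B; "
    "upgrade with: pip install -U transformers"
)
```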
## Quickstart
Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwQ-32B"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r's are in the word \"strawberry\""
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Usage Guidelines
To achieve optimal performance, we recommend the following settings:
1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already automatically implemented, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.
2. **Sampling Parameters**:
- Use Temperature=0.6 and TopP=0.95 instead of Greedy decoding to avoid endless repetitions and enhance diversity.
- For complex reasoning tasks like math or coding, set TopK=40.
   - For other types of questions, use TopK=20. (A code sketch applying these sampling settings appears after these guidelines.)
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `\"answer\": \"C\"`."
4. **Handle Long Inputs**: For inputs exceeding 32,768 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```
For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
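As a hedged illustration of guideline 2 (together with the `\boxed{}` prompt from guideline 3), the sketch below shows one way to pass the recommended sampling parameters to `model.generate` in `transformers`. The prompt and the `max_new_tokens` budget are placeholders, not settings from the original card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# A math-style prompt, so TopK=40 per guideline 2; the \boxed{} instruction follows guideline 3.
prompt = "What is 17 * 24? Please reason step by step, and put your final answer within \\boxed{}."
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,   # placeholder budget; raise it for harder problems
    do_sample=True,        # sampling instead of greedy decoding (guideline 2)
    temperature=0.6,
    top_p=0.95,
    top_k=40,
)
response = tokenizer.batch_decode(
    [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)],
    skip_special_tokens=True,
)[0]
print(response)
```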
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
# QwQ-32B
Streamlined parameters with no loss in capability: with roughly 1/21 of the parameters, QwQ-32B rivals the performance of the 671B-parameter DeepSeek R1 at about 1/10 of the cost.
## Paper
`None`
## Model Architecture
QwQ-32B adopts the standard decoder-only Transformer architecture.
<div align=center>
<img src="./doc/qwen.png"/>
</div>
## Algorithm
A strong base model plus large-scale reinforcement learning yields strong reasoning ability, and this has become an effective new direction for training large language models. Beyond core reasoning, QwQ-32B also integrates agent-related capabilities, enabling it to think critically while using tools and to adjust its reasoning according to environmental feedback.
The authors have not yet disclosed which reinforcement learning algorithm was used. If it were GRPO, the principle is as follows (a minimal sketch of the objective appears after the workflow figure below):
Core idea of the algorithm: through a reverse KL-divergence constraint, GRPO achieves more stable policy updates. Unlike TRPO's hard constraint, it uses a soft constraint, which preserves training stability while avoiding expensive second-order optimization; the coefficient β dynamically balances exploration and exploitation.
<div align=center>
<img src="./doc/algorithm.png"/>
</div>
<div align=center>
<img src="./doc/GRPO.png"/>
</div>
GRPO algorithm workflow:
<div align=center>
<img src="./doc/GRPO_flow.png"/>
</div>
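The Qwen team has not confirmed which RL algorithm was actually used, so the following is only a minimal PyTorch sketch of the GRPO objective as published in the DeepSeekMath paper: group-normalized advantages, a clipped PPO-style surrogate, and a β-weighted KL penalty toward a frozen reference policy. All tensor shapes, hyperparameter values, and function names here are illustrative assumptions, not details of QwQ-32B's training.
```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # rewards: (G,) scalar rewards for G responses sampled from the same prompt.
    # Each response's advantage is its reward normalized within its own group.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def grpo_loss(logp_new, logp_old, logp_ref, advantages, clip_eps=0.2, beta=0.04):
    # logp_*: (G, T) per-token log-probs of the sampled responses under the current,
    # sampling-time (old), and frozen reference policies; advantages: (G,).
    ratio = torch.exp(logp_new - logp_old)
    adv = advantages.unsqueeze(-1)                      # broadcast over tokens
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    surrogate = torch.minimum(ratio * adv, clipped)     # clipped policy-gradient term
    # Unbiased per-token estimator of KL(pi_new || pi_ref); beta trades off
    # reward maximization against staying close to the reference policy.
    kl = torch.exp(logp_ref - logp_new) - (logp_ref - logp_new) - 1.0
    return -(surrogate - beta * kl).mean()
```
In a full training loop, `grpo_advantages` would be computed from a reward model or rule-based verifier over each sampled group before calling `grpo_loss`.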
## Environment Setup
```
mv QwQ-32B_pytorch QwQ-32B # strip the framework-name suffix
```
### Docker (Method 1)
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.3.0-py3.10-dtk24.04.3-ubuntu20.04
# Replace <your IMAGE ID> below with the ID of the image pulled above; for this image it is b272aae8ec72
docker run -it --shm-size=64G -v $PWD/QwQ-32B:/home/QwQ-32B -v /opt/hyhal:/opt/hyhal:ro --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video --name qwq <your IMAGE ID> bash
cd /home/QwQ-32B
pip install -r requirements.txt
pip install whl/lmslim-0.1.2+das.dtk24043-cp310-cp310-linux_x86_64.whl # install lmslim==0.1.2
pip install whl/vllm-0.6.2+das.opt1.cd549d3.dtk24043-cp310-cp310-linux_x86_64.whl # install vllm==0.6.2
```
### Dockerfile (Method 2)
```
cd /home/QwQ-32B/docker
docker build --no-cache -t qwq:latest .
docker run --shm-size=64G --name searcho1 -v /opt/hyhal:/opt/hyhal:ro --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video -v $PWD/../../QwQ-32B:/home/QwQ-32B -it qwq bash
# If installing the environment via the Dockerfile takes a long time, comment out the pip install steps inside it and install the Python libraries after starting the container: pip install -r requirements.txt
cd /home/QwQ-32B
pip install whl/lmslim-0.1.2+das.dtk24043-cp310-cp310-linux_x86_64.whl # install lmslim==0.1.2
pip install whl/vllm-0.6.2+das.opt1.cd549d3.dtk24043-cp310-cp310-linux_x86_64.whl # install vllm==0.6.2
```
### Anaconda (Method 3)
1. The DCU-specific deep learning libraries required by this project can be downloaded and installed from the 光合 developer community:
- https://developer.hpccube.com/tool/
```
DTK driver: dtk24.04.3
python:python3.10
torch:2.3.0
torchvision:0.18.1
torchaudio:2.1.2
triton:2.1.0
vllm:0.6.2
flash-attn:2.6.1
deepspeed:0.14.2
apex:1.3.0
xformers:0.0.25
transformers:4.48.0
```
`Tips: the versions of the DTK driver, python, torch, and other DCU-related tools listed above must strictly correspond to one another.`
2. Install the other, non-specialized libraries according to requirements.txt:
```
cd /home/QwQ-32B
pip install -r requirements.txt
pip install whl/lmslim-0.1.2+das.dtk24043-cp310-cp310-linux_x86_64.whl # install lmslim==0.1.2
pip install whl/vllm-0.6.2+das.opt1.cd549d3.dtk24043-cp310-cp310-linux_x86_64.whl # install vllm==0.6.2
```
## Dataset
`None`
## Training
## Inference
Pretrained weight directory structure:
```
/home/QwQ-32B
└── Qwen/QwQ-32B
```
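Assuming the directory layout above, the checkpoint can also be loaded from the local path instead of being downloaded at run time; a minimal sketch (the absolute path is inferred from the structure shown, not stated in the scripts):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed local checkpoint path, following the directory structure above
local_path = "/home/QwQ-32B/Qwen/QwQ-32B"

tokenizer = AutoTokenizer.from_pretrained(local_path)
model = AutoModelForCausalLM.from_pretrained(local_path, torch_dtype="auto", device_map="auto")
```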
### Single-node multi-card
```
# Method 1: transformers inference
python infer.py
# Method 2: vLLM inference
python infer_vllm.py
```
## Result
`Input:`
```
prompt: "How many r's are in the word \"strawberry\""
```
`Output:`
```
# vllm
Generated text: 'Okay, let\'s see... The user is asking how many times the letter \'r\' appears in the word "strawberry". Hmm, I need to make sure I get this right. First, I should probably write down the word and look at each letter one by one.\n\nSo, the word is S-T-R-A-W-B-E-R-R-Y. Let me break it down:\n\n1. S\n2. T\n3. R\n4. A\n5. W\n6. B\n7. E\n8. R\n9. R\n10. Y\n\nWait, let me count again to be sure. Sometimes when letters are close together, like the Rs here, it\'s easy to miscount. Starting over:\n\nFirst letter: S (no)\nSecond: T (no)\nThird: R (yes, that\'s one)\nFourth: A (no)\nFifth: W (no)\nSixth: B (no)\nSeventh: E (no)\nEighth: R (second one)\nNinth: R (third one)\nTenth: Y (no)\n\nSo, total Rs are at positions 3, 8, and 9. That makes three Rs. Wait, but sometimes people might pronounce it differently or maybe spell it differently? Let me check the spelling again. Strawberry is spelled S-T-R-A-W-B-E-R-R-Y. Yeah, that\'s correct. So there are three Rs. But hold on, maybe I missed an R somewhere else? Let me go through each letter once more slowly:\n\nS (1), T (2), R (3), A (4), W (5), B (6), E (7), R (8), R (9), Y (10). Yep, that\'s three Rs. The first R is after the T, then two Rs towards the end. So the answer should be three. But I remember sometimes people might confuse it with "strawbery" without the second R, but no, the correct spelling has two Rs at the end. So yeah, three Rs in total.\n</think>\n\nThe word "strawberry" contains **3** instances of the letter \'r\'. Here\'s the breakdown:\n\n1. **R** at the 3rd position \n2. **R** at the 8th position \n3. **R** at the 9th position \n\nSo, the final count is **3 r\'s**.'
```
### Precision
DCU precision is consistent with GPU; inference framework: pytorch.
## Application Scenarios
### Algorithm Category
`Conversational QA`
### Key Application Industries
`Manufacturing, media, finance, energy, healthcare, smart home, education`
## Pretrained Weights
Fast download center for pretrained weights: [SCNet AIModels](http://113.200.138.88:18080/aimodels). The pretrained weights for this project can be downloaded through the fast channel: [QwQ-32B](http://113.200.138.88:18080/aimodels/qwen/QwQ-32B.git)
ModelScope download link: [QwQ-32B](https://www.modelscope.cn/models/Qwen/QwQ-32B)
## Source Repository & Issue Feedback
- http://developer.sourcefind.cn/codes/modelzoo/QwQ-32B_pytorch.git
## References
- https://www.modelscope.cn/models/Qwen/QwQ-32B
FROM image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.3.0-py3.10-dtk24.04.3-ubuntu20.04
ENV DEBIAN_FRONTEND=noninteractive
# RUN yum update && yum install -y git cmake wget build-essential
# RUN source /opt/dtk-24.04.3/env.sh
# # Install pip-related dependencies
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
modelscope
from modelscope import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwQ-32B"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r's are in the word \"strawberry\""
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from multiprocessing import freeze_support
if __name__ == '__main__':
    freeze_support()

    # Initialize the tokenizer
    tokenizer = AutoTokenizer.from_pretrained("Qwen/QwQ-32B")

    # Pass the default decoding hyperparameters of QwQ-32B
    # max_tokens is for the maximum length for generation.
    sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=512)

    # Input the model name or path. Can be GPTQ or AWQ models.
    llm = LLM(model="Qwen/QwQ-32B", tensor_parallel_size=4)

    # Prepare your prompts
    '''
    prompt = "Tell me something about large language models."
    messages = [
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
        {"role": "user", "content": prompt}
    ]
    '''
    prompt = "How many r's are in the word \"strawberry\""
    messages = [
        {"role": "user", "content": prompt}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # generate outputs
    outputs = llm.generate([text], sampling_params)

    # Print the outputs.
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Generated text: {generated_text!r}")
# Model code
modelCode=1446
# Model name
modelName=QwQ-32B_pytorch
# Model description
modelDescription=Streamlined parameters with no loss in capability: with roughly 1/21 of the parameters it rivals the performance of the 671B-parameter DeepSeek R1, at about 1/10 of the cost.
# Application scenarios
appScenario=Inference, conversational QA, manufacturing, media, finance, energy, healthcare, smart home, education
# Framework type
frameType=pytorch