Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [OpenBMB]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
<div align="center">
<img src="./assets/minicpm_logo.png" width="500em" ></img>
</div>
<h4 align="center">
<p>
<a href="https://github.com/OpenBMB/MiniCPM/blob/main/README.md">中文</a> | <b>English</b>
<p>
</h4>
<p align="center">
<a href="https://arxiv.org/pdf/2506.07900" target="_blank">MiniCPM Paper</a> |
<a href="https://openbmb.vercel.app/" target="_blank">Technical Blog</a> |
<a href="https://modelbest.feishu.cn/wiki/D2tFw8Pcsi5CIzkaHNacLK64npg" target="_blank">MiniCPM Wiki (in Chinese)</a> |
<a href="https://github.com/OpenBMB/MiniCPM-V/" target="_blank">MiniCPM-V Repo</a> |
Join our <a href="https://discord.gg/3cGQn9b3YM" target="_blank">discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a> |
<a href="https://mp.weixin.qq.com/s/KIhH2nCURBXuFXAtYRpuXg?poc_token=HBIsUWijxino8oJ5s6HcjcfXFRi0Xj2LJlxPYD9c">Join Us</a>
</p>
## Changelog🔥
- [2025.06.06] Released [**MiniCPM4**](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b)! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips!
- [2024.09.28] **[LLMxMapReduce](https://github.com/thunlp/LLMxMapReduce) is open source and enables MiniCPM3-4B to process text of any length.**
- [2024.09.18] **[SGLang](https://github.com/sgl-project/sglang) now supports MiniCPM3-4B. Thanks to inference optimizations made to the MLA structure (used in MiniCPM3) in SGLang v0.3, throughput has improved by 70% compared to vLLM!** [[Usage](#sglang-recommended)]
- [2024.09.16] [llama.cpp](https://github.com/ggerganov/llama.cpp/releases/tag/b3765) now officially supports MiniCPM3-4B! [[GGUF Model](https://huggingface.co/openbmb/MiniCPM3-4B-GGUF) | [Usage](#llamacpp)]
- [2024.09.05] We release [**MiniCPM3-4B**](https://huggingface.co/openbmb/MiniCPM3-4B)! This model outperforms Phi-3.5-mini-instruct and GPT-3.5-Turbo-0125 and is comparable to several models with 7B-9B parameters like Llama3.1-8B-Instruct, Qwen2-7B-Instruct, and GLM-4-9B-Chat.
- [2024.07.09] MiniCPM-2B has been supported by [SGLang](#sglang-inference)!
- [2024.07.05] Released [MiniCPM-S-1B](https://huggingface.co/openbmb/MiniCPM-S-1B-sft)! This model achieves an average sparsity of 87.89% in the FFN layer, reducing FFN FLOPs by 84%, while maintaining downstream task performance.
- [2024.04.11] Released [MiniCPM-2B-128k](https://huggingface.co/openbmb/MiniCPM-2B-128k), [MiniCPM-MoE-8x2B](https://huggingface.co/openbmb/MiniCPM-MoE-8x2B) and [MiniCPM-1B](https://huggingface.co/openbmb/MiniCPM-1B-sft-bf16)! Click [here](https://openbmb.vercel.app/) to read our technical blog.
- [2024.03.16] Intermediate checkpoints of MiniCPM-2B were released [here](https://huggingface.co/openbmb/MiniCPM-2B-history)!
- [2024.02.01] Released [**MiniCPM-2B**](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)! This model performs similarly to Mistral-7B on public benchmarks (with better performance in Chinese, math, and code abilities) and overall outperforms models like Llama2-13B, MPT-30B, and Falcon-40B.
## Quick Links
- [Changelog🔥](#changelog)
- [Quick Links](#quick-links)
- [Model Downloads](#model-downloads)
- [MiniCPM 4.0](#minicpm-40)
- [Evaluation Results](#evaluation-results)
- [Efficiency Evaluation](#efficiency-evaluation)
- [Comprehensive Evaluation](#comprehensive-evaluation)
- [Long Text Evaluation](#long-text-evaluation)
- [BitCPM4: Quantization](#bitcpm4-quantization)
- [BitCPM4 Evaluation](#bitcpm4-evaluation)
- [BitCPM4 Inference](#bitcpm4-inference)
- [MiniCPM4 Application](#minicpm4-application)
- [MiniCPM4-Survey: Trustworthy Survey Generation](#minicpm4-survey-trustworthy-survey-generation)
    - [MiniCPM4-MCP: Tool Use with Model Context Protocol](#minicpm4-mcp-tool-use-with-model-context-protocol)
- [MiniCPM Intel AIPC Client: A New Edge Large Model Powerhouse](#minicpm-intel-aipc-client-a-new-edge-large-model-powerhouse)
- [Inference](#inference)
- [CPM.cu](#cpmcu)
- [HuggingFace](#huggingface)
- [vLLM](#vllm)
- [SGLang](#sglang)
- [MiniCPM 3.0](#minicpm-30)
- [MiniCPM 2.0](#minicpm-20)
- [MiniCPM 1.0](#minicpm-10)
- [LICENSE](#license)
- [Institutions](#institutions)
- [Citation](#citation)
## Model Downloads
| HuggingFace | ModelScope |
|-------------|------------|
| [MiniCPM4-8B](https://huggingface.co/openbmb/MiniCPM4-8B) | [MiniCPM4-8B](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B) |
| [MiniCPM4-0.5B](https://huggingface.co/openbmb/MiniCPM4-0.5B) | [MiniCPM4-0.5B](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-0.5B) |
| [BitCPM4-1B](https://huggingface.co/openbmb/BitCPM4-1B) | [BitCPM4-1B](https://www.modelscope.cn/models/OpenBMB/BitCPM4-1B) |
| [BitCPM4-0.5B](https://huggingface.co/openbmb/BitCPM4-0.5B) | [BitCPM4-0.5B](https://www.modelscope.cn/models/OpenBMB/BitCPM4-0.5B) |
| [MiniCPM4-8B-Eagle-FRSpec](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-FRSpec) | [MiniCPM4-8B-Eagle-FRSpec](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B-Eagle-FRSpec) |
| [MiniCPM4-8B-Eagle-FRSpec-QAT](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-FRSpec-QAT) | [MiniCPM4-8B-Eagle-FRSpec-QAT](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B-Eagle-FRSpec-QAT) |
| [MiniCPM4-8B-Eagle-vLLM](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-vLLM) | [MiniCPM4-8B-Eagle-vLLM](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B-Eagle-vLLM) |
| [MiniCPM4-8B-marlin-Eagle-vLLM](https://huggingface.co/openbmb/MiniCPM4-8B-marlin-Eagle-vLLM) | [MiniCPM4-8B-marlin-Eagle-vLLM](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B-marlin-Eagle-vLLM) |
| [MiniCPM4-Survey](https://huggingface.co/openbmb/MiniCPM4-Survey) | [MiniCPM4-Survey](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-Survey) |
| [MiniCPM4-MCP](https://huggingface.co/openbmb/MiniCPM4-MCP) | [MiniCPM4-MCP](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-MCP) |
|[MiniCPM3-4B](https://huggingface.co/openbmb/MiniCPM3-4B)|[MiniCPM3-4B](https://www.modelscope.cn/models/OpenBMB/MiniCPM3-4B)|
|[MiniCPM-2B-sft](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)|[MiniCPM-2B-sft](https://modelscope.cn/models/OpenBMB/miniCPM-bf16)|
|[MiniCPM-2B-dpo](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16)|[MiniCPM-2B-dpo](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16/summary)|
|[MiniCPM-2B-128k](https://huggingface.co/openbmb/MiniCPM-2B-128k) |[MiniCPM-2B-128k](https://modelscope.cn/models/openbmb/MiniCPM-2B-128k/summary)|
|[MiniCPM-MoE-8x2B](https://huggingface.co/openbmb/MiniCPM-MoE-8x2B) |[MiniCPM-MoE-8x2B](https://modelscope.cn/models/OpenBMB/MiniCPM-MoE-8x2B)|
|[MiniCPM-1B](https://huggingface.co/openbmb/MiniCPM-1B-sft-bf16) | [MiniCPM-1B](https://modelscope.cn/models/OpenBMB/MiniCPM-1B-sft-bf16) |
|[MiniCPM-S-1B](https://huggingface.co/openbmb/MiniCPM-S-1B-sft)|[MiniCPM-S-1B](https://modelscope.cn/models/OpenBMB/MiniCPM-S-1B-sft)|
Note: More model versions can be found [here](https://huggingface.co/collections/openbmb/minicpm-2b-65d48bf958302b9fd25b698f).
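If you prefer to fetch a checkpoint ahead of time, the sketch below shows one possible way to download it with the `huggingface_hub` library; the target directory is just an illustrative choice.
```python
# Optional: download a checkpoint from the table above to a local directory.
# Requires `pip install huggingface_hub`; the local_dir path is a placeholder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="openbmb/MiniCPM4-8B",   # any model id listed above works
    local_dir="./MiniCPM4-8B",
)
print(f"Model files downloaded to {local_dir}")
```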
## MiniCPM 4.0
MiniCPM 4 is an extremely efficient edge-side large model, optimized for efficiency across four dimensions: model architecture, learning algorithms, training data, and inference systems.
- 🏗️ **Efficient Model Architecture:**
  - InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention architecture in which, for 128K-long inputs, each token computes relevance with less than 5% of the other tokens, significantly reducing the computational overhead of long-text processing
- 🧠 **Efficient Learning Algorithms:**
  - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling-prediction methods for downstream task performance, enabling a more precise search over model training configurations
  - BitCPM -- Ultimate Ternary Quantization: Compresses model parameters to ternary values, achieving an extreme reduction of about 90% in parameter bit-width
  - Efficient Training Engineering Optimization: Adopts FP8 low-precision computation combined with a multi-token prediction training strategy
- 📚 **High-Quality Training Data:**
  - UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data-cleaning strategies based on efficient data verification, and open-sources the high-quality Chinese and English pre-training dataset [Ultra-FineWeb](https://huggingface.co/datasets/openbmb/Ultra-FineWeb)
- UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale high-quality supervised fine-tuning datasets covering multiple dimensions including knowledge-intensive data, reasoning-intensive data, instruction-following data, long text understanding data, and tool calling data
- ⚡ **Efficient Inference and Deployment System:**
- CPM.cu -- Lightweight and Efficient CUDA Inference Framework: Integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding.
- ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities
### Evaluation Results
#### Efficiency Evaluation
On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing speed compared to similar-size models in long text processing tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, compared to Qwen3-8B, MiniCPM4 achieves approximately 7x decoding speed improvement.
![benchmark](./assets/minicpm4/efficiency.png)
#### Comprehensive Evaluation
MiniCPM4 launches end-side versions with 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories.
![benchmark](./assets/minicpm4/benchmark.png)
#### Long Text Evaluation
MiniCPM4 is pre-trained on 32K long texts and achieves length extension through YaRN technology. In the 128K long text needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance.
![long-niah](./assets/minicpm4/128k-niah.png)
### BitCPM4: Quantization
BitCPM4 models are ternary-quantized models derived from the MiniCPM series through quantization-aware training (QAT), achieving significant improvements in both training efficiency and parameter efficiency.
- Improvements to the training method
  - Hyperparameters are searched with wind-tunnel experiments on a small model.
  - A two-stage training method is used: high-precision training first, followed by QAT, which makes full use of the trained high-precision models and significantly reduces the computational resources required for the QAT phase.
- High parameter efficiency
  - A bit-width of only 1.58 bits achieves performance comparable to full-precision models with a similar parameter count, demonstrating high parameter efficiency.
#### BitCPM4 Evaluation
BitCPM4's performance is comparable to that of full-precision models of the same size.
![bitcpm-benchmark](./assets/minicpm4/bitcpm4-benchmark.png)
#### BitCPM4 Inference
BitCPM4's parameters are stored in a fake-quantized format, so the models can be used directly for inference within the HuggingFace framework.
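Since no special kernels are required, a BitCPM4 checkpoint can be loaded like any other MiniCPM model. Below is a minimal sketch with `transformers`, mirroring the HuggingFace example in the Inference section; the sampling settings are illustrative.
```python
# Minimal sketch: run BitCPM4-1B directly with transformers (weights are fake-quantized).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

path = "openbmb/BitCPM4-1B"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, device_map="cuda", trust_remote_code=True
)

messages = [{"role": "user", "content": "Explain ternary quantization in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```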
### MiniCPM4 Application
#### MiniCPM4-Survey: Trustworthy Survey Generation
**MiniCPM4-Survey** is an open-source LLM agent model jointly developed by [THUNLP](https://nlp.csai.tsinghua.edu.cn), Renmin University of China and [ModelBest](https://modelbest.cn/en). Built on MiniCPM4-8B, it accepts users' queries as input and autonomously generates trustworthy, long-form survey papers.
Key features include:
- **Plan-Retrieve-Write Survey Generation Framework** — We propose a multi-agent generation framework, which operates through three core stages: planning (defining the overall structure of the survey), retrieval (generating appropriate retrieval keywords), and writing (synthesizing the retrieved information to generate coherent section-level content).
- **High-Quality Dataset Construction** — We gather and process a large collection of expert-written survey papers to construct a high-quality training dataset. Meanwhile, we collect a large number of research papers to build a retrieval database.
- **Multi-Aspect Reward Design** — We carefully design a reward system with three aspects (structure, content, and citations) to evaluate the quality of the surveys, which is used as the reward function in the RL training stage.
- **Multi-Step RL Training Strategy** — We propose a *Context Manager* to ensure retention of essential information while facilitating efficient reasoning, and we construct a *Parallel Environment* to maintain efficient RL training cycles.
##### Demo and Quick Start
See [here](./demo/minicpm4/SurveyGeneration/README.md)
##### Performance Evaluation
| Method | Relevance | Coverage | Depth | Novelty | Avg. | Fact Score |
|---------------------------------------------|-----------|----------|-------|---------|-------|------------|
| Naive RAG (driven by G2FT) | 3.25 | 2.95 | 3.35 | 2.60 | 3.04 | 43.68 |
| AutoSurvey (driven by G2FT) | 3.10 | 3.25 | 3.15 | **3.15**| 3.16 | 46.56 |
| Webthinker (driven by WTR1-7B) | 3.30 | 3.00 | 2.75 | 2.50 | 2.89 | -- |
| Webthinker (driven by QwQ-32B) | 3.40 | 3.30 | 3.30 | 2.50 | 3.13 | -- |
| OpenAI Deep Research (driven by GPT-4o) | 3.50 |**3.95** | 3.55 | 3.00 | **3.50** | -- |
| MiniCPM-4-Survey | 3.45 | 3.70 | **3.85** | 3.00 | **3.50** | **68.73** |
| &nbsp;&nbsp;&nbsp;*w/o* RL | **3.55** | 3.35 | 3.30 | 2.25 | 3.11 | 50.24 |
*Performance comparison of the survey generation systems. "G2FT" stands for Gemini-2.0-Flash-Thinking, and "WTR1-7B" denotes Webthinker-R1-7B. FactScore evaluation was omitted for Webthinker, as it does not include citation functionality, and for OpenAI Deep Research, which does not provide citations when exporting the results.*
#### MiniCPM4-MCP: Tool Use with Model Context Protocol
**MiniCPM4-MCP** is an open-source on-device LLM agent model jointly developed by [THUNLP](https://nlp.csai.tsinghua.edu.cn), Renmin University of China and [ModelBest](https://modelbest.cn/en), built on [MiniCPM-4](https://huggingface.co/openbmb/MiniCPM4-8B) with 8 billion parameters. It can solve a wide range of real-world tasks by interacting with various tools and data resources through MCP. Currently, MiniCPM4-MCP supports the following:
- Utilization of tools across 16 MCP servers: These servers span various categories, including office, lifestyle, communication, information, and work management.
- Single-tool-calling capability: It can perform single- or multi-step tool calls using a single tool that complies with the MCP.
- Cross-tool-calling capability: It can perform single- or multi-step tool calls using different tools that comply with the MCP.
##### Demo
A demo is available at this [link](./demo/minicpm4/MCP/README_en.md).
##### Performance Evaluation
| MCP Server | gpt-4o<br>func | gpt-4o<br>param | gpt-4o<br>value | qwen3<br>func | qwen3<br>param | qwen3<br>value | minicpm4<br>func | minicpm4<br>param | minicpm4<br>value |
|-----------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|
| Airbnb | 89.3 | 67.9 | 53.6 | 92.8 | 60.7 | 50.0 | 96.4 | 67.9 | 50.0 |
| Amap-Maps | 79.8 | 77.5 | 50.0 | 74.4 | 72.0 | 41.0 | 89.3 | 85.7 | 39.9 |
| Arxiv-MCP-Server | 85.7 | 85.7 | 85.7 | 81.8 | 54.5 | 50.0 | 57.1 | 57.1 | 52.4 |
| Calculator | 100.0 | 100.0 | 20.0 | 80.0 | 80.0 | 13.3 | 100.0 | 100.0 | 6.67 |
| Computor-Control-MCP | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 86.7 |
| Desktop-Commander | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| Filesystem | 63.5 | 63.5 | 31.3 | 69.7 | 69.7 | 26.0 | 83.3 | 83.3 | 42.7 |
|Github | 92.0 | 80.0 | 58.0 | 80.5 | 50.0 | 27.7 | 62.8 | 25.7 | 17.1 |
| Gaode | 71.1 | 55.6 | 17.8 | 68.8 | 46.6 | 24.4 | 68.9 | 46.7 | 15.6 |
| MCP-Code-Executor | 85.0 | 80.0 | 70.0 | 80.0 | 80.0 | 70.0 | 90.0 | 90.0 | 65.0 |
| MCP-Docx | 95.8 | 86.7 | 67.1 | 94.9 | 81.6 | 60.1 | 95.1 | 86.6 | 76.1 |
| PPT | 72.6 | 49.8 | 40.9 | 85.9 | 50.7 | 37.5 | 91.2 | 72.1 | 56.7 |
| PPTx | 64.2 | 53.7 | 13.4 | 91.0 | 68.6 | 20.9 | 91.0 | 58.2 | 26.9 |
| Simple-Time-Server | 90.0 | 70.0 | 70.0 | 90.0 | 90.0 | 90.0 | 90.0 | 60.0 | 60.0 |
| Slack | 100.0 | 90.0 | 70.0 | 100.0 | 100.0 | 65.0 | 100.0 | 100.0 | 100.0 |
| Whisper | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 30.0 |
| **Average** | **80.2** | **70.2** | **49.1** | **83.5** | **67.7** | **43.8** | **88.3** | **76.1** | **51.2** |
#### MiniCPM Intel AIPC Client: A New Edge Large Model Powerhouse
Developed in collaboration between Mianbi Intelligence and Intel, the MiniCPM Intel AIPC Client is an edge large model client specially designed for devices equipped with Intel Core Ultra series processors. It delivers a low-latency, high-efficiency, and privacy-preserving local large model experience for developers, researchers, and AI enthusiasts. Its core features include:
##### Key Features
- Deep Intel Hardware Adaptation
Fully compatible with Intel Core Ultra series processors, enabling deep integration with hardware to unleash peak performance. Users can run large models smoothly on local devices without relying on cloud services.
- Extreme Optimization Based on OpenVINO
Deeply optimized with the OpenVINO inference framework, it significantly boosts inference efficiency, reaching up to **80 tokens per second**. This ensures rapid model response for both quick queries and complex task processing.
- Privacy and Security Assurance
Adopting local deployment, all data processing is completed on the device, eliminating privacy risks from cloud uploads. This provides users with peace of mind, especially for scenarios with high data privacy requirements.
- Catering to Diverse User Groups
Whether for developers chasing cutting-edge technologies, researchers focused on academic studies, or enthusiasts eager to explore AI applications, the MiniCPM Intel AIPC Client enables easy access to the power of local large models, opening the door to personalized AI exploration.
##### System Requirements
- Recommended processor: Intel Core Ultra 7 or higher (mobile version)
- Recommended RAM: 32GB or above
##### Download
[Download](https://github.com/OpenBMB/MiniCPM/releases/tag/2.4.2)
### Inference
#### CPM.cu
We **recommend** using [CPM.cu](https://github.com/OpenBMB/CPM.cu) for MiniCPM4 inference. CPM.cu is a CUDA inference framework developed by OpenBMB that integrates efficient sparse attention, speculative sampling, and quantization techniques, fully leveraging the efficiency advantages of MiniCPM4.
You can install CPM.cu by running the following command:
```bash
git clone https://github.com/OpenBMB/CPM.cu.git --recursive
cd CPM.cu
python3 setup.py install
```
You can run the following commands to test the model's speed.
```bash
python3 tests/long_prompt_gen.py # generate prompt.txt
python3 tests/test_generate.py --prompt-file prompt.txt
```
For more details about CPM.cu, please refer to the repo of [CPM.cu](https://github.com/OpenBMB/CPM.cu).
#### HuggingFace
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM4-8B'
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map=device, trust_remote_code=True)
# User can directly use the chat interface
# responds, history = model.chat(tokenizer, "Write an article about Artificial Intelligence.", temperature=0.7, top_p=0.7)
# print(responds)
# User can also use the generate interface
messages = [
{"role": "user", "content": "Write an article about Artificial Intelligence."},
]
prompt_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([prompt_text], return_tensors="pt").to(device)
model_outputs = model.generate(
**model_inputs,
max_new_tokens=1024,
top_p=0.7,
temperature=0.7
)
output_token_ids = [
    model_outputs[i][len(model_inputs['input_ids'][i]):] for i in range(len(model_inputs['input_ids']))
]
responses = tokenizer.batch_decode(output_token_ids, skip_special_tokens=True)[0]
print(responses)
```
This model supports InfLLM v2, a sparse attention mechanism designed for efficient long-sequence inference. It requires the [infllmv2_cuda_impl](https://github.com/OpenBMB/infllmv2_cuda_impl) library.
You can install it by running the following command:
```bash
git clone -b feature_infer https://github.com/OpenBMB/infllmv2_cuda_impl.git
cd infllmv2_cuda_impl
git submodule update --init --recursive
pip install -e . # or python setup.py install
```
To enable InfLLM v2, you need to add the `sparse_config` field in `config.json`:
```json
{
...,
"sparse_config": {
"kernel_size": 32,
"kernel_stride": 16,
"init_blocks": 1,
"block_size": 64,
"window_size": 2048,
"topk": 64,
"use_nope": false,
"dense_len": 8192
}
}
```
These parameters control the behavior of InfLLM v2:
* `kernel_size` (default: 32): The size of semantic kernels.
* `kernel_stride` (default: 16): The stride between adjacent kernels.
* `init_blocks` (default: 1): The number of initial blocks that every query token attends to. This ensures attention to the beginning of the sequence.
* `block_size` (default: 64): The block size for key-value blocks.
* `window_size` (default: 2048): The size of the local sliding window.
* `topk` (default: 64): Specifies that each token computes attention with only the top-k most relevant key-value blocks.
* `use_nope` (default: false): Whether to use the NOPE technique in block selection for improved performance.
* `dense_len` (default: 8192): Since Sparse Attention offers limited benefits for short sequences, the model can use standard (dense) attention for shorter texts. The model will use dense attention for sequences with a token length below `dense_len` and switch to sparse attention for sequences exceeding this length. Set this to `-1` to always use sparse attention regardless of sequence length.
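If you are working from a locally downloaded copy of the model, the same field can also be added programmatically. The sketch below simply patches `config.json` on disk with the default values listed above; the local path is a placeholder.
```python
# Add (or overwrite) the InfLLM v2 sparse_config block in a local model's config.json.
import json
from pathlib import Path

config_path = Path("./MiniCPM4-8B/config.json")  # placeholder: path to your local checkpoint
config = json.loads(config_path.read_text())

config["sparse_config"] = {
    "kernel_size": 32,
    "kernel_stride": 16,
    "init_blocks": 1,
    "block_size": 64,
    "window_size": 2048,
    "topk": 64,
    "use_nope": False,
    "dense_len": 8192,
}

config_path.write_text(json.dumps(config, indent=2, ensure_ascii=False))
```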
MiniCPM4 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens by modifying the LongRoPE factor.
You can apply the LongRoPE factor modification by editing the model files: in `config.json`, adjust the `rope_scaling` field as follows.
```json
{
...,
"rope_scaling": {
"rope_type": "longrope",
"long_factor": [0.9977997200264581, 1.014658295992452, 1.0349680404997148, 1.059429246056193, 1.0888815016813513, 1.1243301355211495, 1.166977103606075, 1.2182568066927284, 1.2798772354275727, 1.3538666751582975, 1.4426259039919596, 1.5489853358570191, 1.6762658237220625, 1.8283407612492941, 2.0096956085876183, 2.225478927469756, 2.481536379650452, 2.784415934557119, 3.1413289096347365, 3.560047844772632, 4.048719380066383, 4.752651957515948, 5.590913044973868, 6.584005926629993, 7.7532214876576155, 9.119754865903639, 10.704443927019176, 12.524994176518703, 14.59739595363613, 16.93214476166354, 19.53823297353041, 22.417131025031697, 25.568260840911098, 28.991144156566317, 32.68408069090375, 36.65174474170465, 40.90396065611201, 45.4664008671033, 50.37147343433591, 55.6804490772103, 61.470816952306556, 67.8622707390618, 75.00516023410414, 83.11898235973767, 92.50044360202462, 103.57086856690864, 116.9492274587385, 118.16074567836519, 119.18497548708795, 120.04810876261652, 120.77352815196981, 121.38182790207875, 121.89094985353891, 122.31638758099915, 122.6714244963338, 122.9673822552567, 123.21386397019609, 123.41898278254268, 123.58957065488238, 123.73136519024158, 123.84917421274221, 123.94701903496814, 124.02825801299717, 124.09569231686116],
"short_factor": [0.9977997200264581, 1.014658295992452, 1.0349680404997148, 1.059429246056193, 1.0888815016813513, 1.1243301355211495, 1.166977103606075, 1.2182568066927284, 1.2798772354275727, 1.3538666751582975, 1.4426259039919596, 1.5489853358570191, 1.6762658237220625, 1.8283407612492941, 2.0096956085876183, 2.225478927469756, 2.481536379650452, 2.784415934557119, 3.1413289096347365, 3.560047844772632, 4.048719380066383, 4.752651957515948, 5.590913044973868, 6.584005926629993, 7.7532214876576155, 9.119754865903639, 10.704443927019176, 12.524994176518703, 14.59739595363613, 16.93214476166354, 19.53823297353041, 22.417131025031697, 25.568260840911098, 28.991144156566317, 32.68408069090375, 36.65174474170465, 40.90396065611201, 45.4664008671033, 50.37147343433591, 55.6804490772103, 61.470816952306556, 67.8622707390618, 75.00516023410414, 83.11898235973767, 92.50044360202462, 103.57086856690864, 116.9492274587385, 118.16074567836519, 119.18497548708795, 120.04810876261652, 120.77352815196981, 121.38182790207875, 121.89094985353891, 122.31638758099915, 122.6714244963338, 122.9673822552567, 123.21386397019609, 123.41898278254268, 123.58957065488238, 123.73136519024158, 123.84917421274221, 123.94701903496814, 124.02825801299717, 124.09569231686116],
"original_max_position_embeddings": 32768
}
}
```
#### vLLM
- Install vLLM
Refer to the vLLM [official repository](https://github.com/vllm-project/vllm) and install the latest version, for example via the nightly wheels:
```bash
pip install -U vllm \
--pre \
--extra-index-url https://wheels.vllm.ai/nightly
```
- Inference MiniCPM4-8B with vLLM:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_name = "openbmb/MiniCPM4-8B"
prompt = [{"role": "user", "content": "Please recommend 5 tourist attractions in Beijing. "}]
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
llm = LLM(
model=model_name,
trust_remote_code=True,
max_num_batched_tokens=32768,
dtype="bfloat16",
gpu_memory_utilization=0.8,
)
sampling_params = SamplingParams(top_p=0.7, temperature=0.7, max_tokens=1024, repetition_penalty=1.02)
outputs = llm.generate(prompts=input_text, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
- Use Eagle Speculative Decoding in vLLM: initialize the inference engine as follows.
```python
llm = LLM(
model=model_name,
trust_remote_code=True,
max_num_batched_tokens=32768,
dtype="bfloat16",
gpu_memory_utilization=0.8,
speculative_config={
"method": "eagle",
"model": "openbmb/MiniCPM4-8B-Eagle-vLLM",
"num_speculative_tokens": 2,
"max_model_len": 32768,
},
)
```
- Inference quantized MiniCPM4-8B: initialize the inference engine as follows.
```python
llm = LLM(
model="openbmb/MiniCPM4-8B-marlin-vLLM",
trust_remote_code=True,
max_num_batched_tokens=32768,
dtype="bfloat16",
gpu_memory_utilization=0.8,
)
```
- Use Eagle Speculative Decoding for quantized MiniCPM4-8B: initialize the inference engine as follows.
```python
llm = LLM(
model="openbmb/MiniCPM4-8B-marlin-vLLM",
trust_remote_code=True,
max_num_batched_tokens=32768,
dtype="bfloat16",
gpu_memory_utilization=0.8,
speculative_config={
"method": "eagle",
"model": "openbmb/MiniCPM4-8B-marlin-Eagle-vLLM",
"num_speculative_tokens": 2,
"max_model_len": 32768,
},
)
```
> **Note**: If you're using an OpenAI-compatible server in vLLM, the `chat` API sets `add_special_tokens=False` by default. This will result in missing special tokens—such as the beginning-of-sequence (BOS) token—which are required for proper prompt formatting in **MiniCPM4**. To ensure correct behavior, you must explicitly set `extra_body={"add_special_tokens": True}` in your API call, as shown below:
```python
import openai
client = openai.Client(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
model="openbmb/MiniCPM4-8B",
messages=[
{"role": "user", "content": "Write an article about Artificial Intelligence."},
],
temperature=0.7,
max_tokens=1024,
extra_body={"add_special_tokens": True}, # Ensures special tokens like BOS are added
)
print(response.choices[0].message.content)
```
#### SGLang
- Install SGLang
Refer to the SGLang [official repository](https://github.com/sgl-project/sglang) and install it from *source*:
```bash
git clone -b openbmb https://github.com/sgl-project/sglang.git
cd sglang
pip install --upgrade pip
pip install -e "python[all]"
```
- Start inference service
```shell
python -m sglang.launch_server --model openbmb/MiniCPM4-8B --trust-remote-code --port 30000 --chat-template chatml
```
- Then, users can use the chat interface by running the following command:
```python
import openai
client = openai.Client(base_url=f"http://localhost:30000/v1", api_key="None")
response = client.chat.completions.create(
model="openbmb/MiniCPM4-8B",
messages=[
{"role": "user", "content": "Write an article about Artificial Intelligence."},
],
temperature=0.7,
max_tokens=1024,
)
print(response.choices[0].message.content)
```
- Use speculative acceleration
```shell
python3 -m sglang.launch_server --model-path [model] \
--speculative_draft_model_path [draft_model] \
--host 0.0.0.0 --trust-remote-code \
--speculative-algorithm EAGLE --speculative-num-steps 1 --speculative-eagle-topk 1 --speculative-num-draft-tokens 2 \
--mem-fraction 0.5
```
## MiniCPM 3.0
<details>
<summary>Click to view details about MiniCPM3.0</summary>
MiniCPM 3.0 is a language model with 4 billion parameters. Compared to MiniCPM 1.0/2.0, it offers more comprehensive features and a significant improvement in overall capabilities. Its performance on most evaluation benchmarks rivals or even surpasses many models with 7B-9B parameters.
* **Supports Function Call🛠️ and Code Interpreter💻**: Achieved SOTA among models with fewer than 9B parameters on the [Berkeley Function Calling Leaderboard (BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html), outperforming GLM-4-9B-Chat and Qwen2-7B-Instruct.
* **Exceptional Reasoning Ability🧮**: In terms of math abilities, it outperforms GPT-3.5-Turbo and several 7B-9B models on [MathBench](https://open-compass.github.io/MathBench/). On the highly challenging [LiveCodeBench](https://livecodebench.github.io/), it surpasses Llama3.1-8B-Instruct.
* **Outstanding Instruction-Following in English and Chinese🤖**: Exceeds GLM-4-9B-Chat and Qwen2-7B-Instruct on English instruction following with [IFEval](https://huggingface.co/datasets/google/IFEval) and on Chinese instruction following with [FollowBench-zh](https://huggingface.co/datasets/YuxinJiang/FollowBench).
* **Long Context Capability**: Natively supports 32k context length, with flawless performance. We introduce the [LLMxMapReduce](https://github.com/thunlp/LLMxMapReduce) framework, theoretically enabling processing of context lengths up to infinity. Enhanced by LLMxMapReduce, MiniCPM3-4B achieves performance comparable to GPT-4 and KimiChat on InfiniteBench.
* **RAG Capability**: We release the [MiniCPM RAG Suite](https://huggingface.co/collections/openbmb/minicpm-rag-suite-66d976b4204cd0a4f8beaabb). Based on the MiniCPM series models, [MiniCPM-Embedding](https://huggingface.co/openbmb/MiniCPM-Embedding) and [MiniCPM-Reranker](https://huggingface.co/openbmb/MiniCPM-Reranker) achieve SOTA performance on Chinese and Chinese-English cross-lingual retrieval tests. Specifically designed for the RAG scenario, [MiniCPM3-RAG-LoRA](https://huggingface.co/openbmb/MiniCPM3-RAG-LoRA) outperforms models like Llama3-8B and Baichuan2-13B on multiple tasks, such as open-domain question answering.
### Evaluation Results
#### Comprehensive Evaluation
<table>
<tr>
<td>Benchmarks</td>
<td>Qwen2-7B-Instruct</td>
<td>GLM-4-9B-Chat</td>
<td>Gemma2-9B-it</td>
<td>Llama3.1-8B-Instruct</td>
<td>GPT-3.5-Turbo-0125</td>
<td>Phi-3.5-mini-Instruct(3.8B)</td>
<td>MiniCPM3-4B </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>English</strong></td>
</tr>
<tr>
<td>MMLU</td>
<td>70.5</td>
<td>72.4</td>
<td>72.6</td>
<td>69.4</td>
<td>69.2</td>
<td>68.4</td>
<td>67.2 </td>
</tr>
<tr>
<td>BBH</td>
<td>64.9</td>
<td>76.3</td>
<td>65.2</td>
<td>67.8</td>
<td>70.3</td>
<td>68.6</td>
<td>70.2 </td>
</tr>
<tr>
<td>MT-Bench</td>
<td>8.41</td>
<td>8.35</td>
<td>7.88</td>
<td>8.28</td>
<td>8.17</td>
<td>8.60</td>
<td>8.41 </td>
</tr>
<tr>
<td>IFEVAL (Prompt Strict-Acc.)</td>
<td>51.0</td>
<td>64.5</td>
<td>71.9</td>
<td>71.5</td>
<td>58.8</td>
<td>49.4</td>
<td>68.4 </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>Chinese</strong></td>
</tr>
<tr>
<td>CMMLU</td>
<td>80.9</td>
<td>71.5</td>
<td>59.5</td>
<td>55.8</td>
<td>54.5</td>
<td>46.9</td>
<td>73.3 </td>
</tr>
<tr>
<td>CEVAL</td>
<td>77.2</td>
<td>75.6</td>
<td>56.7</td>
<td>55.2</td>
<td>52.8</td>
<td>46.1</td>
<td>73.6 </td>
</tr>
<tr>
<td>AlignBench v1.1</td>
<td>7.10</td>
<td>6.61</td>
<td>7.10</td>
<td>5.68</td>
<td>5.82</td>
<td>5.73</td>
<td>6.74 </td>
</tr>
<tr>
<td>FollowBench-zh (SSR)</td>
<td>63.0</td>
<td>56.4</td>
<td>57.0</td>
<td>50.6</td>
<td>64.6</td>
<td>58.1</td>
<td>66.8 </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>Mathematics</strong></td>
</tr>
<tr>
<td>MATH</td>
<td>49.6</td>
<td>50.6</td>
<td>46.0</td>
<td>51.9</td>
<td>41.8</td>
<td>46.4</td>
<td>46.6 </td>
</tr>
<tr>
<td>GSM8K</td>
<td>82.3</td>
<td>79.6</td>
<td>79.7</td>
<td>84.5</td>
<td>76.4</td>
<td>82.7</td>
<td>81.1 </td>
</tr>
<tr>
<td>MathBench</td>
<td>63.4</td>
<td>59.4</td>
<td>45.8</td>
<td>54.3</td>
<td>48.9</td>
<td>54.9</td>
<td>65.6 </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>Coding</strong></td>
</tr>
<tr>
<td>HumanEval+</td>
<td>70.1</td>
<td>67.1</td>
<td>61.6</td>
<td>62.8</td>
<td>66.5</td>
<td>68.9</td>
<td>68.3 </td>
</tr>
<tr>
<td>MBPP+</td>
<td>57.1</td>
<td>62.2</td>
<td>64.3</td>
<td>55.3</td>
<td>71.4</td>
<td>55.8</td>
<td>63.2 </td>
</tr>
<tr>
<td>LiveCodeBench v3</td>
<td>22.2</td>
<td>20.2</td>
<td>19.2</td>
<td>20.4</td>
<td>24.0</td>
<td>19.6</td>
<td>22.6 </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>Tool Use</strong></td>
</tr>
<tr>
<td>BFCL v2</td>
<td>71.6</td>
<td>70.1</td>
<td>19.2</td>
<td>73.3</td>
<td>75.4</td>
<td>48.4</td>
<td>76.0 </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>Overall</strong></td>
</tr>
<tr>
<td>Average</td>
<td>65.3</td>
<td>65.0</td>
<td>57.9</td>
<td>60.8</td>
<td>61.0</td>
<td>57.2</td>
<td><strong>66.3</strong></td>
</tr>
</table>
#### Function Calling
We evaluate the function calling capability of MiniCPM3 on [Berkeley Function Calling Leaderboard (BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html). MiniCPM3-4B outperforms several models with 7B-9B parameters on this leaderboard, surpassing GPT-3.5-Turbo-0125.
<table>
<tr>
<td>Model</td>
<td>Overall Accuracy</td>
<td>AST Summary</td>
<td>Exec Summary</td>
<td>Irrelevance Detection</td>
<td>Relevance Detection </td>
</tr>
<tr>
<td>MiniCPM3-4B</td>
<td>76.03%</td>
<td>68.55%</td>
<td>85.54%</td>
<td>53.71%</td>
<td>90.24% </td>
</tr>
<tr>
<td>Llama3.1-8B-Instruct</td>
<td>73.28%</td>
<td>64.61%</td>
<td>86.48%</td>
<td>43.12%</td>
<td>85.37% </td>
</tr>
<tr>
<td>Qwen2-7B-Instruct</td>
<td>71.61%</td>
<td>65.71%</td>
<td>79.57%</td>
<td>44.70%</td>
<td>90.24% </td>
</tr>
<tr>
<td>GLM-4-9B-Chat</td>
<td>70.08%</td>
<td>60.69%</td>
<td>80.02%</td>
<td>55.02%</td>
<td>82.93% </td>
</tr>
<tr>
<td>Phi-3.5-mini-instruct</td>
<td>48.44%</td>
<td>38.89%</td>
<td>54.04%</td>
<td>46.78%</td>
<td>65.85% </td>
</tr>
<tr>
<td>Gemma2-9B-it</td>
<td>19.18%</td>
<td>5.41%</td>
<td>18.50%</td>
<td>88.88%</td>
<td>7.32%</td>
</tr>
</table>
#### Long Context Capability
In the [Needle in a Haystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) test with a context length of 32k, the results are shown as follows:
![needle](assets/eval_needle.jpeg)
We also propose a divide-and-conquer long-sequence processing framework, [LLMxMapReduce](https://github.com/thunlp/LLMxMapReduce), to support text of any length. With it, MiniCPM3 achieves performance comparable to GPT-4 and KimiChat.
| | Context length| Qwen2-70b | Kimi-Chat(2024.06) | GPT-4 (From InfiniteBench) | MiniCPM 3.0 x MR | Qwen2-70b x MR | Llama3-70bx MR |
| ----------------------------- | ---------- | --------- | ------------------ | -------------------------- | --------------- | ------------ | ------------- |
| Math.Find | 87.9k | 59.71% | 18.57% | 60.00% | 83.43% | 54.29% | **91.43%** |
| Retrieve.KV | 89.9k | 29.00% | 69.20% | 89.00% | 93.80% | 98.80% | **98.89%** |
| En.Dia | 103.6K | 23.00% | 23.00% | 7.50% | 12.50% | **46.50%** | 17.50% |
| Code.Debug | 114.7k | 45.43% | 38.32% | 54.31% | 25.63% | 54.82% | **62.94%** |
| Retrieve.Number | 122.4k | **100.00%** | 97.45% | **100.00%** | 99.32% | **100.00%** | 99.79% |
| Retrieve.PassKey | 122.4k | **100.00%** | 99.32% | **100.00%** | 98.81% | **100.00%** | **100.00%** |
| En.Sum | 171.5K | 31.85% | 29.94% | 14.73% | 25.89% | **32.39%** | 30.63% |
| En.MC | 184.4k | 81.66% | 79.91% | 68.12% | 66.38% |**83.84%** | 82.10% |
| En.QA | 192.6k | 21.97% | 18.80% | 22.44% | 28.39% | 23.13% | **34.70%** |
| Zh.QA | 2068.6k | 21.40% | 19.84% | **25.96%** | 23.66% | 19.10% | N/A |
| avg w/o Zh.QA | / | 51.92% | 52.96% | 55.33% | 59.29% | 64.98% | **68.64%** |
| avg | / | 48.86% | 49.65% | 52.39% | 55.55% | **60.39%** | N/A |
### Inference
#### Huggingface
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM3-4B'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)
responds, history = model.chat(tokenizer, "Write an article about Artificial Intelligence.", temperature=0.7, top_p=0.7)
print(responds)
```
#### SGLang (Recommended)
* Installation
Refer to SGLang [repo](https://github.com/sgl-project/sglang) to install the latest version *via source code*.
* Launch a server
```shell
python -m sglang.launch_server --model openbmb/MiniCPM3-4B --trust-remote-code --port 30000 --chat-template chatml
```
* Example code
```python
from sglang import function, system, user, assistant, gen, set_default_backend, RuntimeEndpoint
@function
def multi_turn_question(s, question_1, question_2):
s += user(question_1)
s += assistant(gen("answer_1", max_tokens=1024))
s += user(question_2)
s += assistant(gen("answer_2", max_tokens=1024))
set_default_backend(RuntimeEndpoint("http://localhost:30000"))
state = multi_turn_question.run(
question_1="Introduce artificial intelligence",
question_2="Write an article about it",
)
for m in state.messages():
print(m["role"], ":", m["content"])
```
#### vLLM
* Install vllm
```shell
pip install "vllm>=0.6.2"
```
* Inference
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_name = "openbmb/MiniCPM3-4B"
prompt = [{"role": "user", "content": "Write an article about Artificial Intelligence."}]
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
llm = LLM(model=model_name,
trust_remote_code=True,
tensor_parallel_size=1
)
sampling_params = SamplingParams(top_p=0.7, temperature=0.7, max_tokens=1024)
outputs = llm.generate(prompts=input_text, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
#### llama.cpp
We have provided the [GGUF formats](https://huggingface.co/openbmb/MiniCPM3-4B-GGUF) of MiniCPM3, which can be used in llama.cpp.
* Install llama.cpp
```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```
* Inference
```shell
./llama-cli -c 1024 -m minicpm3-4b-fp16.gguf -n 1024 --top-p 0.7 --temp 0.7 --prompt "<|im_start|>user\nWrite an article about Artificial Intelligence.<|im_end|>\n<|im_start|>assistant\n"
```
### Fine-Tuning
#### LLaMA-Factory
Fine-tuning MiniCPM3 with [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) is supported. For usage instructions, refer to [LLaMA-Factory Fine-tuning](https://modelbest.feishu.cn/docx/Z7USdW4lloZzkZxQ14icJ3senjb?from=from_copylink).
### Advanced Features
We use [vLLM](#vllm) in the example code for the following advanced features.
#### Function calling
We provide example code for using function calls with MiniCPM3:
```bash
cd demo/minicpm3/function_call
python function_call.py
```
If you want to start a function call service, use the following commands:
```bash
cd demo/minicpm3/function_call
pip install -r requirements.txt
python openai_api_server.py \
--model openbmb/MiniCPM3-4B \
--served-model-name MiniCPM3-4B \
--chat-template chatml.jinja \
--dtype auto \
--api-key token-abc123 \
--tensor-parallel-size 1 \
--trust-remote-code
```
Below is a demo of using a search engine to answer a question:
![function_call](./assets/function_call.gif)
#### Code Interpreter
We provide example code for using the code interpreter with MiniCPM3:
```bash
cd demo/minicpm3/code_interpreter
pip install -r requirements.txt
python code_interpreter.py openbmb/MiniCPM3-4B
```
Below is an example of using the code interpreter to generate a QR code:
![code_interpreter](./assets/code_interpreter.gif)
</details>
## MiniCPM 2.0
<details>
<summary>Click to view details about MiniCPM2.0</summary>
### Introduction
The MiniCPM 2.0 series upgrades MiniCPM along multiple dimensions, including:
- [MiniCPM-2B-128k](https://huggingface.co/openbmb/MiniCPM-2B-128k): Extends the MiniCPM-2B context window to 128k, outperforming larger models such as ChatGLM3-6B-128k and Yi-6B-200k on InfiniteBench.
- [MiniCPM-MoE-8x2B](https://huggingface.co/openbmb/MiniCPM-MoE-8x2B): Upcycled from MiniCPM-2B; compared to MiniCPM-2B, overall performance improves by an average of 4.5 percentage points.
- [MiniCPM-1B](https://huggingface.co/openbmb/MiniCPM-1B-sft-bf16): Reduces inference cost by 60% compared with MiniCPM-2B, while still showing better overall performance than LLaMA2-13B.
- [MiniCPM-S-1B](https://huggingface.co/openbmb/MiniCPM-S-1B-sft): The FFN layer achieves an average sparsity of 87.89%, reducing FFN FLOPs by 84% with no performance loss on downstream tasks. Combined with PowerInfer, MiniCPM-S-1B achieves an inference speedup of approximately 2.8x.
### Evaluation Results
#### MiniCPM-2B-128k
| Model | avg | avg w/o code&math | passkey | number_string | kv_retrieval | longbook_choice_eng | longbook_qa_chn | longbook_qa_eng | longbook_sum_eng | longdialogue_qa_eng | math_calc | math_find | code_debug | code_run |
|-------------------------------------|-------|-------------------|---------|---------------|--------------|---------------------|-----------------|-----------------|------------------|---------------------|-----------|-----------|------------|----------|
| LWM-Text-128k | 24.45 | 33.62 | 100 | 97.8 | 0.6 | 28.82 | 15.93 | 14.31 | 9.99 | 1.5 | 0 | 3.43 | 20.05 | 1 |
| Yarn-Mistral-7b-128k | 19.84 | 27.36 | 92.71 | | 0 | 27.95 | 15.49 | 9.55 | 9.06 | 7.5 | 0 | 17.14 | 0.76 | 1.25 |
| Mistral-7B-Instruct-v0.2(ABF 1000w) | 27.75 | 36.9 | 100 | 78.98 | 3.6 | 37.12 | 11.74 | 17.37 | 21.12 | 9.5 | 0 | 29.43 | 17.51 | 0 |
| Yi-6B-200k | 22.15 | 32.54 | 100 | 94.92 | 0 | 36.68 | 15.07 | 9.2 | 0.92 | 3.5 | 0 | 4.29 | 0.51 | 0.75 |
| chatglm3-6b-128k | 25.58 | 36.57 | 89.93 | 99.66 | 5.2 | 46.29 | 10.7 | 8.38 | 25.91 | 6.5 | 0 | 8 | 5.33 | 1 |
| MiniCPM-2.4B-128k | 27.32 | 37.68 | 98.31 | 99.83 | 9 | 29.69 | 23.06 | 16.33 | 15.73 | 9.5 | 0 | 4.29 | 22.08 | 0 |
#### MiniCPM-MoE-8x2B
<div align="left">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th nowrap="nowrap" >BBH</th>
<th nowrap="nowrap" >MMLU</th>
<th nowrap="nowrap" >CEval</th>
<th nowrap="nowrap" >CMMLU</th>
<th nowrap="nowrap" >HumanEval</th>
<th nowrap="nowrap" >MBPP&dagger;</th>
<th nowrap="nowrap" >GSM8K</th>
<th nowrap="nowrap" >MATH</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td nowrap="nowrap" align="left">Llama2-34B*</td>
<td>44.1</td>
<td>62.6</td>
<td>-</td>
<td>-</td>
<td>22.6</td>
<td>33.0</td>
<td>42.2</td>
<td>6.24</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Mistral-7B-Instruct-v0.2</td>
<td>39.81</td>
<td>60.51</td>
<td>42.55</td>
<td>41.92</td>
<td>36.59</td>
<td>39.63</td>
<td>40.49</td>
<td>4.95</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Gemma-7B*</td>
<td>55.1</td>
<td>64.3</td>
<td>-</td>
<td>-</td>
<td>32.3</td>
<td>44.4</td>
<td>46.4</td>
<td>24.3</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Qwen1.5-7B*</td>
<td>40.2</td>
<td>61</td>
<td>74.1</td>
<td>73.1</td>
<td>36</td>
<td>37.4</td>
<td>62.5</td>
<td>20.3</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Deepseek-MoE(16B)*</td>
<td>-</td>
<td>45.0</td>
<td>40.6</td>
<td>42.5</td>
<td>26.8</td>
<td>39.2</td>
<td>18.8</td>
<td>4.3</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><b>MiniCPM-2.4B</b></td>
<td>36.87</td>
<td>53.46</td>
<td>51.13</td>
<td>51.07</td>
<td>50.00</td>
<td>35.93</td>
<td>53.83</td>
<td>10.24</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><b>MiniCPM-MoE-8x2B</b></td>
<td>39.22</td>
<td>58.90</td>
<td>58.11</td>
<td>58.80</td>
<td>55.49</td>
<td>41.68</td>
<td>61.56</td>
<td>10.52</td>
</tr>
</tbody>
</table>
</div>
Note: * means evaluation results are taken directly from their technical reports. &dagger; means evaluation results on the full set of MBPP, instead of the hand-verified set.
#### MiniCPM-S-1B
- Code Generation: Average pass@1 score on HumanEval (0-shot) and MBPP (3-shot).
- Commonsense Reasoning: Average 0-shot accuracy on PIQA, SIQA, HellaSwag, WinoGrande, and COPA.
- Reading Comprehension: Average 0-shot accuracy on BoolQ, LAMBADA, and TyDi-QA.
- Other Benchmarks: We report the average performance on GSM8K (8-shot), MMLU (5-shot), BBH (3-shot), and AGI-Eval (0-shot).
| Setting | Average<br>Sparsity | Average<br>Performance | Code<br>Generation | Commonsense<br>Reasoning | Reading<br>Comprehension | GSM8K | MMLU | BBH | AGI-Eval |
| :-------------------: | :----------------: | :----------------------: | :----------------------: | :---: | :---: | :---: | :---------: | :-----: | :-----------------: |
| LLaMA2-7B | - | 37.96 | 16.37 | 69.59 | 61.87 | 12.96 | 44.45 | 32.96 | 27.53 |
| ReluLLaMA-7B | 66.98 | 37.62 | 15.85 | 69.64 | 70.54 | 5.84 | 38.64 | 35.07 | 27.73 |
| **ProSparse-7B**\* | 88.11 | 38.31 | 19.47 | 66.29 | 63.33 | 12.74 | 45.21 | 33.59 | 27.55 |
| **ProSparse-7B** | **89.32** | **38.46** | 19.42 | 66.27 | 63.50 | 12.13 | 45.48 | 34.99 | 27.46 |
| LLaMA2-13B | - | 44.06 | 20.19 | 72.58 | 71.55 | 22.21 | 54.69 | 37.89 | 29.33 |
| ReluLLaMA-13B | 71.56 | 42.74 | 20.19 | 70.44 | 73.29 | 18.50 | 50.58 | 37.97 | 28.22 |
| **ProSparse-13B**\* | 87.97 | **45.07** | 29.03 | 69.75 | 67.54 | 25.40 | 54.78 | 40.20 | 28.76 |
| **ProSparse-13B** | **88.80** | 44.90 | 28.42 | 69.76 | 66.91 | 26.31 | 54.35 | 39.90 | 28.67 |
| MiniCPM-1B | - | 44.44 | 36.85 | 63.67 | 60.90 | 35.48 | 50.44 | 35.03 | 28.71 |
| **MiniCPM-S-1B**\* | 86.25 | **44.72** | 41.38 | 64.55 | 60.69 | 34.72 | 49.36 | 34.04 | 28.27 |
| **MiniCPM-S-1B** | **87.89** | **44.72** | 42.04 | 64.37 | 60.73 | 34.57 | 49.51 | 34.08 | 27.77 |
Note:
1. [ReluLLaMA-7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B) and [ReluLLaMA-13B](https://huggingface.co/SparseLLM/ReluLLaMA-13B). "ProSparse-7B\*", "ProSparse-13B\*" and "MiniCPM-S-1B\*" denote the ProSparse versions without the activation threshold offset.
2. For PIQA, SIQA, HellaSwag, WinoGrande, COPA, BoolQ, LAMBADA, TyDi QA and AGI-Eval, we adopt ppl-based evaluation. For GSM8K, MMLU and BBH, we perform generation-based evaluation.
### Inference
#### HuggingFace, vLLM
Please refer to the [Inference](#huggingface-inference) section of MiniCPM 1.0.
#### PowerInfer
Currently, PowerInfer is tailored exclusively to the MiniCPM-S-1B model; support for other versions is not yet available. Stay tuned.
1. Ensure your cmake version is 3.17 or above. If you have already installed it, you can skip this step.
```bash
# Download the installation package
sudo wget https://cmake.org/files/v3.23/cmake-3.23.0.tar.gz
# Extract the installation package and enter the source directory
sudo tar -zxvf cmake-3.23.0.tar.gz
cd cmake-3.23.0
# Configure the installation environment
sudo ./configure
sudo make -j8
# Compile and install
sudo make install
# Check the version after installation
cmake --version
# If the version number is returned, the installation was successful
# cmake version 3.23.0
```
2. Install PowerInfer:
```bash
git clone https://github.com/SJTU-IPADS/PowerInfer
cd PowerInfer
pip install -r requirements.txt # install Python helpers' dependencies
```
3. Compile the CPU version of PowerInfer. If your machine only has a CPU, or if you want to perform inference using the CPU, run the following commands:
```bash
cmake -S . -B build
cmake --build build --config Release
```
4. Compile the GPU version of PowerInfer. If your machine has a GPU, you can run the following commands:
```bash
cmake -S . -B build -DLLAMA_CUBLAS=ON
cmake --build build --config Release
```
5. Retrieve the sparse model:
```bash
git clone https://huggingface.co/openbmb/MiniCPM-S-1B-sft-gguf/tree/main
#or
git clone https://modelscope.cn/models/OpenBMB/MiniCPM-S-1B-sft-gguf
```
6. Model Inference:
```bash
cd PowerInfer
# Below is the command template. output_token_count refers to the maximum output tokens, thread_num is the number of threads, and prompt is the input prompt text.
#./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt
# Below is an example
./build/bin/main -m /root/ld/ld_model_pretrain/1b-s-minicpm/MiniCPM-S-1B-sft.gguf -n 2048 -t 8 -p '<User>hello,tell me a story please.<AI>'
```
</details>
## MiniCPM 1.0
<details>
<summary>Click to view details about MiniCPM1.0</summary>
### Introduction
MiniCPM-2B is a dense language model with only 2.4B parameters excluding embeddings (2.7B in total).
- After SFT, MiniCPM performs on par with Mistral-7B on open-source general benchmarks, with stronger Chinese, mathematics and coding ability. Its overall performance exceeds Llama2-13B, MPT-30B, Falcon-40B, etc.
- After DPO, MiniCPM outperforms Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, Zephyr-7B-alpha, etc. on MTBench.
Note: To preserve the model's generality for academic research, **we have not subjected it to any identity-specific training.** Meanwhile, since we use the ShareGPT open-source corpus as part of the training data, the model may output identity-related information similar to that of the GPT series models.
### Evaluation Results
#### Evaluation Settings
* Since LLM evaluation is difficult to standardize and many benchmarks publish neither their prompts nor their test code, we can only try our best to make our specific evaluation setup suitable for all types of models.
* Overall, we use a unified prompt input for testing, and adjust the input according to the corresponding template for each model.
* **The evaluation scripts and prompts have been open-sourced in our Github repository, and we welcome more developers to continuously improve our evaluation methods.**
* For the text evaluation part, we use our open source large model capability evaluation framework [UltraEval](https://github.com/OpenBMB/UltraEval). The following is the open source model reproduction process:
* install UltraEval
```shell
git clone https://github.com/OpenBMB/UltraEval.git
cd UltraEval
pip install -e .
```
* Download the relevant data and unzip it for processing
```shell
wget -O RawData.zip "https://cloud.tsinghua.edu.cn/f/71b5232264ae4833a4d0/?dl=1"
unzip RawData.zip
python data_process.py
```
* Execute evaluation scripts (templates are provided and can be customized)
```shell
bash run_eval.sh
```
#### Deployment mode
* Because MiniCPM uses the Mup structure, whose computations differ slightly from those of existing models, we based the implementation of our model on vLLM 0.2.2.
* **For non-MiniCPM models, we directly used the latest vLLM version at the time, 0.2.7, for inference.**
#### Evaluation method
* For QA tasks (multiple-choice tasks), we test in two ways:
  * PPL: each option is appended to the question as a continuation, and the answer is selected according to the PPL of each option (a minimal sketch of this scoring is shown below);
  * Generation: the model generates the answer option directly.
* The two approaches can give very different results for the same model. The two MiniCPM variants score similarly under both, while models such as Mistral-7B-v0.1 perform better with PPL and worse with direct generation.
* In the final evaluation we take the higher of the two scores as the result, so as to ensure a fair comparison (* in the following tables indicates PPL).
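Below is a minimal sketch of the PPL-based option scoring. It assumes a generic Hugging Face causal LM; the model name, question and length normalization are illustrative assumptions, not the exact UltraEval implementation.
```python
# Minimal sketch of PPL-based multiple-choice scoring (illustrative; not the UltraEval code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "openbmb/MiniCPM-2B-sft-bf16"  # any causal LM would do; chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, device_map="cuda", trust_remote_code=True
).eval()

question = "Which city is the capital of China? Answer: "
options = ["Beijing", "Shanghai", "Guangzhou", "Shenzhen"]

@torch.no_grad()
def option_nll(question: str, option: str) -> float:
    # Treat the option as a continuation of the question and score only the option tokens.
    prefix_len = tokenizer(question, return_tensors="pt").input_ids.shape[1]
    ids = tokenizer(question + option, return_tensors="pt").input_ids.to(model.device)
    logits = model(ids).logits
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)   # position t predicts token t+1
    token_nll = -logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    option_len = ids.shape[1] - prefix_len
    return token_nll[0, -option_len:].mean().item()        # length-normalized NLL (log-PPL)

scores = {opt: option_nll(question, opt) for opt in options}
print(min(scores, key=scores.get))                         # the option with the lowest PPL wins
```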
#### Text evaluation
|Model|Average Score|Average Score in English|Average Score in Chinese|C-Eval|CMMLU|MMLU|HumanEval|MBPP|GSM8K|MATH|BBH|ARC-E|ARC-C|HellaSwag|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Llama2-7B|35.40|36.21|31.765|32.42|31.11|44.32|12.2|27.17|13.57|1.8|33.23|75.25|42.75|75.62*|
|Qwen-7B|49.46|47.19|59.655|58.96|60.35|57.65|17.07|42.15|41.24|5.34|37.75|83.42|64.76|75.32*|
|Deepseek-7B|39.96|39.15|43.635|42.82|44.45|47.82|20.12|41.45|15.85|1.53|33.38|74.58*|42.15*|75.45*|
|Mistral-7B|48.97|49.96|44.54|46.12|42.96|62.69|27.44|45.2|33.13|5.0|41.06|83.92|70.73|80.43*|
|Llama2-13B|41.48|42.44|37.19|37.32|37.06|54.71|17.07|32.55|21.15|2.25|37.92|78.87*|58.19|79.23*|
|MPT-30B|38.17|39.82|30.715|29.34|32.09|46.56|21.95|35.36|10.31|1.56|38.22|78.66*|46.08*|79.72*|
|Falcon-40B|43.62|44.21|40.93|40.29|41.57|53.53|24.39|36.53|22.44|1.92|36.24|81.94*|57.68|83.26*|
|MiniCPM-2B|52.33|52.6|51.1|51.13|51.07|53.46|50.00|47.31|53.83|10.24|36.87|85.44|68.00|68.25|
|Model|Average Score|Average Score in English|Average Score in Chinese|C-Eval|CMMLU|MMLU|HumanEval|MBPP|GSM8K|MATH|BBH|ARC-E|ARC-C|HellaSwag|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|TinyLlama-1.1B|25.36|25.55|24.525|25.02|24.03|24.3|6.71|19.91|2.27|0.74|28.78|60.77*|28.15*|58.33*|
|Qwen-1.8B|34.72|31.87|47.565|49.81|45.32|43.37|7.93|17.8|19.26|2.42|29.07|63.97*|43.69|59.28*|
|Gemini Nano-3B|-|-|-|-|-|-|-|27.2(report)|22.8(report)|-|42.4(report)|-|-|-|
|StableLM-Zephyr-3B|43.46|46.31|30.615|30.34|30.89|45.9|35.37|31.85|52.54|12.49|37.68|73.78|55.38|71.87*|
|Phi-2-2B|48.84|54.41|23.775|23.37|24.18|52.66|47.56|55.04|57.16|3.5|43.39|86.11|71.25|73.07*|
|MiniCPM-2B|52.33|52.6|51.1|51.13|51.07|53.46|50.00|47.31|53.83|10.24|36.87|85.44|68.00|68.25|
|Model|Average Score|Average Score in English|Average Score in Chinese|C-Eval|CMMLU|MMLU|HumanEval|MBPP|GSM8K|MATH|BBH|ARC-E|ARC-C|HellaSwag|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|ChatGLM2-6B|37.98|35.17|50.63|52.05|49.21|45.77|10.37|9.38|22.74|5.96|32.6|74.45|56.82|58.48*|
|Mistral-7B-Instruct-v0.1|44.36|45.89|37.51|38.06|36.96|53.56|29.27|39.34|28.73|3.48|39.52|81.61|63.99|73.47*|
|Mistral-7B-Instruct-v0.2|50.91|52.83|42.235|42.55|41.92|60.51|36.59|48.95|40.49|4.95|39.81|86.28|73.38|84.55*|
|Qwen-7B-Chat|44.93|42.05|57.9|58.57|57.23|56.03|15.85|40.52|42.23|8.3|37.34|64.44*|39.25*|74.52*|
|Yi-6B-Chat|50.46|45.89|70.995|70.88|71.11|62.95|14.02|28.34|36.54|3.88|37.43|84.89|70.39|74.6*|
|Baichuan2-7B-Chat|44.68|42.74|53.39|53.28|53.5|53|21.34|32.32|25.25|6.32|37.46|79.63|60.15|69.23*|
|Deepseek-7B-chat|49.34|49.56|48.335|46.95|49.72|51.67|40.85|48.48|48.52|4.26|35.7|76.85|63.05|76.68*|
|Llama2-7B-Chat|38.16|39.17|33.59|34.54|32.64|47.64|14.02|27.4|21.15|2.08|35.54|74.28|54.78|75.65*|
|MiniCPM-2B|52.33|52.6|51.1|51.13|51.07|53.46|50.00|47.31|53.83|10.24|36.87|85.44|68.00|68.25|
#### DPO evaluation
|Model|MT-bench|
|---|---|
|GPT-4-turbo|9.32|
|GPT-3.5-turbo|8.39|
|Mistral-8*7b-Instruct-v0.1|8.30|
|Claude-2.1|8.18|
|Zephyr-7B-beta|7.34|
|**MiniCPM-2B**|**7.25**|
|Vicuna-33B|7.12|
|Zephyr-7B-alpha|6.88|
|LLaMA-2-70B-chat|6.86|
|Mistral-7B-Instruct-v0.1|6.84|
|MPT-34B-instruct|6.39|
### Quick Start
#### Online
- [Colab](https://colab.research.google.com/drive/1tJcfPyWGWA5HezO7GKLeyeIso0HyOc0l?usp=sharing)
#### Web-demo based on Gradio
Run the following command to launch the Gradio-based demo.
```shell
# generation powered by vllm
python demo/minicpm/vllm_based_demo.py --model_path <vllmcpm_repo_path>
# generation powered by huggingface
python demo/minicpm/hf_based_demo.py --model_path <hf_repo_path>
```
#### HuggingFace Inference
##### MiniCPM-2B
Install `transformers>=4.36.0` and `accelerate`, then run the following Python code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM-2B-dpo-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)
responds, history = model.chat(tokenizer, "Which city is the capital of China?", temperature=0.8, top_p=0.8)
print(responds)
```
##### MiniCPM-2B (Llama Format)
To facilitate ease of use, we have converted the model weights of MiniCPM to adapt to the structure of the LLaMA model:
```python
import torch
from transformers import LlamaTokenizerFast, LlamaForCausalLM
model_path = "openbmb/MiniCPM-2B-dpo-bf16-llama-format"
tokenizer = LlamaTokenizerFast.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)
prompt="Now you act like a terminal situated within a beginner's C++ practice repository folder, please provide the output for the command: `ls -l`"
input_ids = tokenizer.encode("<User>{}<AI>".format(prompt), return_tensors='pt', add_special_tokens=True).cuda()
responses = model.generate(input_ids, temperature=0.3, top_p=0.8, repetition_penalty=1.02, max_length=1024)
responses = tokenizer.decode(responses[0], skip_special_tokens=True)
print(responses)
```
#### vLLM Inference
Install [vLLM](https://github.com/vllm-project/vllm).
```shell
pip install "vllm>=0.4.1"
```
See [here](#vllm) for the inference code.
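For convenience, a minimal vLLM sketch is also shown below; it assumes `vllm>=0.4.1` with built-in MiniCPM support, and the model name, prompt format and sampling settings are illustrative rather than the official example.
```python
# Minimal vLLM sketch (assumes vllm>=0.4.1; model name and sampling values are illustrative).
from vllm import LLM, SamplingParams

model_name = "openbmb/MiniCPM-2B-dpo-bf16"
prompt = "<User>Which city is the capital of China?<AI>"  # MiniCPM-2B chat format

llm = LLM(model=model_name, trust_remote_code=True, dtype="bfloat16")
sampling_params = SamplingParams(top_p=0.8, temperature=0.8, max_tokens=1024)
outputs = llm.generate(prompts=prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```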
#### SGLang Inference
Install [SGLang](https://github.com/sgl-project/sglang).
* First, launch a server:
```bash
python -m sglang.launch_server --model-path openbmb/MiniCPM-2B-dpo-fp16 --trust-remote-code --port 30000
```
* You can use it for inference as shown below:
```python
from sglang import function, gen, set_default_backend, RuntimeEndpoint
@function
def text_qa(s, question):
s += "<User>" + question + "<AI>"
s += gen("answer", max_tokens=1024, temperature=0.7, top_p=0.7)
set_default_backend(RuntimeEndpoint("http://localhost:30000"))
state = text_qa.run(
question="What is the capital of China?",
)
print(state["answer"])
```
#### llama.cpp, Ollama, fastllm, mlx_lm Inference
We have supported inference with [llama.cpp](https://github.com/ggerganov/llama.cpp/), [ollama](https://github.com/ollama/ollama), [fastllm](https://github.com/ztxz16/fastllm), [mlx_lm](https://github.com/ml-explore/mlx-examples). Thanks to [@runfuture](https://github.com/runfuture) for the adaptation of llama.cpp and ollama.
Please refer to [Edge Deployment Tutorial](https://modelbest.feishu.cn/wiki/VL5kw9DsEiRDmJkEyTUcydE0nie).
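In addition to the tutorial above, the following is a minimal sketch using the llama-cpp-python bindings; the local GGUF file path is hypothetical and this is not part of the official tutorial.
```python
# Minimal sketch: local GGUF inference via llama-cpp-python (model path is illustrative).
from llama_cpp import Llama

llm = Llama(model_path="./minicpm-2b-dpo.Q4_K_M.gguf", n_ctx=2048)
out = llm("<User>Which city is the capital of China?<AI>", max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```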
#### Quantization
Please refer to [Quantization Tutorial](https://modelbest.feishu.cn/wiki/EatbwdLuvitbbMk2X5wcX6h5n7c).
#### Fine-Tuning
* With parameter-efficient tuning, MiniCPM can be tuned on a single NVIDIA GeForce GTX 1080/2080: [code](https://github.com/OpenBMB/MiniCPM/tree/main/finetune).
* mlx finetune: [Guideline](https://modelbest.feishu.cn/wiki/AIU3wbREcirOm9kkvd7cxujFnMb#share-ASrDdvFAloHtycxfy85cLNhAnd3)
* [xtuner](https://github.com/InternLM/xtuner): [the best choice for parameter-efficient tuning of MiniCPM](https://modelbest.feishu.cn/wiki/AIU3wbREcirOm9kkvd7cxujFnMb#AMdXdzz8qoadZhxU4EucELWznzd)
* [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory.git): [one-click fine-tuning of MiniCPM](https://modelbest.feishu.cn/wiki/AIU3wbREcirOm9kkvd7cxujFnMb#BAWrdSjXuoFvX4xuIuzc8Amln5E)
</details>
## LICENSE
#### Model LICENSE
* This repository and MiniCPM models are released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
#### Statement
* As a language model, MiniCPM generates content by learning from a vast amount of text.
* However, it does not possess the ability to comprehend or express personal opinions or value judgments.
* Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.
* Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
## Institutions
This project is developed by the following institutions:
- <img src="assets/modelbest.png" width="28px"> [Modelbest Inc.](https://modelbest.cn/)
- <img src="assets/thunlp.png" width="28px"> [THUNLP](https://nlp.csai.tsinghua.edu.cn/)
- <img src="assets/RUC.png" width="28px"> [Gaoling School of Artificial Intelligence of RUC](https://linyankai.github.io/)
## Citation
* Please cite our papers [MiniCPM1](https://arxiv.org/abs/2404.06395) and [MiniCPM4](https://github.com/OpenBMB/MiniCPM/blob/main/report/MiniCPM_4_Technical_Report.pdf) if you find our work valuable.
```
@article{minicpm4,
title={MiniCPM4: Ultra-Efficient LLMs on End Devices},
author={MiniCPM Team},
year={2025}
}
@inproceedings{huminicpm,
title={MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies},
author={Hu, Shengding and Tu, Yuge and Han, Xu and Cui, Ganqu and He, Chaoqun and Zhao, Weilin and Long, Xiang and Zheng, Zhi and Fang, Yewei and Huang, Yuxiang and others},
booktitle={First Conference on Language Modeling},
year={2024}
}
```
# MiniCPM4
速度狂飙,快至220倍!MiniCPM4.0-8B是首个原生稀疏模型,5%的极高稀疏度加持系统级创新技术的大爆发,宣告了端侧长文本时代到来!
## 论文
`MiniCPM4: Ultra-Efficient LLMs on End Devices`
- https://arxiv.org/pdf/2506.07900
## 模型结构
MiniCPM4核心架构基于Transformer Decoder-only,引入InfLLM 2.0混合稀疏注意力结构,采用「高效双频换挡」机制,能够根据任务特征自动切换注意力模式:在处理高难度的长文本、深度思考任务时,启用稀疏注意力以降低计算复杂度,在短文本场景下切换至稠密注意力以确保精度,实现了长、短文本切换的高效响应。
<div align=center>
<img src="./doc/structure.png"/>
</div>
## 算法原理
MiniCPM 4.0模型采用的InfLLMv2稀疏注意力架构改变了传统Transformer模型的相关性计算方式:对分块分区域高效「抽查」,即对文本进行分块分区域处理后,通过智能化选择机制,只需对最有相关性的重点区域进行注意力计算“抽查”,摆脱了逐字重复计算的低效,注意力层仅需1/10的计算量即可完成长文本计算。
<div align=center>
<img src="./doc/Sparse_Attention.png"/>
</div>
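下面给出分块 top-k 稀疏注意力思路的极简 PyTorch 示意(块大小、top-k 数量与「语义核」取法均为示意性假设,并非 InfLLMv2 的官方实现):
```python
# 极简示意:按块估计相关性,仅对 top-k 相关块内的位置做注意力(概念演示,非官方代码)
import torch
import torch.nn.functional as F

def block_sparse_attention(q, k, v, block_size=64, topk=4):
    # q: [1, d] 当前 query;k/v: [T, d] 历史 key/value
    T, d = k.shape
    num_blocks = (T + block_size - 1) // block_size
    # 1) 用每个块内 key 的均值近似该块的「语义核」,估计 query 与各块的相关性
    pad = num_blocks * block_size - T
    k_pad = F.pad(k, (0, 0, 0, pad))
    block_repr = k_pad.view(num_blocks, block_size, d).mean(dim=1)   # [B, d]
    block_scores = (q @ block_repr.T).squeeze(0)                     # [B]
    # 2) 只保留相关性最高的 top-k 个块
    keep = torch.topk(block_scores, k=min(topk, num_blocks)).indices
    cols = torch.cat([
        torch.arange(int(i) * block_size, min(int(i + 1) * block_size, T))
        for i in keep
    ])
    # 3) 仅在被选中的位置上做标准注意力
    attn = torch.softmax((q @ k[cols].T) / d ** 0.5, dim=-1)         # [1, |cols|]
    return attn @ v[cols]                                            # [1, d]

q = torch.randn(1, 128)
k = torch.randn(1024, 128)
v = torch.randn(1024, 128)
print(block_sparse_attention(q, k, v).shape)                         # torch.Size([1, 128])
```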
## 环境配置
```
mv MiniCPM4_pytorch MiniCPM4
```
### 硬件需求
DCU型号:K100AI,节点数量:1 台,卡数:4 张。
### Docker(方法一)
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.4.1-ubuntu22.04-dtk25.04.1-py3.10
# 将 <your IMAGE ID> 替换为上面拉取的 docker 镜像 ID,本镜像为:e50d644287fd
docker run -it --shm-size=64G -v $PWD/MiniCPM4:/home/MiniCPM4 -v /opt/hyhal:/opt/hyhal:ro --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video --name minicpm4 <your IMAGE ID> bash
cd /home/MiniCPM4
pip install -r requirements.txt # requirements.txt
```
### Dockerfile(方法二)
```
cd /home/MiniCPM4/docker
docker build --no-cache -t minicpm4:latest .
docker run --shm-size=64G --name minicpm4 -v /opt/hyhal:/opt/hyhal:ro --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video -v $PWD/../../MiniCPM4:/home/MiniCPM4 -it minicpm4 bash
# 若遇到Dockerfile启动的方式安装环境需要长时间等待,可注释掉里面的pip安装,启动容器后再安装python库:pip install -r requirements.txt。
```
### Anaconda(方法三)
1、关于本项目DCU显卡所需的特殊深度学习库可从光合开发者社区下载安装:
- https://developer.sourcefind.cn/tool/
```
DTK驱动:25.04.1
python:python3.10
torch:2.4.1
torchvision:0.19.1
triton:3.0.0
flash-attn:2.6.1
deepspeed:0.14.2
apex:1.4.0
transformers:4.53.2
```
不同深度学习库可支持的DCU型号可在此处查询:[DAS资源下载](https://das.sourcefind.cn:55011/portal/#/home)
`Tips:以上dtk驱动、python、torch等DCU相关工具版本需要严格一一对应。`
2、其它非特殊库参照requirements.txt安装
```
cd /home/MiniCPM4
pip install -r requirements.txt # requirements.txt
```
报错解决:
```
1、TypeError: Phi3LongRoPEScaledRotaryEmbedding._compute_cos_sin_cache() missing 3 required positional arguments: 'max_position_embeddings', 'rescale_factors', and 'mscale'
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/minicpm.py", line 245
下面这一行代码是不必要的,因为RotaryEmbedding构造函数已经处理了适当的缓存初始化,注释掉:
# self.rotary_emb.cos_sin_cache = self.rotary_emb._compute_cos_sin_cache(
# )
2、ValueError: You must use the new past_key_values format, such as the Cache class, instead of the old tuple format.
openbmb/MiniCPM4-8B/modeling_minicpm.py", line 2052
将下面代码:
if use_legacy_cache:
raise ValueError(
'You must use the new past_key_values format, such as the Cache class, instead of the old tuple format.'
)
past_key_values = DynamicCache.from_legacy_cache(past_key_values)
替换成:
past_key_values = DynamicCache.from_legacy_cache(past_key_values)
```
## 数据集
`项目中官方提供了用于试验的示例数据集`
```
/home/MiniCPM4/finetune/data/
├── AdvertiseGenChatML
| ├── train.json
| └── dev.json
└── ocnli_public_chatml
├── train.json
└── dev.json
```
更多资料可参考源项目的[`README_origin`](./README_origin.md)
## 训练
### 单机多卡
预训练权重目录结构:
```
/home/MiniCPM4/
└── openbmb/MiniCPM4-8B
```
```
cd /home/MiniCPM4/finetune
bash lora_finetune_minicpm4.sh # 此处以MiniCPM4-8B为例,其它参数量的模型以此类推。
```
## 推理
### 单机单卡
```
cd /home/MiniCPM4
# 方法一:transformers推理
python infer_transformers.py
# 方法二:vllm推理
python infer_vllm.py # 官方版vLLM目前不支持InfLLM-v2。
# 目前已开放dense推理,投机采样、量化、量化加投机敬请期待后续vllm的适配优化。
```
更多资料可参考源项目的[`README_origin`](./README_origin.md)
## result
此处以vllm版的推理结果示例:
`输入: `
```
推荐5个北京的景点。
```
`输出:`
```
北京,这座历史悠久、文化底蕴深厚的城市,拥有众多令人向往的景点。以下是五个不容错过的北京景点推荐:
1. **故宫博物院**:作为明清两代皇家宫殿,故宫不仅是世界上最大的木质结构建筑群,也是中国乃至世界上最大的古代宫廷博物馆。这里收藏着大量的珍贵文物,如书画、瓷器、玉器等,能够让人近距离感受到中国传统文化的魅力。
2. **长城**:作为中华民族的象征,长城是中国古代军事防御工程的杰出代表。其中,八达岭长城是最为著名的一段,其地势险峻,长城蜿蜒曲折,是游客体验长城雄伟壮观的最佳地点。
3. **天安门广场**:作为世界上最大的城市中心广场,天安门广场不仅是国家的重要政治活动场所,也是游客们欣赏宏伟建筑的好去处。广场上的天安门城楼、人民英雄纪念碑、毛主席纪念堂等,都是历史的见证。
4. **颐和园**:颐和园是中国保存最完整的皇家园林,以其精美的园林艺术和丰富的文化内涵而著称。园内的昆明湖、万寿山、长廊等景点,让人仿佛置身于一幅生动的中国山水画中。
5. **圆明园**:虽然历经劫难,但圆明园的残垣断壁依然透露出清朝皇家园林的辉煌。园内的荷花池、西洋楼遗址等,让人在感叹历史的同时,也能感受到中国园林艺术的精妙。
以上五个景点,不仅能够让人领略到北京深厚的历史文化底蕴,也是来京旅游者必访之地。
```
### 精度
DCU与GPU精度一致,推理框架:vllm,训练中所用数据为少量demo数据,仅供模型训练方法测试,故无法作为训练精度参考。
## 应用场景
### 算法类别
`对话问答`
### 热点应用行业
`制造,广媒,金融,能源,医疗,家居,教育`
## 预训练权重
魔搭社区下载地址为:[OpenBMB/MiniCPM4-8B](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B)
## 源码仓库及问题反馈
- http://developer.sourcefind.cn/codes/modelzoo/MiniCPM4_pytorch.git
## 参考资料
- https://github.com/OpenBMB/MiniCPM.git
<div align="center">
<img src="./assets/minicpm_logo.png" width="500em" ></img>
</div>
<h4 align="center">
<p>
<b>中文</b> | <a href="https://github.com/OpenBMB/MiniCPM/blob/main/README-en.md">English</a>
<p>
</h4>
<p align="center">
<a href="https://arxiv.org/pdf/2506.07900" target="_blank">MiniCPM 论文</a> |
<a href="https://openbmb.vercel.app/?category=Chinese+Blog" target="_blank">MiniCPM 技术博客</a> |
<a href="https://modelbest.feishu.cn/wiki/D2tFw8Pcsi5CIzkaHNacLK64npg" target="_blank">MiniCPM 知识库</a> |
<a href="https://github.com/OpenBMB/MiniCPM-V/" target="_blank">MiniCPM-V 仓库</a> |
加入我们的 <a href="https://discord.gg/3cGQn9b3YM" target="_blank">discord</a> 和 <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">微信群</a> |
<a href="https://mp.weixin.qq.com/s/KIhH2nCURBXuFXAtYRpuXg?poc_token=HBIsUWijxino8oJ5s6HcjcfXFRi0Xj2LJlxPYD9c">加入我们</a>
</p>
https://github.com/user-attachments/assets/ab36fd7a-485b-4707-b72f-b80b5c43d024
## 更新日志🔥
- [2025.06.06] **发布 [MiniCPM4](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b)!该模型在保持同等规模最优性能的同时,实现了极致的效率提升!在典型端侧芯片上能够实现 5 倍以上生成加速!**
- [2024.09.28] [LLMxMapReduce](https://github.com/thunlp/LLMxMapReduce) 开源,支持 MiniCPM3-4B,理论上支持无限长文本输入!
- [2024.09.18] [SGLang](https://github.com/sgl-project/sglang) 已经支持 MiniCPM3-4B (推荐使用)!由于 SGLang v0.3 对 MiniCPM3 中使用的 MLA 结构进行了推理优化,吞吐量相比于 vLLM 提高 70%![[用法](#sglang推荐)]
- [2024.09.16] [llama.cpp](https://github.com/ggerganov/llama.cpp/releases/tag/b3765) 已经官方支持 MiniCPM3-4B![[GGUF模型](https://huggingface.co/openbmb/MiniCPM3-4B-GGUF)|[用法](#llamacpp)]
- [2024.09.05] 发布 [MiniCPM3-4B](https://huggingface.co/openbmb/MiniCPM3-4B)!该模型的表现超越 Phi-3.5-mini-instruct 和 GPT-3.5-Turbo-0125,并且能够比肩 Llama3.1-8B-Instruct、Qwen2-7B-Instruct、GLM-4-9B-Chat 等多个 7B-9B 参数量的模型。
- [2024.07.09] MiniCPM-2B 已经支持使用 [SGLang](#sglang-推理) 推理!
- [2024.07.05] 发布 [MiniCPM-S-1B](https://huggingface.co/openbmb/MiniCPM-S-1B-sft)!该模型在保持下游任务性能无损的前提下,FFN 层实现了 87.89% 的平均稀疏度,将 FFN FLOPs 降低了 84%。
- [2024.04.11] 发布 [MiniCPM-2B-128k](https://huggingface.co/openbmb/MiniCPM-2B-128k)、[MiniCPM-MoE-8x2B](https://huggingface.co/openbmb/MiniCPM-MoE-8x2B) 和 [MiniCPM-1B](https://huggingface.co/openbmb/MiniCPM-1B-sft-bf16)!点击[这里](https://openbmb.vercel.app/?category=Chinese+Blog)查看技术博客。
- [2024.03.16] MiniCPM-2B 的 30 余个中间检查点开放了![HuggingFace链接](https://huggingface.co/openbmb/MiniCPM-2B-history)
- [2024.02.01] 发布 [MiniCPM-2B](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)!该模型在公开评测集上与 Mistral-7B 表现相近(中文、数学、代码能力更优),整体性能超越 Llama2-13B、MPT-30B、Falcon-40B 等模型。
## 目录
- [更新日志🔥](#更新日志)
- [目录](#目录)
- [模型下载](#模型下载)
- [MiniCPM 4.0](#minicpm-40)
- [评测结果](#评测结果)
- [效率评测](#效率评测)
- [综合评测](#综合评测)
- [长文本评测](#长文本评测)
- [BitCPM4: 模型量化](#bitcpm4-模型量化)
- [BitCPM4评测](#bitcpm4评测)
- [BitCPM4模型推理](#bitcpm4模型推理)
- [模型应用](#模型应用)
- [MiniCPM4-Survey: 综述生成](#minicpm4-survey-综述生成)
- [MiniCPM4-MCP: MCP增强的工具调用](#minicpm4-mcp-mcp增强的工具调用)
- [MiniCPM Intel AIPC Client: 端侧大模型客户端](#minicpm-intel-aipc-client-端侧大模型客户端)
- [模型推理](#模型推理)
- [CPM.cu](#cpmcu)
- [HuggingFace](#huggingface)
- [vLLM](#vllm)
- [SGLang](#sglang)
- [模型微调](#模型微调)
- [LLaMA-Factory](#llama-factory)
- [MiniCPM 3.0](#minicpm-30)
- [MiniCPM 2.0](#minicpm-20)
- [MiniCPM 1.0](#minicpm-10)
- [开源协议](#开源协议)
- [开发机构](#开发机构)
- [工作引用](#工作引用)
## 模型下载
| HuggingFace | ModelScope |
|-------------|------------|
| [MiniCPM4-8B](https://huggingface.co/openbmb/MiniCPM4-8B) | [MiniCPM4-8B](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B) |
| [MiniCPM4-0.5B](https://huggingface.co/openbmb/MiniCPM4-0.5B) | [MiniCPM4-0.5B](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-0.5B) |
| [BitCPM4-1B](https://huggingface.co/openbmb/BitCPM4-1B) | [BitCPM4-1B](https://www.modelscope.cn/models/OpenBMB/BitCPM4-1B) |
| [BitCPM4-0.5B](https://huggingface.co/openbmb/BitCPM4-0.5B) | [BitCPM4-0.5B](https://www.modelscope.cn/models/OpenBMB/BitCPM4-0.5B) |
| [MiniCPM4-8B-Eagle-FRSpec](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-FRSpec) | [MiniCPM4-8B-Eagle-FRSpec](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B-Eagle-FRSpec) |
| [MiniCPM4-8B-Eagle-FRSpec-QAT](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-FRSpec-QAT) | [MiniCPM4-8B-Eagle-FRSpec-QAT](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B-Eagle-FRSpec-QAT) |
| [MiniCPM4-8B-Eagle-vLLM](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-vLLM) | [MiniCPM4-8B-Eagle-vLLM](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B-Eagle-vLLM) |
| [MiniCPM4-8B-marlin-Eagle-vLLM](https://huggingface.co/openbmb/MiniCPM4-8B-marlin-Eagle-vLLM) | [MiniCPM4-8B-marlin-Eagle-vLLM](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-8B-marlin-Eagle-vLLM) |
| [MiniCPM4-Survey](https://huggingface.co/openbmb/MiniCPM4-Survey) | [MiniCPM4-Survey](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-Survey) |
| [MiniCPM4-MCP](https://huggingface.co/openbmb/MiniCPM4-MCP) | [MiniCPM4-MCP](https://www.modelscope.cn/models/OpenBMB/MiniCPM4-MCP) |
| [MiniCPM4-0.5B-QAT-Int4-unquantized](https://huggingface.co/openbmb/MiniCPM4-0.5B-QAT-Int4-unquantized) | [MiniCPM4-0.5B-QAT-Int4-unquantized](https://modelscope.cn/models/OpenBMB/MiniCPM4-0.5B-QAT-Int4-unquantized) |
| [MiniCPM4-0.5B-QAT-Int4-GPTQ-format](https://huggingface.co/openbmb/MiniCPM4-0.5B-QAT-Int4-GPTQ-format) | [MiniCPM4-0.5B-QAT-Int4-GPTQ-format](https://modelscope.cn/models/OpenBMB/MiniCPM4-0.5B-QAT-Int4-GPTQ-format) |
|[MiniCPM3-4B](https://huggingface.co/openbmb/MiniCPM3-4B)|[MiniCPM3-4B](https://www.modelscope.cn/models/OpenBMB/MiniCPM3-4B)|
|[MiniCPM-2B-sft](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)|[MiniCPM-2B-sft](https://modelscope.cn/models/OpenBMB/miniCPM-bf16)|
|[MiniCPM-2B-dpo](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16)|[MiniCPM-2B-dpo](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16/summary)|
|[MiniCPM-2B-128k](https://huggingface.co/openbmb/MiniCPM-2B-128k) |[MiniCPM-2B-128k](https://modelscope.cn/models/openbmb/MiniCPM-2B-128k/summary)|
|[MiniCPM-MoE-8x2B](https://huggingface.co/openbmb/MiniCPM-MoE-8x2B) |[MiniCPM-MoE-8x2B](https://modelscope.cn/models/OpenBMB/MiniCPM-MoE-8x2B)|
|[MiniCPM-1B](https://huggingface.co/openbmb/MiniCPM-1B-sft-bf16) | [MiniCPM-1B](https://modelscope.cn/models/OpenBMB/MiniCPM-1B-sft-bf16) |
|[MiniCPM-S-1B](https://huggingface.co/openbmb/MiniCPM-S-1B-sft)|[MiniCPM-S-1B](https://modelscope.cn/models/OpenBMB/MiniCPM-S-1B-sft)|
注: 更多模型版本见[这里](https://huggingface.co/collections/openbmb/minicpm-2b-65d48bf958302b9fd25b698f)
## MiniCPM 4.0
MiniCPM 4 是一个极致高效的端侧大模型,从模型架构、学习算法、训练数据与推理系统四个层面进行了高效优化,实现了极致的效率提升。
- 🏗️ 高效模型架构:
- InfLLM v2 -- 可训练的稀疏注意力机制:采用可训练的稀疏注意力机制架构,在 128K 长文本处理中,每个词元仅需与不足 5% 的词元进行相关性计算,显著降低长文本的计算开销
- 🧠 高效学习算法:
- 模型风洞 2.0 -- 高效 Predictable Scaling:引入下游任务的 Scaling 预测方法,实现更精准的模型训练配置搜索
- BitCPM -- 极致的三值量化:将模型参数位宽压缩至三值(约 1.58 bit),实现模型位宽 90% 的极致瘦身
- 高效训练工程优化:采用 FP8 低精度计算技术,结合多词元预测(Multi-token Prediction)训练策略
- 📚 高知识密度训练数据:
- UltraClean -- 高质量预训练数据的清洗与合成:构建基于高效验证的迭代式数据清洗策略,开源高质量中英文预训练数据集 [UltraFineweb](https://huggingface.co/datasets/openbmb/Ultra-FineWeb)
- UltraChat v2 -- 高质量有监督微调数据合成:构建大规模高质量有监督微调数据集,涵盖知识密集型数据、推理密集型数据、指令遵循数据、长文本理解数据、工具调用数据等多个维度
- ⚡ 高效推理系统:
- CPM.cu -- 轻量级的高效CUDA推理框架:融合了稀疏注意力机制、模型量化与投机采样,充分体现MiniCPM4的效率优势
- ArkInfer -- 跨平台部署系统:支持多后端环境的一键部署,提供灵活的跨平台适配能力
### 评测结果
#### 效率评测
在 Jetson AGX Orin 和 RTX 4090 两款典型端侧芯片上,MiniCPM4 在长文本处理任务中展现出大幅领先同尺寸模型的处理速度。随着文本长度的增加,MiniCPM4 的性能优势愈发显著。在 Jetson AGX Orin 平台上,相较于 Qwen3-8B,MiniCPM4 实现了约 7 倍的生成速度提升。
![benchmark](./assets/minicpm4/efficiency.png)
#### 综合评测
MiniCPM4 推出端侧 8B、0.5B 两种参数规模版本,均在同级别模型中实现了最佳性能表现。
![benchmark](./assets/minicpm4/benchmark.png)
#### 长文本评测
MiniCPM4 基于 32K 长文本进行预训练,并通过 YaRN 技术实现长度扩展。在 128K 长文本的大海捞针任务中,MiniCPM4 展现出卓越的性能表现。
![long-niah](./assets/minicpm4/128k-niah.png)
### BitCPM4: 模型量化
BitCPM4 是基于 MiniCPM 系列模型进行量化感知训练(QAT)后得到的三值量化模型,在训练效率和模型参数效率上实现了有效提升。
- 训练方法改进
- 在小规模模型上进行风洞实验,搜索训练所需的训练超参。
- 通过使用一阶段高精训练+二阶段 QAT 的方法,充分利用已经完成或部分完成训练的高精度模型,极大地压缩了 QAT 阶段所需要的算力。
- 高效参数效率
- 模型以 1.58 bit 的位宽即可达到与同参数量级全精度模型相当的性能,参数效率高(下方附一个三值量化的极简示意)。
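下面用一个极简的 PyTorch 片段示意三值(1.58 bit)权重伪量化的基本思路(采用按绝对值均值缩放的写法,仅为概念演示,并非 BitCPM 的官方 QAT 实现):
```python
# 极简示意:把权重伪量化为 {-1, 0, +1} 三值(absmean 缩放,非官方实现)
import torch

def ternary_quantize(w: torch.Tensor, eps: float = 1e-8):
    scale = w.abs().mean()                                    # 以权重绝对值均值作为缩放因子
    q = torch.clamp(torch.round(w / (scale + eps)), -1, 1)    # 量化到 {-1, 0, +1}
    return q, scale                                           # 推理时用 q * scale 近似原权重

w = torch.randn(4, 4)
q, scale = ternary_quantize(w)
print(q)                                                      # 三值矩阵
print((q * scale - w).abs().mean())                           # 伪量化误差
```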
#### BitCPM4 评测
BitCPM4 在测试中的表现可以对标同级别的业界主流全精度模型。
![bitcpm-benchmark](./assets/minicpm4/bitcpm4-benchmark.png)
#### BitCPM4 模型推理
BitCPM4 开源的模型参数为伪量化形式,可以直接使用 Huggingface 框架进行推理。
### 模型应用
#### MiniCPM4-Survey: 综述生成
MiniCPM4-Survey 是由 [THUNLP](https://nlp.csai.tsinghua.edu.cn)、中国人民大学和 [ModelBest](https://modelbest.cn/en) 联合开发的开源大语言模型智能体。它基于 MiniCPM4-8B 基座模型,接受用户查询作为输入,自主生成可信的长篇综述论文。
主要特性包括:
- 计划-检索-写作生成框架 — 我们提出了一个多智能体生成框架,包含三个核心阶段:计划(定义综述的整体结构)、检索(生成合适的检索关键词)和写作(利用检索到的信息,生成连贯的段落)。
- 高质量数据集构建——我们收集并处理大量人类专家写作的综述论文,构建高质量训练集。同时,我们收集大量研究论文,构建检索数据库。
- 多方面奖励设计 — 我们精心设计了包含结构、内容和引用的奖励,用于评估综述的质量,在强化学习训练阶段作奖励函数。
- 多步强化学习训练策略 — 我们提出了一个上下文管理器,以确保在促进有效推理的同时保留必要的信息,并构建了并行环境,维持强化学习训练高效。
##### 使用与演示案例
详见[此处](./demo/minicpm4/SurveyGeneration/README.md)
##### 评估
| Method | Relevance | Coverage | Depth | Novelty | Avg. | Fact Score |
|---------------------------------------------|-----------|----------|-------|---------|-------|------------|
| Naive RAG (driven by G2FT) | 3.25 | 2.95 | 3.35 | 2.60 | 3.04 | 43.68 |
| AutoSurvey (driven by G2FT) | 3.10 | 3.25 | 3.15 | **3.15**| 3.16 | 46.56 |
| Webthinker (driven by WTR1-7B) | 3.30 | 3.00 | 2.75 | 2.50 | 2.89 | -- |
| Webthinker (driven by QwQ-32B) | 3.40 | 3.30 | 3.30 | 2.50 | 3.13 | -- |
| OpenAI Deep Research (driven by GPT-4o) | 3.50 |**3.95** | 3.55 | 3.00 | **3.50** | -- |
| MiniCPM4-Survey | 3.45 | 3.70 | **3.85** | 3.00 | **3.50** | **68.73** |
| &nbsp;&nbsp;&nbsp;*w/o* RL | **3.55** | 3.35 | 3.30 | 2.25 | 3.11 | 50.24 |
*GPT-4o 对综述生成系统的性能比较。“G2FT” 代表 Gemini-2.0-Flash-Thinking,“WTR1-7B” 代表 Webthinker-R1-7B。由于 Webthinker 不包括引用功能,OpenAI Deep Research 在导出结果时不提供引用,因此省略了对它们的 FactScore 评估。我们的技术报告中包含评测的详细信息。*
#### MiniCPM4-MCP: MCP增强的工具调用
MiniCPM4-MCP 是由[清华大学自然语言处理实验室(THUNLP)](https://nlp.csai.tsinghua.edu.cn)、中国人民大学与 [ModelBest](https://modelbest.cn/en) 联合开发的开源本地大语言模型代理,它基于 MiniCPM-4-8B,拥有 80 亿参数。它能够通过 MCP 协议与各种工具和数据资源交互,解决多种真实世界任务。截至目前,MiniCPM4-MCP 已支持:
- 涵盖 16 个 MCP 服务器(servers)中工具的使用:这些服务器所包含的工具横跨了办公类、生活类、通讯类、资讯类、工作管理类等.
- 单工具使用的能力:可使用符合 MCP 协议的工具进行单一工具的一步或多步调用。
- 跨工具组合使用的能力:可组合使用符合 MCP 协议的不同工具。
##### 使用与演示案例
详见[此处](./demo/minicpm4/MCP/README.md)
##### 评估
| MCP 服务器 | | gpt-4o | | | qwen3 | | | minicpm4 | |
| -------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| | 函数名正确率 | 参数名正确率 | 数值正确率 | 函数名正确率 | 参数名正确率 | 数值正确率 | 函数名正确率 | 参数名正确率 | 数值正确率 |
| Airbnb | 89.3 | 67.9 | 53.6 | 92.8 | 60.7 | 50.0 | 96.4 | 67.9 | 50.0 |
| Amap-Maps | 79.8 | 77.5 | 50.0 | 74.4 | 72.0 | 41.0 | 89.3 | 85.7 | 39.9 |
| Arxiv-MCP-Server | 85.7 | 85.7 | 85.7 | 81.8 | 54.5 | 50.0 | 57.1 | 57.1 | 52.4 |
| Calculator | 100.0 | 100.0 | 20.0 | 80.0 | 80.0 | 13.3 | 100.0 | 100.0 | 6.67 |
| Computor-Control-MCP | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 86.7 |
| Desktop-Commander | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| Filesystem | 63.5 | 63.5 | 31.3 | 69.7 | 69.7 | 26.0 | 83.3 | 83.3 | 42.7 |
|Github | 92.0 | 80.0 | 58.0 | 80.5 | 50.0 | 27.7 | 62.8 | 25.7 | 17.1 |
| Gaode | 71.1 | 55.6 | 17.8 | 68.8 | 46.6 | 24.4 | 68.9 | 46.7 | 15.6 |
| MCP-Code-Executor | 85.0 | 80.0 | 70.0 | 80.0 | 80.0 | 70.0 | 90.0 | 90.0 | 65.0 |
| MCP-Docx | 95.8 | 86.7 | 67.1 | 94.9 | 81.6 | 60.1 | 95.1 | 86.6 | 76.1 |
| PPT | 72.6 | 49.8 | 40.9 | 85.9 | 50.7 | 37.5 | 91.2 | 72.1 | 56.7 |
| PPTx | 64.2 | 53.7 | 13.4 | 91.0 | 68.6 | 20.9 | 91.0 | 58.2 | 26.9 |
| Simple-Time-Server | 90.0 | 70.0 | 70.0 | 90.0 | 90.0 | 90.0 | 90.0 | 60.0 | 60.0 |
| Slack | 100.0 | 90.0 | 70.0 | 100.0 | 100.0 | 65.0 | 100.0 | 100.0 | 100.0 |
| Whisper | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 90.0 | 30.0 |
| **平均值** | **80.2** | **70.2** | **49.1** | **83.5** | **67.7** | **43.8** | **88.3** | **76.1** | **51.2** |
#### MiniCPM Intel AIPC Client: 端侧大模型客户端
MiniCPM Intel AIPC Client 是面壁智能和 Intel 合作推出的端侧大模型客户端,专为搭载 Intel Core Ultra 系列处理器的设备设计,旨在为开发者、研究人员与 AI 爱好者带来低延迟、高效率、高隐私的本地大模型使用体验。其核心特性如下:
- 深度适配 Intel 硬件:全面支持 Intel Core Ultra 系列处理器,实现与硬件的深度融合,充分释放硬件性能,让用户无需依赖云端,在本地设备上就能流畅运行大模型。
- 基于 OpenVINO 的极致优化:基于 OpenVINO 推理框架进行深度优化,大幅提升推理效率,推理速度最高可达每秒 80 tokens,确保模型响应迅速,无论是快速问答还是复杂任务处理,都能高效完成。
- 隐私安全保障:采用本地部署方式,所有数据处理均在本地设备完成,避免数据上传至云端带来的隐私风险,让用户使用更安心,尤其适合对数据隐私要求较高的场景。
- 面向多元用户群体:无论是追求前沿技术的开发者,专注学术研究的科研人员,还是热衷于探索 AI 应用的爱好者,都能通过 MiniCPM Intel AIPC Client,轻松体验本地大模型的强大功能,开启个性化的 AI 探索之旅 。
配置要求:
- 建议使用英特尔酷睿 ultra7 及以上移动端处理器
- 建议运行内存 32GB 及以上
应用下载:
[下载地址](https://github.com/OpenBMB/MiniCPM/releases/tag/2.4.2)
### 模型推理
#### CPM.cu
我们**推荐**使用 [CPM.cu](https://github.com/OpenBMB/CPM.cu) 对 MiniCPM4 模型进行推理。CPM.cu 是面壁开发的一个集合了高效稀疏、投机采样、量化等技术的 CUDA 推理框架,能够完全发挥 MiniCPM4 的效率优势。
你可以通过以下脚本安装 CPM.cu 并进行推理:
```bash
git clone https://github.com/OpenBMB/CPM.cu.git --recursive
cd CPM.cu
python3 setup.py install
```
你可以通过以下命令进行推理并查看模型的运行速度。
```bash
python3 tests/long_prompt_gen.py # 生成 prompt.txt
python3 tests/test_generate.py --prompt-file prompt.txt
```
更多关于 CPM.cu 的细节,请参考 [CPM.cu 仓库](https://github.com/OpenBMB/CPM.cu)
#### HuggingFace
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM4-8B'
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map=device, trust_remote_code=True)
# User can directly use the chat interface
# responds, history = model.chat(tokenizer, "Write an article about Artificial Intelligence.", temperature=0.7, top_p=0.7)
# print(responds)
# User can also use the generate interface
messages = [
{"role": "user", "content": "Write an article about Artificial Intelligence."},
]
prompt_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([prompt_text], return_tensors="pt").to(device)
model_outputs = model.generate(
**model_inputs,
max_new_tokens=1024,
top_p=0.7,
temperature=0.7
)
output_token_ids = [
model_outputs[i][len(model_inputs[i]):] for i in range(len(model_inputs['input_ids']))
]
responses = tokenizer.batch_decode(output_token_ids, skip_special_tokens=True)[0]
print(responses)
```
本模型支持稀疏注意力机制 InfLLM v2,可高效处理长序列推理。如需启用该功能,请先安装依赖库 [infllmv2_cuda_impl](https://github.com/OpenBMB/infllmv2_cuda_impl)
运行以下命令即可安装:
```bash
git clone -b feature_infer https://github.com/OpenBMB/infllmv2_cuda_impl.git
cd infllmv2_cuda_impl
git submodule update --init --recursive
pip install -e . # or python setup.py install
```
启用 InfLLM v2 需在 `config.json` 配置文件中添加 `sparse_config` 字段:
```json
{
...,
"sparse_config": {
"kernel_size": 32,
"kernel_stride": 16,
"init_blocks": 1,
"block_size": 64,
"window_size": 2048,
"topk": 64,
"use_nope": false,
"dense_len": 8192
}
}
```
这些参数控制 InfLLM v2 的行为:
* `kernel_size`(默认值:32):语义核的大小。
* `kernel_stride`(默认值:16):相邻语义核的步长。
* `init_blocks`(默认值:1):每个 query token 关注的初始的块数量,用于确保关注序列开头部分。
* `block_size`(默认值:64):key-value blocks 的块大小。
* `window_size`(默认值:2048):局部滑动窗口大小。
* `topk`(默认值:64):每个 token 仅与最相关的 top-k 个 key-value blocks 计算注意力。
* `use_nope`(默认值:false):是否在块选择中使用NOPE技术以提升性能。
* `dense_len`(默认值:8192):稀疏注意力对短序列收益有限,当 token 长度低于此阈值时自动切换为标准注意力。设为 `-1` 则强制始终使用稀疏注意力。
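除了直接修改 config.json,也可以在加载模型时以代码方式注入上述配置。以下写法仅为示意(假设 MiniCPM4 的远程建模代码会从 config 中读取 `sparse_config` 字段):
```python
# 示意:加载时动态注入 sparse_config(假设远程代码读取该字段,非官方示例)
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

path = "openbmb/MiniCPM4-8B"
config = AutoConfig.from_pretrained(path, trust_remote_code=True)
config.sparse_config = {
    "kernel_size": 32,
    "kernel_stride": 16,
    "init_blocks": 1,
    "block_size": 64,
    "window_size": 2048,
    "topk": 64,
    "use_nope": False,
    "dense_len": 8192,
}
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, config=config, torch_dtype=torch.bfloat16, device_map="cuda", trust_remote_code=True
)
```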
Minicpm4 原生支持 32,768 tokens 的上下文长度。若对话总长度(输入 + 输出)远超此限制,建议通过 RoPE 缩放技术扩展上下文。我们已验证通过调整 LongRoPE 因子,模型可稳定支持 131,072 tokens 的超长上下文。
修改方法:在 `config.json` 文件中调整 `rope_scaling` 字段参数即可。
```json
{
...,
"rope_scaling": {
"rope_type": "longrope",
"long_factor": [0.9977997200264581, 1.014658295992452, 1.0349680404997148, 1.059429246056193, 1.0888815016813513, 1.1243301355211495, 1.166977103606075, 1.2182568066927284, 1.2798772354275727, 1.3538666751582975, 1.4426259039919596, 1.5489853358570191, 1.6762658237220625, 1.8283407612492941, 2.0096956085876183, 2.225478927469756, 2.481536379650452, 2.784415934557119, 3.1413289096347365, 3.560047844772632, 4.048719380066383, 4.752651957515948, 5.590913044973868, 6.584005926629993, 7.7532214876576155, 9.119754865903639, 10.704443927019176, 12.524994176518703, 14.59739595363613, 16.93214476166354, 19.53823297353041, 22.417131025031697, 25.568260840911098, 28.991144156566317, 32.68408069090375, 36.65174474170465, 40.90396065611201, 45.4664008671033, 50.37147343433591, 55.6804490772103, 61.470816952306556, 67.8622707390618, 75.00516023410414, 83.11898235973767, 92.50044360202462, 103.57086856690864, 116.9492274587385, 118.16074567836519, 119.18497548708795, 120.04810876261652, 120.77352815196981, 121.38182790207875, 121.89094985353891, 122.31638758099915, 122.6714244963338, 122.9673822552567, 123.21386397019609, 123.41898278254268, 123.58957065488238, 123.73136519024158, 123.84917421274221, 123.94701903496814, 124.02825801299717, 124.09569231686116],
"short_factor": [0.9977997200264581, 1.014658295992452, 1.0349680404997148, 1.059429246056193, 1.0888815016813513, 1.1243301355211495, 1.166977103606075, 1.2182568066927284, 1.2798772354275727, 1.3538666751582975, 1.4426259039919596, 1.5489853358570191, 1.6762658237220625, 1.8283407612492941, 2.0096956085876183, 2.225478927469756, 2.481536379650452, 2.784415934557119, 3.1413289096347365, 3.560047844772632, 4.048719380066383, 4.752651957515948, 5.590913044973868, 6.584005926629993, 7.7532214876576155, 9.119754865903639, 10.704443927019176, 12.524994176518703, 14.59739595363613, 16.93214476166354, 19.53823297353041, 22.417131025031697, 25.568260840911098, 28.991144156566317, 32.68408069090375, 36.65174474170465, 40.90396065611201, 45.4664008671033, 50.37147343433591, 55.6804490772103, 61.470816952306556, 67.8622707390618, 75.00516023410414, 83.11898235973767, 92.50044360202462, 103.57086856690864, 116.9492274587385, 118.16074567836519, 119.18497548708795, 120.04810876261652, 120.77352815196981, 121.38182790207875, 121.89094985353891, 122.31638758099915, 122.6714244963338, 122.9673822552567, 123.21386397019609, 123.41898278254268, 123.58957065488238, 123.73136519024158, 123.84917421274221, 123.94701903496814, 124.02825801299717, 124.09569231686116],
"original_max_position_embeddings": 32768
}
}
```
#### vLLM
* 安装
参照 vLLM [官方仓库](https://github.com/vllm-project/vllm),通过*源码*安装最新版本。
```
pip install -U vllm \
--pre \
--extra-index-url https://wheels.vllm.ai/nightly
```
* 使用 vLLM 推理 MiniCPM4-8B 模型:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_name = "openbmb/MiniCPM4-8B"
prompt = [{"role": "user", "content": "推荐5个北京的景点。"}]
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
llm = LLM(
model=model_name,
trust_remote_code=True,
max_num_batched_tokens=32768,
dtype="bfloat16",
gpu_memory_utilization=0.8,
)
sampling_params = SamplingParams(top_p=0.7, temperature=0.7, max_tokens=1024, repetition_penalty=1.02)
outputs = llm.generate(prompts=input_text, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
* 在 vLLM 中使用 Eagle 投机解码:只需如下初始化推理引擎
```python
llm = LLM(
model=model_name,
trust_remote_code=True,
max_num_batched_tokens=32768,
dtype="bfloat16",
gpu_memory_utilization=0.8,
speculative_config={
"method": "eagle",
"model": "openbmb/MiniCPM4-8B-Eagle-vLLM",
"num_speculative_tokens": 2,
"max_model_len": 32768,
},
)
```
* 在 vLLM 中推理量化后的 MiniCPM4-8B:只需如下初始化推理引擎
```python
llm = LLM(
model="openbmb/MiniCPM4-8B-marlin-vLLM",
trust_remote_code=True,
max_num_batched_tokens=32768,
dtype="bfloat16",
gpu_memory_utilization=0.8,
)
```
* 在 vLLM 中使用 Eagle 投机解码推理量化后的 MiniCPM4-8B:只需如下初始化推理引擎
```python
llm = LLM(
model="openbmb/MiniCPM4-8B-marlin-vLLM",
trust_remote_code=True,
max_num_batched_tokens=32768,
dtype="bfloat16",
gpu_memory_utilization=0.8,
speculative_config={
"method": "eagle",
"model": "openbmb/MiniCPM4-8B-marlin-Eagle-vLLM",
"num_speculative_tokens": 2,
"max_model_len": 32768,
},
)
```
> **注意**:如果你使用 vLLM 中的 OpenAI 兼容的服务端,`chat` API 默认会将 `add_special_tokens` 设置为 `False`。这会导致缺失一些特殊标记(例如,BOS),而这些标记对 **MiniCPM4** 模型至关重要。为确保模型行为正常,你需要在 API 调用中显式设置 `extra_body={"add_special_tokens": True}`,如下所示:
```python
import openai
client = openai.Client(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
model="openbmb/MiniCPM4-8B",
messages=[
{"role": "user", "content": "Write an article about Artificial Intelligence."},
],
temperature=0.7,
max_tokens=1024,
extra_body={"add_special_tokens": True}, # 确保添加了诸如 BOS 等特殊标记
)
print(response.choices[0].message.content)
```
#### SGLang
* 安装
参考 SGLang [官方仓库](https://github.com/sgl-project/sglang),通过*源码*安装。
```
git clone -b openbmb https://github.com/OpenBMB/sglang.git
cd sglang
pip install --upgrade pip
pip install -e "python[all]"
```
* 启动推理服务
```shell
python -m sglang.launch_server --model openbmb/MiniCPM4-8B --trust-remote-code --port 30000 --chat-template chatml
```
* 然后用户可以通过运行以下命令来使用聊天界面:
```python
import openai
client = openai.Client(base_url=f"http://localhost:30000/v1", api_key="None")
response = client.chat.completions.create(
model="openbmb/MiniCPM4-8B",
messages=[
{"role": "user", "content": "Write an article about Artificial Intelligence."},
],
temperature=0.7,
max_tokens=1024,
)
print(response.choices[0].message.content)
```
* 使用投机加速
```shell
python3 -m sglang.launch_server --model-path [model] \
--speculative_draft_model_path [draft_model] \
--host 0.0.0.0 --trust-remote-code \
--speculative-algorithm EAGLE --speculative-num-steps 1 --speculative-eagle-topk 1 --speculative-num-draft-tokens 2 \
--mem-fraction 0.5
```
### 模型微调
#### LLaMA-Factory
目前模型微调支持 [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory),使用方法参考 [LLaMA-Factory 微调](https://t0mvtyikswc.feishu.cn/docx/Gv6ld1yCTodckBxysKgcpepJnKg?from=from_copylink)
## MiniCPM 3.0
<details>
<summary>查看 MiniCPM 3.0 的详细信息</summary>
MiniCPM 3.0 是一个 4B 参数量的语言模型,相比 MiniCPM1.0/2.0,功能更加全面,综合能力大幅提升,多数评测集上的效果比肩甚至超越众多 7B-9B 模型。
* **支持工具调用🛠️(Function Calling)和代码解释器💻(Code Interpreter)**:在 [Berkeley Function Calling Leaderboard (BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html) 上取得 9B 规模以下 SOTA,超越 GLM-4-9B-Chat、Qwen2-7B-Instruct。
* **超强的推理能力🧮**:数学能力方面,[MathBench](https://open-compass.github.io/MathBench/) 上的效果超越 GPT-3.5-Turbo 以及多个 7B-9B 模型。在非常具有挑战性的 [LiveCodeBench](https://livecodebench.github.io/) 上,效果超越 Llama3.1-8B-Instruct。
* **出色的中英文指令遵循能力🤖**:英文指令遵循 [IFEval](https://huggingface.co/datasets/google/IFEval)、中文指令遵循 [FollowBench-zh](https://huggingface.co/datasets/YuxinJiang/FollowBench) 效果超越 GLM-4-9B-Chat、Qwen2-7B-Instruct。
* **长文本能力**:原生支持 32k 上下文长度,32k 长度内大海捞针全绿。提出 [LLMxMapReduce](https://github.com/thunlp/LLMxMapReduce) ,理论可处理的上下文长度达到 +∞,在综合性长文本评测基准 [InfiniteBench](https://github.com/OpenBMB/InfiniteBench) 平均得分超越GPT-4、KimiChat等标杆模型。
* **RAG能力**:我们发布了 [MiniCPM RAG 套件](https://huggingface.co/collections/openbmb/minicpm-rag-suite-66d976b4204cd0a4f8beaabb)。基于 MiniCPM 系列模型的 [MiniCPM-Embedding](https://huggingface.co/openbmb/MiniCPM-Embedding) 和 [MiniCPM-Reranker](https://huggingface.co/openbmb/MiniCPM-Reranker) 在中文、中英跨语言检索测试中取得 SOTA 表现;针对 RAG 场景的 [MiniCPM3-RAG-LoRA](https://huggingface.co/openbmb/MiniCPM3-RAG-LoRA) 在开放域问答等多项任务上超越 Llama3-8B、Baichuan2-13B 等模型。
### 评测结果
#### 综合评测
<table>
<tr>
<td>评测集</td>
<td>Qwen2-7B-Instruct</td>
<td>GLM-4-9B-Chat</td>
<td>Gemma2-9B-it</td>
<td>Llama3.1-8B-Instruct</td>
<td>GPT-3.5-Turbo-0125</td>
<td>Phi-3.5-mini-Instruct(3.8B)</td>
<td>MiniCPM3-4B </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>英文能力</strong></td>
</tr>
<tr>
<td>MMLU</td>
<td>70.5</td>
<td>72.4</td>
<td>72.6</td>
<td>69.4</td>
<td>69.2</td>
<td>68.4</td>
<td>67.2 </td>
</tr>
<tr>
<td>BBH</td>
<td>64.9</td>
<td>76.3</td>
<td>65.2</td>
<td>67.8</td>
<td>70.3</td>
<td>68.6</td>
<td>70.2 </td>
</tr>
<tr>
<td>MT-Bench</td>
<td>8.41</td>
<td>8.35</td>
<td>7.88</td>
<td>8.28</td>
<td>8.17</td>
<td>8.60</td>
<td>8.41 </td>
</tr>
<tr>
<td>IFEVAL (Prompt Strict-Acc.)</td>
<td>51.0</td>
<td>64.5</td>
<td>71.9</td>
<td>71.5</td>
<td>58.8</td>
<td>49.4</td>
<td>68.4 </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>中文能力</strong></td>
</tr>
<tr>
<td>CMMLU</td>
<td>80.9</td>
<td>71.5</td>
<td>59.5</td>
<td>55.8</td>
<td>54.5</td>
<td>46.9</td>
<td>73.3 </td>
</tr>
<tr>
<td>CEVAL</td>
<td>77.2</td>
<td>75.6</td>
<td>56.7</td>
<td>55.2</td>
<td>52.8</td>
<td>46.1</td>
<td>73.6 </td>
</tr>
<tr>
<td>AlignBench v1.1</td>
<td>7.10</td>
<td>6.61</td>
<td>7.10</td>
<td>5.68</td>
<td>5.82</td>
<td>5.73</td>
<td>6.74 </td>
</tr>
<tr>
<td>FollowBench-zh (SSR)</td>
<td>63.0</td>
<td>56.4</td>
<td>57.0</td>
<td>50.6</td>
<td>64.6</td>
<td>58.1</td>
<td>66.8 </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>数学能力</strong></td>
</tr>
<tr>
<td>MATH</td>
<td>49.6</td>
<td>50.6</td>
<td>46.0</td>
<td>51.9</td>
<td>41.8</td>
<td>46.4</td>
<td>46.6 </td>
</tr>
<tr>
<td>GSM8K</td>
<td>82.3</td>
<td>79.6</td>
<td>79.7</td>
<td>84.5</td>
<td>76.4</td>
<td>82.7</td>
<td>81.1 </td>
</tr>
<tr>
<td>MathBench</td>
<td>63.4</td>
<td>59.4</td>
<td>45.8</td>
<td>54.3</td>
<td>48.9</td>
<td>54.9</td>
<td>65.6 </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>代码能力</strong></td>
</tr>
<tr>
<td>HumanEval+</td>
<td>70.1</td>
<td>67.1</td>
<td>61.6</td>
<td>62.8</td>
<td>66.5</td>
<td>68.9</td>
<td>68.3 </td>
</tr>
<tr>
<td>MBPP+</td>
<td>57.1</td>
<td>62.2</td>
<td>64.3</td>
<td>55.3</td>
<td>71.4</td>
<td>55.8</td>
<td>63.2 </td>
</tr>
<tr>
<td>LiveCodeBench v3</td>
<td>22.2</td>
<td>20.2</td>
<td>19.2</td>
<td>20.4</td>
<td>24.0</td>
<td>19.6</td>
<td>22.6 </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>工具调用能力</strong></td>
</tr>
<tr>
<td>BFCL v2</td>
<td>71.6</td>
<td>70.1</td>
<td>19.2</td>
<td>73.3</td>
<td>75.4</td>
<td>48.4</td>
<td>76.0 </td>
</tr>
<tr>
<td colspan="15" align="left"><strong>综合能力</strong></td>
</tr>
<tr>
<td>平均分</td>
<td>65.3</td>
<td>65.0</td>
<td>57.9</td>
<td>60.8</td>
<td>61.0</td>
<td>57.2</td>
<td><strong>66.3</strong></td>
</tr>
</table>
#### 工具调用能力
我们在 [Berkeley Function Calling Leaderboard (BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html) 上测试了模型的工具调用能力,MiniCPM3-4B 在该榜单上的表现超越了多个 7B-9B 参数量的模型,优于 GPT-3.5-Turbo-0125。
<table>
<tr>
<td>模型</td>
<td>总体准确率</td>
<td>AST Summary</td>
<td>Exec Summary</td>
<td>Irrelevance Detection</td>
<td>Relevance Detection </td>
</tr>
<tr>
<td>MiniCPM3-4B</td>
<td>76.03%</td>
<td>68.55%</td>
<td>85.54%</td>
<td>53.71%</td>
<td>90.24% </td>
</tr>
<tr>
<td>Llama3.1-8B-Instruct</td>
<td>73.28%</td>
<td>64.61%</td>
<td>86.48%</td>
<td>43.12%</td>
<td>85.37% </td>
</tr>
<tr>
<td>Qwen2-7B-Instruct</td>
<td>71.61%</td>
<td>65.71%</td>
<td>79.57%</td>
<td>44.70%</td>
<td>90.24% </td>
</tr>
<tr>
<td>GLM-4-9B-Chat</td>
<td>70.08%</td>
<td>60.69%</td>
<td>80.02%</td>
<td>55.02%</td>
<td>82.93% </td>
</tr>
<tr>
<td>Phi-3.5-mini-instruct</td>
<td>48.44%</td>
<td>38.89%</td>
<td>54.04%</td>
<td>46.78%</td>
<td>65.85% </td>
</tr>
<tr>
<td>Gemma2-9B-it</td>
<td>19.18%</td>
<td>5.41%</td>
<td>18.50%</td>
<td>88.88%</td>
<td>7.32%</td>
</tr>
</table>
#### 长文本能力
在 32k 的上下文长度进行[大海捞针](https://github.com/gkamradt/LLMTest_NeedleInAHaystack)测试,结果如下图:
![needle](assets/minicpm3/eval_needle.jpeg)
同时我们提出[LLMxMapReduce](https://github.com/thunlp/LLMxMapReduce),利用分治的策略,理论上可以处理无限长度的文本。我们在[InfiniteBench](https://github.com/OpenBMB/InfiniteBench)上测试了模型的长文本处理能力,在LLMxMapReduce框架的加持下,MiniCPM3-4B在这个榜单的平均得分能够超越 GPT-4、KimiChat 等标杆模型。
| | Context length| Qwen2-70b | Kimi-Chat(2024.06) | GPT-4 (From InfiniteBench) | MiniCPM 3.0 x MR | Qwen2-70b x MR | Llama3-70bx MR |
| ----------------------------- | ---------- | --------- | ------------------ | -------------------------- | --------------- | ------------ | ------------- |
| Math.Find | 87.9k | 59.71% | 18.57% | 60.00% | 83.43% | 54.29% | **91.43%** |
| Retrieve.KV | 89.9k | 29.00% | 69.20% | 89.00% | 93.80% | 98.80% | **98.89%** |
| En.Dia | 103.6K | 23.00% | 23.00% | 7.50% | 12.50% | **46.50%** | 17.50% |
| Code.Debug | 114.7k | 45.43% | 38.32% | 54.31% | 25.63% | 54.82% | **62.94%** |
| Retrieve.Number | 122.4k | **100.00%** | 97.45% | **100.00%** | 99.32% | **100.00%** | 99.79% |
| Retrieve.PassKey | 122.4k | **100.00%** | 99.32% | **100.00%** | 98.81% | **100.00%** | **100.00%** |
| En.Sum | 171.5K | 31.85% | 29.94% | 14.73% | 25.89% | **32.39%** | 30.63% |
| En.MC | 184.4k | 81.66% | 79.91% | 68.12% | 66.38% |**83.84%** | 82.10% |
| En.QA | 192.6k | 21.97% | 18.80% | 22.44% | 28.39% | 23.13% | **34.70%** |
| Zh.QA | 2068.6k | 21.40% | 19.84% | **25.96%** | 23.66% | 19.10% | N/A |
| avg w/o Zh.QA | / | 51.92% | 52.96% | 55.33% | 59.29% | 64.98% | **68.64%** |
| avg | / | 48.86% | 49.65% | 52.39% | 55.55% | **60.39%** | N/A |
### 模型推理
#### Huggingface
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM3-4B'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)
responds, history = model.chat(tokenizer, "请写一篇关于人工智能的文章,详细介绍人工智能的未来发展和隐患。", temperature=0.7, top_p=0.7)
print(responds)
```
#### SGLang(推荐)
* 安装
参考 SGLang [官方仓库](https://github.com/sgl-project/sglang),通过*源码*安装最新版本。
* 启动推理服务
```shell
python -m sglang.launch_server --model openbmb/MiniCPM3-4B --trust-remote-code --port 30000 --chat-template chatml
```
* 使用示例
```python
from sglang import function, system, user, assistant, gen, set_default_backend, RuntimeEndpoint
@function
def multi_turn_question(s, question_1, question_2):
    s += user(question_1)
    s += assistant(gen("answer_1", max_tokens=1024))
    s += user(question_2)
    s += assistant(gen("answer_2", max_tokens=1024))
set_default_backend(RuntimeEndpoint("http://localhost:30000"))
state = multi_turn_question.run(
    question_1="介绍一下人工智能",
    question_2="写一篇关于它的文章",
)
for m in state.messages():
    print(m["role"], ":", m["content"])
```
#### vLLM
* 安装 vllm
```shell
pip install "vllm>=0.6.2"
```
* 推理
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_name = "openbmb/MiniCPM3-4B"
prompt = [{"role": "user", "content": "请写一篇关于人工智能的文章,详细介绍人工智能的未来发展和隐患。"}]
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
llm = LLM(model=model_name,
trust_remote_code=True,
tensor_parallel_size=1
)
sampling_params = SamplingParams(top_p=0.7, temperature=0.7, max_tokens=1024)
outputs = llm.generate(prompts=input_text, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
#### llama.cpp
我们提供了 MiniCPM3 的 [GGUF 版本](https://huggingface.co/openbmb/MiniCPM3-4B-GGUF),可以直接使用 llama.cpp 推理。
* 安装 llama.cpp
```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```
* 推理
```shell
./llama-cli -c 1024 -m minicpm3-4b-fp16.gguf -n 1024 --top-p 0.7 --temp 0.7 --prompt "<|im_start|>user\n请写一篇关于人工智能的文章,详细介绍人工智能的未来发展和隐患。<|im_end|>\n<|im_start|>assistant\n"
```
### 模型微调
#### LLaMA-Factory
目前模型微调支持 [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory),使用方法参考 [LLaMA-Factory 微调](https://modelbest.feishu.cn/docx/Z7USdW4lloZzkZxQ14icJ3senjb?from=from_copylink)
### 进阶功能
对于以下进阶功能,我们的样例代码中使用 [vLLM](#vllm) 进行推理。
#### 工具调用
我们提供了使用 MiniCPM3 调用工具的示例代码:
```bash
cd demo/minicpm3/function_call
python function_call.py
```
如果你想启动一个能够调用工具的推理服务,使用以下代码:
```bash
cd demo/minicpm3/function_call
pip install -r requirements.txt
python openai_api_server.py \
--model openbmb/MiniCPM3-4B \
--served-model-name MiniCPM3-4B \
--chat-template chatml.jinja \
--dtype auto \
--api-key token-abc123 \
--tensor-parallel-size 1 \
--trust-remote-code
```
下面是一个调用搜索工具回答问题的演示:
![function_call](./assets/minicpm3/function_call.gif)
#### 代码解释器
我们提供了一个 MiniCPM3 使用代码解释器的示例代码:
```bash
cd demo/minicpm3/code_interpreter
pip install -r requirements.txt
python code_interpreter.py openbmb/MiniCPM3-4B
```
下面是一个使用代码解释器生成二维码的演示:
![code_interpreter](./assets/minicpm3/code_interpreter.gif)
</details>
## MiniCPM 2.0
<details>
<summary>查看 MiniCPM 2.0 的详细信息</summary>
MiniCPM 2.0 系列模型对 MiniCPM 进行了多个维度的升级,包括以下模型版本:
- MiniCPM-2B-128k:将 MiniCPM-2B 的上下文长度从 4k 扩展至 128k,在 InfiniteBench 测试集上优于 ChatGLM3-6B-128k、Yi-6B-200k 等更大参数量的模型。
- MiniCPM-MoE-8x2B:基于 MiniCPM-2B 进行 MoE 扩展,综合表现相比于 MiniCPM-2B 平均提高 4.5 个百分点。
- MiniCPM-1B:相比于 MiniCPM-2B 成本下降 60%,综合表现仍然优于 LLaMA2-13B。
- MiniCPM-S-1B:在保持下游任务性能无损的前提下,FFN 层实现了 87.89% 的平均稀疏度,将 FFN FLOPs 降低了 84%。结合 PowerInfer 推理框架,解码速度提升约 2.8 倍。
### 评测结果
#### MiniCPM-2B-128k 模型评测
| Model | avg | avg w/o code&math | passkey | number_string | kv_retrieval | longbook_choice_eng | longbook_qa_chn | longbook_qa_eng | longbook_sum_eng | longdialogue_qa_eng | math_calc | math_find | code_debug | code_run |
|-------------------------------------|-------|-------------------|---------|---------------|--------------|---------------------|-----------------|-----------------|------------------|---------------------|-----------|-----------|------------|----------|
| LWM-Text-128k | 24.45 | 33.62 | 100 | 97.8 | 0.6 | 28.82 | 15.93 | 14.31 | 9.99 | 1.5 | 0 | 3.43 | 20.05 | 1 |
| Yarn-Mistral-7b-128k | 19.84 | 27.36 | 92.71 | | 0 | 27.95 | 15.49 | 9.55 | 9.06 | 7.5 | 0 | 17.14 | 0.76 | 1.25 |
| Mistral-7B-Instruct-v0.2(ABF 1000w) | 27.75 | 36.9 | 100 | 78.98 | 3.6 | 37.12 | 11.74 | 17.37 | 21.12 | 9.5 | 0 | 29.43 | 17.51 | 0 |
| Yi-6B-200k | 22.15 | 32.54 | 100 | 94.92 | 0 | 36.68 | 15.07 | 9.2 | 0.92 | 3.5 | 0 | 4.29 | 0.51 | 0.75 |
| chatglm3-6b-128k | 25.58 | 36.57 | 89.93 | 99.66 | 5.2 | 46.29 | 10.7 | 8.38 | 25.91 | 6.5 | 0 | 8 | 5.33 | 1 |
| MiniCPM-2.4B-128k | 27.32 | 37.68 | 98.31 | 99.83 | 9 | 29.69 | 23.06 | 16.33 | 15.73 | 9.5 | 0 | 4.29 | 22.08 | 0 |
#### MiniCPM-MoE-8x2B 模型评测
<div align="left">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th nowrap="nowrap" >BBH</th>
<th nowrap="nowrap" >MMLU</th>
<th nowrap="nowrap" >CEval</th>
<th nowrap="nowrap" >CMMLU</th>
<th nowrap="nowrap" >HumanEval</th>
<th nowrap="nowrap" >MBPP&dagger;</th>
<th nowrap="nowrap" >GSM8K</th>
      <th nowrap="nowrap" >MATH</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td nowrap="nowrap" align="left">Llama2-34B*</td>
<td>44.1</td>
<td>62.6</td>
<td>-</td>
<td>-</td>
<td>22.6</td>
<td>33.0</td>
<td>42.2</td>
<td>6.24</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Mistral-7B-Instruct-v0.2</td>
<td>39.81</td>
<td>60.51</td>
<td>42.55</td>
<td>41.92</td>
<td>36.59</td>
<td>39.63</td>
<td>40.49</td>
<td>4.95</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Gemma-7B*</td>
<td>55.1</td>
<td>64.3</td>
<td>-</td>
<td>-</td>
<td>32.3</td>
<td>44.4</td>
<td>46.4</td>
<td>24.3</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Qwen1.5-7B*</td>
<td>40.2</td>
<td>61</td>
<td>74.1</td>
<td>73.1</td>
<td>36</td>
<td>37.4</td>
<td>62.5</td>
<td>20.3</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Deepseek-MoE(16B)*</td>
<td>-</td>
<td>45.0</td>
<td>40.6</td>
<td>42.5</td>
<td>26.8</td>
<td>39.2</td>
<td>18.8</td>
<td>4.3</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><b>MiniCPM-2.4B</b></td>
<td>36.87</td>
<td>53.46</td>
<td>51.13</td>
<td>51.07</td>
<td>50.00</td>
<td>35.93</td>
<td>53.83</td>
<td>10.24</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><b>MiniCPM-MoE-8x2B</b></td>
<td>39.22</td>
<td>58.90</td>
<td>58.11</td>
<td>58.80</td>
<td>55.49</td>
<td>41.68</td>
<td>61.56</td>
<td>10.52</td>
</tr>
</tbody>
</table>
</div>
注:* 表示结果取自技术报告。&dagger; 表示评测集为MBPP全集。
#### MiniCPM-S-1B 评测结果
- 代码生成:在 HumanEval(0-shot)和 MBPP(3-shot)上的平均 pass@1 得分。
- 常识推理:在 PIQA、SIQA、HellaSwag、WinoGrande 和 COPA 上的平均 0-shot 准确率。
- 阅读理解:在 BoolQ、LAMBADA 和 TyDi QA 上的平均 0-shot 准确率。
- 其他测试集:我们报告在 GSM8K(8-shot)、MMLU(5-shot)、BBH(3-shot)和 AGI-Eval(0-shot)上的平均准确率。
| Setting | Average<br>Sparsity | Average<br>Performance | Code<br>Generation | Commonsense<br>Reasoning | Reading<br>Comprehension | GSM8K | MMLU | BBH | AGI Eval |
| :-------------------: | :----------------: | :----------------------: | :----------------------: | :---: | :---: | :---: | :---------: | :-----: | :-----------------: |
| LLaMA2-7B | - | 37.96 | 16.37 | 69.59 | 61.87 | 12.96 | 44.45 | 32.96 | 27.53 |
| ReluLLaMA-7B | 66.98 | 37.62 | 15.85 | 69.64 | 70.54 | 5.84 | 38.64 | 35.07 | 27.73 |
| **ProSparse-7B**\* | 88.11 | 38.31 | 19.47 | 66.29 | 63.33 | 12.74 | 45.21 | 33.59 | 27.55 |
| **ProSparse-7B** | **89.32** | **38.46** | 19.42 | 66.27 | 63.50 | 12.13 | 45.48 | 34.99 | 27.46 |
| LLaMA2-13B | - | 44.06 | 20.19 | 72.58 | 71.55 | 22.21 | 54.69 | 37.89 | 29.33 |
| ReluLLaMA-13B | 71.56 | 42.74 | 20.19 | 70.44 | 73.29 | 18.50 | 50.58 | 37.97 | 28.22 |
| **ProSparse-13B**\* | 87.97 | **45.07** | 29.03 | 69.75 | 67.54 | 25.40 | 54.78 | 40.20 | 28.76 |
| **ProSparse-13B** | **88.80** | 44.90 | 28.42 | 69.76 | 66.91 | 26.31 | 54.35 | 39.90 | 28.67 |
| MiniCPM-1B | - | 44.44 | 36.85 | 63.67 | 60.90 | 35.48 | 50.44 | 35.03 | 28.71 |
| **MiniCPM-S-1B**\* | 86.25 | **44.72** | 41.38 | 64.55 | 60.69 | 34.72 | 49.36 | 34.04 | 28.27 |
| **MiniCPM-S-1B** | **87.89** | **44.72** | 42.04 | 64.37 | 60.73 | 34.57 | 49.51 | 34.08 | 27.77 |
注:
1. ReluLLaMA-7B 和 ReluLLaMA-13B 的下载链接分别是 [7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B) and [13B](https://huggingface.co/SparseLLM/ReluLLaMA-13B)。"ProSparse-7B\*"、"ProSparse-13B\*" 和 "MiniCPM-S-1B\*" 代表没有激活阈值偏移的 ProSparse 版本。
2. 对于 PIQA、SIQA、HellaSwag、WinoGrande、COPA、BoolQ、LAMBADA、TyDi QA 和 AGI-Eval,我们根据各个选项的 PPL 来进行答案选择。对于 GSM8K、MMLU 和 BBH,我们直接生成答案。
### 模型推理
#### HuggingFace、vLLM推理
参考 MiniCPM 1.0 中的[模型推理](#huggingface-推理)部分。
#### Powerinfer 推理
针对 MiniCPM-S-1B 模型,我们可以使用 Powerinfer 进行推理加速,使用方法如下:
1. 保证cmake版本3.17以上,如果已经安装过,则跳过此步骤
```bash
# 下载安装包
sudo wget https://cmake.org/files/v3.23/cmake-3.23.0.tar.gz
# 解压安装包并进入源码目录
sudo tar -zxvf cmake-3.23.0.tar.gz
cd cmake-3.23.0
# 配置安装环境
sudo ./configure
sudo make -j8
# 编译安装
sudo make install
# 查看安装后版本
cmake --version
# 返回版本号则安装成功
#cmake version 3.23.0
```
2. Install PowerInfer:
```bash
git clone https://github.com/SJTU-IPADS/PowerInfer
cd PowerInfer
pip install -r requirements.txt # install Python helpers' dependencies
```
3. Build the CPU version of PowerInfer. If your machine has only a CPU, or you only want to run inference on CPU, run:
```bash
cmake -S . -B build
cmake --build build --config Release
```
4. Build the GPU version of PowerInfer. If your machine has a GPU, run:
```bash
cmake -S . -B build -DLLAMA_CUBLAS=ON
cmake --build build --config Release
```
5. Download the sparse model:
```bash
git clone https://huggingface.co/openbmb/MiniCPM-S-1B-sft-gguf
# or
git clone https://modelscope.cn/models/OpenBMB/MiniCPM-S-1B-sft-gguf
```
6. Run model inference:
```bash
cd PowerInfer
# Command template: output_token_count is the maximum number of output tokens, thread_num is the number of threads, and prompt is the input prompt string
#./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt
# Example:
./build/bin/main -m /root/ld/ld_model_pretrain/1b-s-minicpm/MiniCPM-S-1B-sft.gguf -n 2048 -t 8 -p '<用户>hello,tell me a story please.<AI>'
```
</details>
## MiniCPM 1.0
<details>
<summary>View details of MiniCPM 1.0</summary>
The MiniCPM-2B language model has 2.4B non-embedding parameters and 2.7B parameters in total.
- After SFT, MiniCPM-2B performs on par with Mistral-7B on public benchmarks (with better Chinese, math, and coding abilities), and its overall performance surpasses models such as Llama2-13B, MPT-30B, and Falcon-40B.
- After DPO, MiniCPM-2B also surpasses many representative open-source models on MTBench, including Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, and Zephyr-7B-alpha.
Note: to keep the model general-purpose for academic research, we **did not perform any identity training on MiniCPM-2B**. Also, since part of the training data comes from the open-source ShareGPT corpus, the model may output identity information similar to that of the GPT series.
### Evaluation Results
#### Evaluation Settings
* Since it is hard to standardize LLM evaluation, and many benchmarks have no public prompts or test code, we can only try to make our concrete evaluation setup suitable for all kinds of models.
* Overall, we use a unified prompt input for testing and adjust the input according to each model's corresponding template.
* **The evaluation scripts and prompts are open-sourced in our GitHub repository, and we welcome more developers to keep improving our evaluation methods.**
* For text evaluation, we use our open-source LLM capability evaluation framework [UltraEval](https://github.com/OpenBMB/UltraEval). The reproduction steps for open-source models are as follows:
* Install UltraEval
```shell
git clone https://github.com/OpenBMB/UltraEval.git
cd UltraEval
pip install -e .
```
* Download and extract the data, then preprocess it
```shell
wget -O RawData.zip "https://cloud.tsinghua.edu.cn/f/71b5232264ae4833a4d0/?dl=1"
unzip RawData.zip
python data_process.py
```
* Run the evaluation script (a template is provided and can be customized)
```shell
bash run_eval.sh
```
#### Deployment Mode
* Because MiniCPM uses the muP structure, its computation differs slightly from existing models, so we implemented our model on top of vllm==0.2.2.
* **For non-MiniCPM models, we used the then-latest vllm==0.2.7 for inference.**
#### Evaluation Metrics
* For QA tasks (multiple-choice tasks), we test in two ways (see the sketch after this list):
  * PPL: treat each option as a continuation of the question and choose the answer according to the PPL of each option;
  * Direct generation: generate the answer option directly.
* The two methods can yield quite different results for different models. MiniCPM's scores are close under both modes, while models such as Mistral-7B-v0.1 do better under PPL and worse under direct generation.
* For each benchmark we take the higher score of the two methods as the final result, to keep the comparison fair (scores marked with * in the tables below use PPL).
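The following is a minimal, illustrative sketch of the PPL-based option selection described above, not the exact UltraEval implementation; the model id, the example question, and the `option_nll` helper are assumptions made purely for illustration:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "openbmb/MiniCPM-2B-sft-bf16"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, device_map="cuda", trust_remote_code=True
)

def option_nll(question: str, option: str) -> float:
    """Average negative log-likelihood of the option tokens given the question (a PPL proxy)."""
    q_len = tokenizer(question, return_tensors="pt").input_ids.shape[1]
    ids = tokenizer(question + option, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(ids).logits
    # the prediction for token i comes from the logits at position i-1
    log_probs = torch.log_softmax(logits[0, :-1].float(), dim=-1)
    targets = ids[0, 1:]
    token_lp = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    opt_len = ids.shape[1] - q_len  # number of option tokens (approximate at the tokenization boundary)
    return float(-token_lp[-opt_len:].mean())

question = "Question: Which mountain is the highest in the world? Answer: "
options = ["Mount Everest", "K2", "Kangchenjunga", "Lhotse"]
print(min(options, key=lambda o: option_nll(question, o)))  # lowest per-token NLL = lowest PPL
```
Under direct generation, the model is instead prompted with the question and the option list and the generated option label is parsed.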
#### Text Model Evaluation
**Cross-level comparison:**
|Model|Average|English Avg.|Chinese Avg.|C-Eval|CMMLU|MMLU|HumanEval|MBPP|GSM8K|MATH|BBH|ARC-E|ARC-C|HellaSwag|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Llama2-7B|35.40|36.21|31.765|32.42|31.11|44.32|12.2|27.17|13.57|1.8|33.23|75.25|42.75|75.62*|
|Qwen-7B|49.46|47.19|59.655|58.96|60.35|57.65|17.07|42.15|41.24|5.34|37.75|83.42|64.76|75.32*|
|Deepseek-7B|39.96|39.15|43.64|42.82|44.45|47.82|20.12|41.45|15.85|1.53|33.38|74.58*|42.15*|75.45*|
|Mistral-7B|48.97|49.96|44.54|46.12|42.96|62.69|27.44|45.2|33.13|5.0|41.06|83.92|70.73|80.43*|
|Llama2-13B|41.48|42.44|37.19|37.32|37.06|54.71|17.07|32.55|21.15|2.25|37.92|78.87*|58.19|79.23*|
|MPT-30B|38.17|39.82|30.72|29.34|32.09|46.56|21.95|35.36|10.31|1.56|38.22|78.66*|46.08*|79.72*|
|Falcon-40B|43.62|44.21|40.93|40.29|41.57|53.53|24.39|36.53|22.44|1.92|36.24|81.94*|57.68|83.26*|
|MiniCPM-2B|52.33|52.6|51.1|51.13|51.07|53.46|50.00|47.31|53.83|10.24|36.87|85.44|68.00|68.25|
**Same-level comparison:**
|Model|Average|English Avg.|Chinese Avg.|C-Eval|CMMLU|MMLU|HumanEval|MBPP|GSM8K|MATH|BBH|ARC-E|ARC-C|HellaSwag|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|TinyLlama-1.1B|25.36|25.55|24.525|25.02|24.03|24.3|6.71|19.91|2.27|0.74|28.78|60.77*|28.15*|58.33*|
|Qwen-1.8B|34.72|31.87|47.57|49.81|45.32|43.37|7.93|17.80|19.26|2.42|29.07|63.97*|43.69|59.28*|
|Gemini Nano-3B|-|-|-|-|-|-|-|27.2(report)|22.8(report)|-|42.4(report)|-|-|-|
|StableLM-Zephyr-3B|43.46|46.31|30.62|30.34|30.89|45.9|35.37|31.85|52.54|12.49|37.68|73.78|55.38|71.87*|
|Phi-2-2B|48.84|54.41|23.78|23.37|24.18|52.66|47.56|55.04|57.16|3.5|43.39|86.11|71.25|73.07*|
|MiniCPM-2B|52.33|52.6|51.10|51.13|51.07|53.46|50.00|47.31|53.83|10.24|36.87|85.44|68.00|68.25|
**Chat model comparison:**
|Model|Average|English Avg.|Chinese Avg.|C-Eval|CMMLU|MMLU|HumanEval|MBPP|GSM8K|MATH|BBH|ARC-E|ARC-C|HellaSwag|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|ChatGLM2-6B|37.98|35.17|50.63|52.05|49.21|45.77|10.37|9.38|22.74|5.96|32.6|74.45|56.82|58.48*|
|Mistral-7B-Instruct-v0.1|44.36|45.89|37.51|38.06|36.96|53.56|29.27|39.34|28.73|3.48|39.52|81.61|63.99|73.47*|
|Mistral-7B-Instruct-v0.2|50.91|52.83|42.235|42.55|41.92|60.51|36.59|48.95|40.49|4.95|39.81|86.28|73.38|84.55*|
|Qwen-7B-Chat|44.93|42.05|57.9|58.57|57.23|56.03|15.85|40.52|42.23|8.3|37.34|64.44*|39.25*|74.52*|
|Yi-6B-Chat|50.46|45.89|70.995|70.88|71.11|62.95|14.02|28.34|36.54|3.88|37.43|84.89|70.39|74.6*|
|Baichuan2-7B-Chat|44.68|42.74|53.39|53.28|53.5|53|21.34|32.32|25.25|6.32|37.46|79.63|60.15|69.23*|
|Deepseek-7B-chat|49.34|49.56|48.335|46.95|49.72|51.67|40.85|48.48|48.52|4.26|35.7|76.85|63.05|76.68*|
|Llama2-7B-Chat|38.16|39.17|33.59|34.54|32.64|47.64|14.02|27.4|21.15|2.08|35.54|74.28|54.78|75.65*|
|MiniCPM-2B|52.33|52.6|51.10|51.13|51.07|53.46|50.00|47.31|53.83|10.24|36.87|85.44|68.00|68.25|
**Comparison after DPO:**
|Model|MT-bench|
|---|---|
|GPT-4-turbo|9.32|
|GPT-3.5-turbo|8.39|
|Mistral-8*7b-Instruct-v0.1|8.30|
|Claude-2.1|8.18|
|Zephyr-7B-beta|7.34|
|**MiniCPM-2B**|**7.25**|
|Vicuna-33B|7.12|
|Zephyr-7B-alpha|6.88|
|LLaMA-2-70B-chat|6.86|
|Mistral-7B-Instruct-v0.1|6.84|
|MPT-34B-instruct|6.39|
### Quick Start
#### Online Demo
- [Colab](https://colab.research.google.com/drive/1tJcfPyWGWA5HezO7GKLeyeIso0HyOc0l?usp=sharing)
#### Gradio Web Demo
* Launch the Gradio-based web demo with the following commands:
```shell
# generation powered by vllm
python demo/minicpm/vllm_based_demo.py --model_path <vllmcpm_repo_path>
# generation powered by huggingface
python demo/minicpm/hf_based_demo.py --model_path <hf_repo_path>
```
#### HuggingFace Inference
##### MiniCPM-2B
Install `transformers>=4.36.0` and `accelerate`, then run the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM-2B-dpo-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)
responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.5, top_p=0.8, repetition_penalty=1.02)
print(responds)
```
##### MiniCPM-2B (Llama Format)
We have converted the MiniCPM weights into a [format](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16-llama-format) that the Llama code can load directly, so that everyone can try it:
```python
import torch
from transformers import LlamaTokenizerFast, LlamaForCausalLM
model_path = "openbmb/MiniCPM-2B-dpo-bf16-llama-format"
tokenizer = LlamaTokenizerFast.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)
prompt="Now you act like a terminal situated within a beginner's C++ practice repository folder, please provide the output for the command: `ls -l`"
input_ids = tokenizer.encode("<用户>{}<AI>".format(prompt), return_tensors='pt', add_special_tokens=True).cuda()
responds = model.generate(input_ids, temperature=0.3, top_p=0.8, repetition_penalty=1.02, max_length=1024)
responds = tokenizer.decode(responds[0], skip_special_tokens=True)
print(responds)
```
#### vLLM Inference
Install [vLLM](https://github.com/vllm-project/vllm):
```shell
pip install "vllm>=0.4.1"
```
The full inference code is available [here](#vllm); a minimal sketch is also shown below.
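As a quick reference, here is a minimal hedged sketch of vLLM inference with MiniCPM's chat markers; the model id and the sampling values below are illustrative choices, not prescribed settings:
```python
from vllm import LLM, SamplingParams

# illustrative model id and sampling settings
llm = LLM(model="openbmb/MiniCPM-2B-dpo-bf16", trust_remote_code=True)
params = SamplingParams(temperature=0.5, top_p=0.8, max_tokens=512)

# MiniCPM uses the <用户> / <AI> markers in its prompt format
prompt = "<用户>Which mountain is the highest one in Shandong Province?<AI>"
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```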
#### SGLang Inference
Install [SGLang](https://github.com/sgl-project/sglang)
* First, launch a server:
```bash
python -m sglang.launch_server --model-path openbmb/MiniCPM-2B-dpo-fp16 --trust-remote-code --port 30000
```
* Below is an example of the inference code:
```python
from sglang import function, gen, set_default_backend, RuntimeEndpoint
@function
def text_qa(s, question):
s += "<用户>" + question + "<AI>"
s += gen("answer", max_tokens=1024, temperature=0.7, top_p=0.7)
set_default_backend(RuntimeEndpoint("http://localhost:30000"))
state = text_qa.run(
question="What is the capital of China?",
)
print(state["answer"])
```
#### llama.cpp, Ollama, fastllm, and mlx_lm Inference
MiniCPM supports inference with [llama.cpp](https://github.com/ggerganov/llama.cpp/), [ollama](https://github.com/ollama/ollama), [fastllm](https://github.com/ztxz16/fastllm), and [mlx_lm](https://github.com/ml-explore/mlx-examples). Thanks to [@runfuture](https://github.com/runfuture) for adapting llama.cpp and ollama.
Please refer to the [edge deployment tutorial](https://modelbest.feishu.cn/wiki/VL5kw9DsEiRDmJkEyTUcydE0nie) in the MiniCPM knowledge base; a minimal Python sketch is also shown below.
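For a quick local test, a minimal sketch using the llama-cpp-python bindings is shown here; this is an assumption on our part (the tutorial above covers the official llama.cpp / Ollama flow), and the GGUF path is hypothetical:
```python
from llama_cpp import Llama

# hypothetical path to a MiniCPM GGUF file produced by the llama.cpp conversion step
llm = Llama(model_path="./MiniCPM-2B-dpo-q4_0.gguf", n_ctx=2048)

# MiniCPM chat markers; stop before the next user turn
output = llm("<用户>hello, tell me a joke.<AI>", max_tokens=256, stop=["<用户>"])
print(output["choices"][0]["text"])
```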
#### Model Quantization
Please refer to the [quantization guide](https://modelbest.feishu.cn/wiki/EatbwdLuvitbbMk2X5wcX6h5n7c) in the MiniCPM knowledge base.
#### Model Fine-tuning
- Efficient parameter fine-tuning on a single 1080/2080 GPU: [code](https://github.com/OpenBMB/MiniCPM/tree/main/finetune)
- mlx fine-tuning: [tutorial](https://modelbest.feishu.cn/wiki/AIU3wbREcirOm9kkvd7cxujFnMb#share-ASrDdvFAloHtycxfy85cLNhAnd3)
- [xtuner](https://github.com/InternLM/xtuner): [the go-to choice for efficient MiniCPM fine-tuning](https://modelbest.feishu.cn/wiki/AIU3wbREcirOm9kkvd7cxujFnMb#AMdXdzz8qoadZhxU4EucELWznzd)
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory.git): [one-stop MiniCPM fine-tuning solution](https://modelbest.feishu.cn/wiki/AIU3wbREcirOm9kkvd7cxujFnMb#BAWrdSjXuoFvX4xuIuzc8Amln5E)
</details>
## Open-Source License
#### Model License
* The code and MiniCPM model weights in this repository are open-sourced under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) license.
#### Statement
* As a language model, MiniCPM generates content by learning from large amounts of text, but it cannot understand or express personal opinions or value judgments; nothing it outputs represents the views or positions of the model developers.
* Users are therefore responsible for evaluating and verifying the content generated by MiniCPM before using it.
* We accept no liability for any problems arising from the use of the open-source MiniCPM models, including but not limited to data security issues, public opinion risks, or any risks and problems caused by the model being misled, misused, disseminated, or otherwise improperly exploited.
## Institutions
This project is jointly developed by the following institutions:
- <img src="assets/modelbest.png" width="28px"> [ModelBest](https://modelbest.cn/)
- <img src="assets/thunlp.png" width="28px"> [NLP Lab, Tsinghua University](https://nlp.csai.tsinghua.edu.cn/)
- <img src="assets/RUC.png" width="28px"> [Gaoling School of Artificial Intelligence, Renmin University of China](https://linyankai.github.io/)
## Citation
* If you find MiniCPM helpful for your work, please cite our papers: [MiniCPM1](https://arxiv.org/abs/2404.06395), [MiniCPM4](https://github.com/OpenBMB/MiniCPM/blob/main/report/MiniCPM_4_Technical_Report.pdf)
```
@article{minicpm4,
title={MiniCPM4: Ultra-Efficient LLMs on End Devices},
author={MiniCPM Team},
year={2025}
}
@inproceedings{huminicpm,
title={MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies},
author={Hu, Shengding and Tu, Yuge and Han, Xu and Cui, Ganqu and He, Chaoqun and Zhao, Weilin and Long, Xiang and Zheng, Zhi and Fang, Yewei and Huang, Yuxiang and others},
booktitle={First Conference on Language Modeling},
year={2024}
}
```
from typing import List
import argparse
import gradio as gr
import torch
from threading import Thread
from PIL import Image
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
TextIteratorStreamer
)
import warnings
warnings.filterwarnings('ignore', category=UserWarning, message='TypedStorage is deprecated')
parser = argparse.ArgumentParser()
parser.add_argument("--model_path", type=str, default="openbmb/MiniCPM-2B-dpo-fp16")
parser.add_argument("--torch_dtype", type=str, default="bfloat16", choices=["float32", "bfloat16", "float16"])
parser.add_argument("--server_name", type=str, default="127.0.0.1")
parser.add_argument("--server_port", type=int, default=7860)
args = parser.parse_args()
# init model torch dtype
torch_dtype = args.torch_dtype
if torch_dtype == "" or torch_dtype == "bfloat16":
torch_dtype = torch.bfloat16
elif torch_dtype == "float32":
torch_dtype = torch.float32
elif torch_dtype == "float16":
torch_dtype = torch.float16
else:
raise ValueError(f"Invalid torch dtype: {torch_dtype}")
# init model and tokenizer
path = args.model_path
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch_dtype, device_map="cuda:0", trust_remote_code=True)
model_architectures = model.config.architectures[0]
def check_model_v(img_file_path: str = None):
'''
    check whether the loaded model is MiniCPM-V
    Args:
        img_file_path (str): Image filepath
    Returns:
        True if the model is MiniCPM-V, else False
'''
if "MiniCPMV" in model_architectures:
return True
if isinstance(img_file_path, str):
gr.Warning('Only MiniCPMV model can support Image')
return False
if check_model_v():
model = model.to(dtype=torch.bfloat16)
# init gradio demo host and port
server_name = args.server_name
server_port = args.server_port
def hf_gen(dialog: List, top_p: float, temperature: float, repetition_penalty: float, max_dec_len: int):
"""generate model output with huggingface api
Args:
        dialog (List): chat messages, the actual model input.
top_p (float): only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
temperature (float): Strictly positive float value used to modulate the logits distribution.
max_dec_len (int): The maximum numbers of tokens to generate.
Yields:
str: real-time generation results of hf model
"""
inputs = tokenizer.apply_chat_template(dialog, tokenize=False, add_generation_prompt=False)
enc = tokenizer(inputs, return_tensors="pt").to(next(model.parameters()).device)
streamer = TextIteratorStreamer(tokenizer)
generation_kwargs = dict(
enc,
do_sample=True,
top_k=0,
top_p=top_p,
temperature=temperature,
repetition_penalty=repetition_penalty,
max_new_tokens=max_dec_len,
pad_token_id=tokenizer.eos_token_id,
streamer=streamer,
)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
answer = ""
for new_text in streamer:
answer += new_text
yield answer[4 + len(inputs):]
def hf_v_gen(dialog: List, top_p: float, temperature: float, repetition_penalty: float, max_dec_len: int,
img_file_path: str):
"""generate model output with huggingface api
Args:
        dialog (List): chat messages, the actual model input.
top_p (float): only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
temperature (float): Strictly positive float value used to modulate the logits distribution.
max_dec_len (int): The maximum numbers of tokens to generate.
img_file_path (str): Image filepath.
Yields:
str: real-time generation results of hf model
"""
assert isinstance(img_file_path, str), 'Image must not be empty'
img = Image.open(img_file_path).convert('RGB')
generation_kwargs = dict(
image=img,
msgs=dialog,
context=None,
tokenizer=tokenizer,
sampling=True,
temperature=temperature,
top_p=top_p,
repetition_penalty=repetition_penalty,
max_new_tokens=max_dec_len
)
res, context, _ = model.chat(**generation_kwargs)
return res
def generate(chat_history: List, query: str, top_p: float, temperature: float, repetition_penalty: float, max_dec_len: int,
img_file_path: str = None):
"""generate after hitting "submit" button
Args:
chat_history (List): [[q_1, a_1], [q_2, a_2], ..., [q_n, a_n]]. list that stores all QA records
query (str): query of current round
top_p (float): only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
temperature (float): strictly positive float value used to modulate the logits distribution.
max_dec_len (int): The maximum numbers of tokens to generate.
img_file_path (str): Image filepath.
Yields:
List: [[q_1, a_1], [q_2, a_2], ..., [q_n, a_n], [q_n+1, a_n+1]]. chat_history + QA of current round.
"""
assert query != "", "Input must not be empty!!!"
# apply chat template
model_input = []
for q, a in chat_history:
model_input.append({"role": "user", "content": q})
model_input.append({"role": "assistant", "content": a})
model_input.append({"role": "user", "content": query})
# yield model generation
chat_history.append([query, ""])
if check_model_v():
chat_history[-1][1] = hf_v_gen(model_input, top_p, temperature, repetition_penalty, max_dec_len, img_file_path)
yield gr.update(value=""), chat_history
return
for answer in hf_gen(model_input, top_p, temperature, repetition_penalty, max_dec_len):
chat_history[-1][1] = answer.strip("</s>")
yield gr.update(value=""), chat_history
def regenerate(chat_history: List, top_p: float, temperature: float, repetition_penalty: float, max_dec_len: int,
img_file_path: str = None):
"""re-generate the answer of last round's query
Args:
chat_history (List): [[q_1, a_1], [q_2, a_2], ..., [q_n, a_n]]. list that stores all QA records
top_p (float): only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
temperature (float): strictly positive float value used to modulate the logits distribution.
max_dec_len (int): The maximum numbers of tokens to generate.
img_file_path (str): Image filepath.
Yields:
List: [[q_1, a_1], [q_2, a_2], ..., [q_n, a_n]]. chat_history
"""
assert len(chat_history) >= 1, "History is empty. Nothing to regenerate!!"
# apply chat template
model_input = []
for q, a in chat_history[:-1]:
model_input.append({"role": "user", "content": q})
model_input.append({"role": "assistant", "content": a})
model_input.append({"role": "user", "content": chat_history[-1][0]})
# yield model generation
if check_model_v():
chat_history[-1][1] = hf_v_gen(model_input, top_p, temperature, repetition_penalty, max_dec_len, img_file_path)
yield gr.update(value=""), chat_history
return
for answer in hf_gen(model_input, top_p, temperature, repetition_penalty, max_dec_len):
chat_history[-1][1] = answer.strip("</s>")
yield gr.update(value=""), chat_history
def clear_history():
"""clear all chat history
Returns:
List: empty chat history
"""
return []
def reverse_last_round(chat_history):
"""reverse last round QA and keep the chat history before
Args:
chat_history (List): [[q_1, a_1], [q_2, a_2], ..., [q_n, a_n]]. list that stores all QA records
Returns:
List: [[q_1, a_1], [q_2, a_2], ..., [q_n-1, a_n-1]]. chat_history without last round.
"""
assert len(chat_history) >= 1, "History is empty. Nothing to reverse!!"
return chat_history[:-1]
# launch gradio demo
with gr.Blocks(theme="soft") as demo:
gr.Markdown("""# MiniCPM Gradio Demo""")
with gr.Row():
with gr.Column(scale=1):
top_p = gr.Slider(0, 1, value=0.8, step=0.1, label="top_p")
temperature = gr.Slider(0.1, 2.0, value=0.5, step=0.1, label="temperature")
repetition_penalty = gr.Slider(0.1, 2.0, value=1.1, step=0.1, label="repetition_penalty")
max_dec_len = gr.Slider(1, 1024, value=1024, step=1, label="max_dec_len")
img_file_path = gr.Image(label="upload image", type='filepath', show_label=False)
with gr.Column(scale=5):
chatbot = gr.Chatbot(bubble_full_width=False, height=400)
user_input = gr.Textbox(label="User", placeholder="Input your query here!", lines=8)
with gr.Row():
submit = gr.Button("Submit")
clear = gr.Button("Clear")
regen = gr.Button("Regenerate")
reverse = gr.Button("Reverse")
img_file_path.change(check_model_v, inputs=[img_file_path], outputs=[])
submit.click(generate, inputs=[chatbot, user_input, top_p, temperature, repetition_penalty,
max_dec_len, img_file_path], outputs=[user_input, chatbot])
regen.click(regenerate, inputs=[chatbot, top_p, temperature, repetition_penalty,
max_dec_len, img_file_path], outputs=[user_input, chatbot])
clear.click(clear_history, inputs=[], outputs=[chatbot])
reverse.click(reverse_last_round, inputs=[chatbot], outputs=[chatbot])
demo.queue()
demo.launch(server_name=server_name, server_port=server_port, show_error=True)
"""
my package: langchain_demo
langchain 0.2.6
langchain-community 0.2.1
langchain-core 0.2.19
langchain-text-splitters 0.2.0
langchainplus-sdk 0.0.20
pypdf 4.3.0
pydantic 2.8.2
pydantic_core 2.20.1
transformers 4.41.1
triton 2.3.0
trl 0.8.6
vllm 0.5.0.post1+cu122
vllm-flash-attn 2.5.9
vllm_nccl_cu12 2.18.1.0.4.0
You only need a GPU with at least 6 GB of VRAM (which is sufficient) to run smooth RAG on a consumer card.
Usage:
1. Run pull_request/rag/langchain_demo.py
2. Upload pdf/txt files (multiple files in the same directory are supported)
3. Enter your question.
Usage with very low VRAM (4 GB):
1. Quantize the model following MiniCPM/quantize/readme.md; quantizing MiniCPM-1B-sft-bf16 is recommended
2. Set cpm_model_path to the path of the quantized model
3. Make sure encode_model_device is set to cpu
"""
from langchain.document_loaders import PyPDFLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.embeddings.huggingface import HuggingFaceBgeEmbeddings
from argparse import ArgumentParser
from langchain.llms.base import LLM
from typing import Any, List, Optional
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
from langchain.prompts import PromptTemplate
from pydantic.v1 import Field
import re
import gradio as gr
parser = ArgumentParser()
# LLM settings
parser.add_argument(
"--cpm_model_path",
type=str,
default="openbmb/MiniCPM-1B-sft-bf16",
    help="Path or HuggingFace id of the MiniCPM model"
)
parser.add_argument(
"--cpm_device", type=str, default="cuda:0", choices=["auto", "cuda:0"],
    help="Device on which to run the MiniCPM model; default cuda:0"
)
parser.add_argument("--backend", type=str, default="torch", choices=["torch", "vllm"],
    help="Backend to use, torch or vllm; default torch"
)
# Embedding model settings
parser.add_argument(
"--encode_model", type=str, default="BAAI/bge-base-zh",
    help="Embedding model used for retrieval encoding; default BAAI/bge-base-zh, a local path may also be given"
)
parser.add_argument(
"--encode_model_device", type=str, default="cpu", choices=["cpu", "cuda:0"],
    help="Device on which to run the retrieval embedding model; default cpu"
)
parser.add_argument("--query_instruction", type=str, default="", help="Prefix added to the query at retrieval time")
parser.add_argument(
"--file_path", type=str, default="/root/ld/pull_request/rag/红楼梦.pdf",
    help="Path of the text file to retrieve from; ignored when running with gradio"
)
# Generation settings
parser.add_argument("--top_k", type=int, default=3)
parser.add_argument("--top_p", type=float, default=0.7)
parser.add_argument("--temperature", type=float, default=0.7)
parser.add_argument("--max_new_tokens", type=int, default=4096)
parser.add_argument("--repetition_penalty", type=float, default=1.02)
# Retriever settings
parser.add_argument("--embed_top_k", type=int, default=5, help="Number of most similar chunks to retrieve")
parser.add_argument("--chunk_size", type=int, default=256, help="Chunk size used when splitting the text")
parser.add_argument("--chunk_overlap", type=int, default=50, help="Chunk overlap used when splitting the text")
args = parser.parse_args()
def clean_text(text):
"""
    Clean the text: keep Chinese and English characters, digits, and common punctuation, and strip other special characters (e.g. PDF extraction artifacts).
    Args:
        text (str): the raw text to clean.
    Returns:
        str: the cleaned text.
    """
    # characters to keep: Chinese, English letters, digits, common punctuation, and whitespace
    pattern = r'[^\u4e00-\u9fa5A-Za-z0-9.,;!?()"\'\s,。;!?()“”‘’、:]'
    # drop every character outside the kept set
    cleaned_text = re.sub(pattern, "", text)
    # collapse redundant whitespace
    cleaned_text = re.sub(r"\s+", " ", cleaned_text)
return cleaned_text
class MiniCPM_LLM(LLM):
tokenizer: Any = Field(default=None)
model: Any = Field(default=None)
def __init__(self, model_path: str):
"""
        Wrap MiniCPM as a langchain LLM.
        Args:
            model_path (str): path of the MiniCPM model to load.
        Sets:
            self.model: the loaded MiniCPM model.
            self.tokenizer: the tokenizer of the loaded MiniCPM model (torch backend only).
"""
super().__init__()
if args.backend == "vllm":
from vllm import LLM
self.model = LLM(
model=model_path, trust_remote_code=True, enforce_eager=True
)
else:
self.tokenizer = AutoTokenizer.from_pretrained(
model_path, trust_remote_code=True
)
self.model = AutoModelForCausalLM.from_pretrained(
model_path, trust_remote_code=True, torch_dtype=torch.float16
).to(args.cpm_device)
self.model = self.model.eval()
def _call(self, prompt, stop: Optional[List[str]] = None):
"""
        The langchain LLM call.
        Args:
            prompt (str): the input prompt text
        Returns:
            responds (str): the text generated by the model for the prompt
"""
if args.backend == "torch":
inputs = self.tokenizer("<用户>{}".format(prompt), return_tensors="pt")
inputs = inputs.to(args.cpm_device)
# Generate
generate_ids = self.model.generate(
inputs.input_ids,
max_length=args.max_new_tokens,
temperature=args.temperature,
top_p=args.top_p,
repetition_penalty=args.repetition_penalty,
)
responds = self.tokenizer.batch_decode(
generate_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False,
)[0]
# responds, history = self.model.chat(self.tokenizer, prompt, temperature=args.temperature, top_p=args.top_p, repetition_penalty=1.02)
else:
from vllm import SamplingParams
params_dict = {
"n": 1,
"best_of": 1,
"presence_penalty": args.repetition_penalty,
"frequency_penalty": 0.0,
"temperature": args.temperature,
"top_p": args.top_p,
"top_k": args.top_k,
"use_beam_search": False,
"length_penalty": 1,
"early_stopping": False,
"stop": None,
"stop_token_ids": None,
"ignore_eos": False,
"max_tokens": args.max_new_tokens,
"logprobs": None,
"prompt_logprobs": None,
"skip_special_tokens": True,
}
sampling_params = SamplingParams(**params_dict)
prompt = "<用户>{}<AI>".format(prompt)
responds = self.model.generate(prompt, sampling_params)
responds = responds[0].outputs[0].text
return responds
@property
def _llm_type(self) -> str:
return "MiniCPM_LLM"
# Load PDF and TXT files
def load_documents(file_paths):
"""
    Load the text from txt and pdf files and apply simple cleaning.
    Args:
        file_paths (str or list): a file path or a list of file paths
    Returns:
        documents (list): the loaded documents
"""
files_list = []
if type(file_paths) == list:
files_list = file_paths
else:
files_list = [file_paths]
documents = []
for file_path in files_list:
if file_path.endswith(".pdf"):
loader = PyPDFLoader(file_path)
elif file_path.endswith(".txt"):
loader = TextLoader(file_path)
else:
raise ValueError("Unsupported file type")
        doc = loader.load()
        # clean every page returned by the loader, not only the first one
        for page in doc:
            page.page_content = clean_text(page.page_content)
        documents.extend(doc)
return documents
def load_models():
"""
    Load the LLM and the embedding model.
    Returns:
        llm: the MiniCPM model
        embedding_models: the embedding model
"""
llm = MiniCPM_LLM(model_path=args.cpm_model_path)
embedding_models = HuggingFaceBgeEmbeddings(
model_name=args.encode_model,
        model_kwargs={"device": args.encode_model_device},  # or 'cuda' if you have a GPU
        encode_kwargs={
            "normalize_embeddings": True,  # whether to normalize the embeddings
            "show_progress_bar": True,  # whether to show a progress bar
            "convert_to_numpy": True,  # whether to convert the output to a numpy array
            "batch_size": 8,  # batch size
},
query_instruction=args.query_instruction,
)
return llm, embedding_models
# Split and embed the documents
def embed_documents(documents, embedding_models):
"""
    Split the documents and embed them.
    Args:
        documents (list): the loaded documents
        embedding_models: the embedding model
    Returns:
        vectorstore: the vector database
"""
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=args.chunk_size, chunk_overlap=args.chunk_overlap
)
texts = text_splitter.split_documents(documents)
vectorstore = Chroma.from_documents(texts, embedding_models)
return vectorstore
def create_prompt_template():
"""
    Create the custom prompt template.
    Returns:
        PROMPT: the custom prompt template
"""
custom_prompt_template = """请使用以下内容片段对问题进行最终回复,如果内容中没有提到的信息不要瞎猜,严格按照内容进行回答,不要编造答案,如果无法从内容中找到答案,请回答“片段中未提及,无法回答”,不要编造答案。
Context:
{context}
Question: {question}
FINAL ANSWER:"""
PROMPT = PromptTemplate(
template=custom_prompt_template, input_variables=["context", "question"]
)
return PROMPT
# Build the RAG chain
def create_rag_chain(llm, prompt):
# qa=load_qa_with_sources_chain(llm, chain_type="stuff")
qa = prompt | llm
return qa
def analysis_links(docs):
"""
    Build the citation string for the retrieved documents.
    Args:
        docs (list): the retrieved documents
    Returns:
        links_string: the citation string, formatted as "docname page content"
    Example:
        >>> docs = [
        ...     {'source': 'Document1', 'page': 1, 'content': 'This is the first document.'},
        ...     {'source': 'Document2', 'page': 2, 'content': 'This is the second document.'}
        ... ]
        >>> analysis_links(docs)
        'Document1 page:1 \n\nThis is the first document.\nDocument2 page:2 \n\nThis is the second document.'
"""
links_string = ""
for i in docs:
i.metadata["source"] = i.metadata["source"].split("/")[-1]
i.metadata["content"] = i.page_content
links_string += f"{i.metadata['source']} page:{i.metadata['page']}\n\n{i.metadata['content']}\n\n"
return links_string
# Main function
def main():
    # load documents
    documents = load_documents(args.file_path)
    # embed documents
    vectorstore = embed_documents(documents, embedding_models)
    # build the custom prompt template
    Prompt = create_prompt_template()
    # build the RAG chain
    rag_chain = create_rag_chain(llm, Prompt)
    # query loop
    while True:
        query = input("Enter your query (type 'exit' to quit): ")
if query == "exit":
break
docs = vectorstore.similarity_search(query, k=args.embed_top_k)
all_links = analysis_links(docs)
final_result = rag_chain.invoke({"context": all_links, "question": query})
# result = rag_chain({"input_documents": docs, "question": query}, return_only_outputs=True)
print(final_result)
exist_file = None
def process_query(file, query):
global exist_file, documents, vectorstore, rag_chain
if file != exist_file:
        # load documents
        documents = load_documents(file if isinstance(file, list) else file.name)
        # embed documents
        vectorstore = embed_documents(documents, embedding_models)
        # build the custom prompt template
        Prompt = create_prompt_template()
        # build the RAG chain
        rag_chain = create_rag_chain(llm, Prompt)
        exist_file = file
    # retrieve and generate the final answer
docs = vectorstore.similarity_search(query, k=args.embed_top_k)
all_links = analysis_links(docs)
final_result = rag_chain.invoke({"context": all_links, "question": query})
# result = rag_chain({"input_documents": docs, "question": query}, return_only_outputs=False)
print(final_result)
final_result = final_result.split("FINAL ANSWER:")[-1]
return final_result, all_links
if __name__ == "__main__":
llm, embedding_models = load_models()
    # if you don't need the web UI, just call main() directly
#main()
with gr.Blocks(css="#textbox { height: 380%; }") as demo:
with gr.Row():
with gr.Column():
link_content = gr.Textbox(label="link_content", lines=30, max_lines=40)
with gr.Column():
file_input = gr.File(label="upload_files", file_count="multiple")
final_anser = gr.Textbox(label="final_anser", lines=5, max_lines=10)
query_input = gr.Textbox(
label="User",
placeholder="Input your query here!",
lines=5,
max_lines=10,
)
submit_button = gr.Button("Submit")
submit_button.click(
fn=process_query,
inputs=[file_input, query_input],
outputs=[final_anser, link_content],
)
demo.launch(share=True, show_error=True)
"""
Fast MiniCPM inference with MLX
If you run inference on a Mac, you can use MLX directly.
Since MiniCPM does not support mlx format conversion yet, you can download the model converted by the MLX community: [MiniCPM-2B-sft-bf16-llama-format-mlx](https://huggingface.co/mlx-community/MiniCPM-2B-sft-bf16-llama-format-mlx).
Install the corresponding dependency:
```bash
pip install mlx-lm
```
A simple inference command for running MiniCPM-2B on a Mac:
```bash
python -m mlx_lm.generate --model mlx-community/MiniCPM-2B-sft-bf16-llama-format-mlx --prompt "hello, tell me a joke." --trust-remote-code
```
"""
from mlx_lm import load, generate
from jinja2 import Template
def chat_with_model():
model, tokenizer = load("mlx-community/MiniCPM-2B-sft-bf16-llama-format-mlx")
print("Model loaded. Start chatting! (Type 'quit' to stop)")
messages = []
chat_template = Template(
"{% for message in messages %}{% if message['role'] == 'user' %}{{'<用户>' + message['content'].strip() + '<AI>'}}{% else %}{{message['content'].strip()}}{% endif %}{% endfor %}")
while True:
user_input = input("You: ")
if user_input.lower() == 'quit':
break
messages.append({"role": "user", "content": user_input})
response = generate(model, tokenizer, prompt=chat_template.render(messages=messages), verbose=True)
print("Model:", response)
messages.append({"role": "ai", "content": response})
chat_with_model()
from typing import List
import argparse
import gradio as gr
from vllm import LLM, SamplingParams
import torch
from transformers import AutoTokenizer
parser = argparse.ArgumentParser()
parser.add_argument("--model_path", type=str, default="openbmb/MiniCPM-1B-sft-bf16")
parser.add_argument("--torch_dtype", type=str, default="bfloat16", choices=["float32", "bfloat16", "float16"])
parser.add_argument("--server_name", type=str, default="127.0.0.1")
parser.add_argument("--server_port", type=int, default=7860)
parser.add_argument("--max_tokens", type=int, default=2048)
# for MiniCPM-1B and MiniCPM-2B model, max_tokens should be set to 2048
args = parser.parse_args()
# init model torch dtype
torch_dtype = args.torch_dtype
if torch_dtype == "" or torch_dtype == "bfloat16":
torch_dtype = torch.bfloat16
elif torch_dtype == "float32":
torch_dtype = torch.float32
elif torch_dtype == "float16":
torch_dtype = torch.float16
else:
raise ValueError(f"Invalid torch dtype: {torch_dtype}")
# init model and tokenizer
path = args.model_path
llm = LLM(
model=path,
tensor_parallel_size=1,
dtype=torch_dtype,
trust_remote_code=True,
gpu_memory_utilization=0.9,
max_model_len=args.max_tokens
)
tokenizer = AutoTokenizer.from_pretrained(args.model_path, trust_remote_code=True)
server_name = args.server_name
server_port = args.server_port
def vllm_gen(dialog: List, top_p: float, temperature: float, max_dec_len: int):
    """generate model output with the vLLM api
    Args:
        dialog (List): chat messages, the actual model input.
        top_p (float): only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
        temperature (float): Strictly positive float value used to modulate the logits distribution.
        max_dec_len (int): The maximum number of tokens to generate.
    Returns:
        str: generation result of the vLLM model
"""
assert len(dialog) % 2 == 1
prompt = tokenizer.apply_chat_template(dialog, tokenize=False, add_generation_prompt=False)
token_ids = tokenizer.convert_tokens_to_ids(["<|im_end|>"])
params_dict = {
"n": 1,
"best_of": 1,
"presence_penalty": 1.0,
"frequency_penalty": 0.0,
"temperature": temperature,
"top_p": top_p,
"top_k": -1,
"use_beam_search": False,
"length_penalty": 1,
"early_stopping": False,
"stop": "<|im_end|>",
"stop_token_ids": token_ids,
"ignore_eos": False,
"max_tokens": max_dec_len,
"logprobs": None,
"prompt_logprobs": None,
"skip_special_tokens": True,
}
sampling_params = SamplingParams(**params_dict)
outputs = llm.generate(prompts=prompt, sampling_params=sampling_params)[0]
generated_text = outputs.outputs[0].text
return generated_text
def generate(chat_history: List, query: str, top_p: float, temperature: float, max_dec_len: int):
"""generate after hitting "submit" button
Args:
chat_history (List): [[q_1, a_1], [q_2, a_2], ..., [q_n, a_n]]. list that stores all QA records
query (str): query of current round
top_p (float): only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
temperature (float): strictly positive float value used to modulate the logits distribution.
max_dec_len (int): The maximum numbers of tokens to generate.
Yields:
List: [[q_1, a_1], [q_2, a_2], ..., [q_n, a_n], [q_n+1, a_n+1]]. chat_history + QA of current round.
"""
assert query != "", "Input must not be empty!!!"
# apply chat template
model_input = []
for q, a in chat_history:
model_input.append({"role": "user", "content": q})
model_input.append({"role": "assistant", "content": a})
model_input.append({"role": "user", "content": query})
# yield model generation
model_output = vllm_gen(model_input, top_p, temperature, max_dec_len)
chat_history.append([query, model_output])
return gr.update(value=""), chat_history
def regenerate(chat_history: List, top_p: float, temperature: float, max_dec_len: int):
"""re-generate the answer of last round's query
Args:
chat_history (List): [[q_1, a_1], [q_2, a_2], ..., [q_n, a_n]]. list that stores all QA records
top_p (float): only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
temperature (float): strictly positive float value used to modulate the logits distribution.
max_dec_len (int): The maximum numbers of tokens to generate.
Yields:
List: [[q_1, a_1], [q_2, a_2], ..., [q_n, a_n]]. chat_history
"""
assert len(chat_history) >= 1, "History is empty. Nothing to regenerate!!"
# apply chat template
model_input = []
for q, a in chat_history[:-1]:
model_input.append({"role": "user", "content": q})
model_input.append({"role": "assistant", "content": a})
model_input.append({"role": "user", "content": chat_history[-1][0]})
# yield model generation
model_output = vllm_gen(model_input, top_p, temperature, max_dec_len)
chat_history[-1][1] = model_output
return gr.update(value=""), chat_history
def clear_history():
"""clear all chat history
Returns:
List: empty chat history
"""
return []
def reverse_last_round(chat_history):
"""reverse last round QA and keep the chat history before
Args:
chat_history (List): [[q_1, a_1], [q_2, a_2], ..., [q_n, a_n]]. list that stores all QA records
Returns:
List: [[q_1, a_1], [q_2, a_2], ..., [q_n-1, a_n-1]]. chat_history without last round.
"""
assert len(chat_history) >= 1, "History is empty. Nothing to reverse!!"
return chat_history[:-1]
# launch gradio demo
with gr.Blocks(theme="soft") as demo:
gr.Markdown("""# MiniCPM Gradio Demo""")
with gr.Row():
with gr.Column(scale=1):
top_p = gr.Slider(0, 1, value=0.8, step=0.1, label="top_p")
temperature = gr.Slider(0.1, 2.0, value=0.5, step=0.1, label="temperature")
max_dec_len = gr.Slider(1, args.max_tokens, value=args.max_tokens, step=1, label="max_tokens")
with gr.Column(scale=5):
chatbot = gr.Chatbot(bubble_full_width=False, height=400)
user_input = gr.Textbox(label="User", placeholder="Input your query here!", lines=8)
with gr.Row():
submit = gr.Button("Submit")
clear = gr.Button("Clear")
regen = gr.Button("Regenerate")
reverse = gr.Button("Reverse")
submit.click(generate, inputs=[chatbot, user_input, top_p, temperature, max_dec_len], outputs=[user_input, chatbot])
regen.click(regenerate, inputs=[chatbot, top_p, temperature, max_dec_len], outputs=[user_input, chatbot])
clear.click(clear_history, inputs=[], outputs=[chatbot])
reverse.click(reverse_last_round, inputs=[chatbot], outputs=[chatbot])
demo.queue()
demo.launch(server_name=server_name, server_port=server_port, show_error=True)