<div align="center">
  <img src="docs/en/_static/image/logo.svg" width="500px"/>
  <br />
  <br />

[![docs](https://readthedocs.org/projects/opencompass/badge)](https://opencompass.readthedocs.io/en)
[![license](https://img.shields.io/github/license/InternLM/opencompass.svg)](https://github.com/open-compass/opencompass/blob/main/LICENSE)

<!-- [![PyPI](https://badge.fury.io/py/opencompass.svg)](https://pypi.org/project/opencompass/) -->

[🌐Website](https://opencompass.org.cn/) |
[📘Documentation](https://opencompass.readthedocs.io/en/latest/) |
[🛠️Installation](https://opencompass.readthedocs.io/en/latest/get_started/installation.html) |
[🤔Reporting Issues](https://github.com/open-compass/opencompass/issues/new/choose)

English | [简体中文](README_zh-CN.md)

</div>

<p align="center">
    👋 join us on <a href="https://discord.gg/KKwfEbFj7U" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=opencompass" target="_blank">WeChat</a>
</p>

## 🧭 Welcome to **OpenCompass**!

Just like a compass guides us on our journey, OpenCompass will guide you through the complex landscape of evaluating large language models. With its powerful algorithms and intuitive interface, OpenCompass makes it easy to assess the quality and effectiveness of your NLP models.

> **🔥 Attention**<br />
> We have launched the OpenCompass Collaboration project, and we welcome contributions of diverse evaluation benchmarks to OpenCompass!
> See this [Issue](https://github.com/open-compass/opencompass/issues/248) for more information.
> Let's work together to build a more powerful OpenCompass toolkit!

## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>

- **\[2023.09.26\]** We update the leaderboard with [Qwen](https://github.com/QwenLM/Qwen), one of the best-performing open-source models currently available. See our [homepage](https://opencompass.org.cn) for more details. 🔥🔥🔥
- **\[2023.09.20\]** We update the leaderboard with [InternLM-20B](https://github.com/InternLM/InternLM). See our [homepage](https://opencompass.org.cn) for more details. 🔥🔥🔥
- **\[2023.09.19\]** We update the leaderboard with WeMix-LLaMA2-70B/Phi-1.5-1.3B. See our [homepage](https://opencompass.org.cn) for more details.
- **\[2023.09.18\]** We have released [long context evaluation guidance](docs/en/advanced_guides/longeval.md).
- **\[2023.09.08\]** We update the leaderboard with Baichuan-2/Tigerbot-2/Vicuna-v1.5. See our [homepage](https://opencompass.org.cn) for more details.
- **\[2023.09.06\]** The [**Baichuan2**](https://github.com/baichuan-inc/Baichuan2) team adopts OpenCompass to evaluate their models systematically. We deeply appreciate the community's dedication to transparency and reproducibility in LLM evaluation.
- **\[2023.09.02\]** We have supported the evaluation of [Qwen-VL](https://github.com/QwenLM/Qwen-VL) in OpenCompass.
- **\[2023.08.25\]** The [**TigerBot**](https://github.com/TigerResearch/TigerBot) team adopts OpenCompass to evaluate their models systematically. We deeply appreciate the community's dedication to transparency and reproducibility in LLM evaluation.
- **\[2023.08.21\]** [**Lagent**](https://github.com/InternLM/lagent) has been released, a lightweight framework for building LLM-based agents. We are working with the Lagent team to support the evaluation of general tool-use capability, so stay tuned!

> [More](docs/en/notes/news.md)

## ✨ Introduction

![image](https://github.com/open-compass/opencompass/assets/22607038/f45fe125-4aed-4f8c-8fe8-df4efb41a8ea)

OpenCompass is a one-stop platform for large model evaluation, aiming to provide a fair, open, and reproducible benchmark for large model evaluation. Its main features include:

- **Comprehensive support for models and datasets**: Out-of-the-box support for 20+ HuggingFace and API models, and an evaluation scheme covering 70+ datasets with about 400,000 questions, comprehensively evaluating model capabilities across five dimensions.

- **Efficient distributed evaluation**: One line command to implement task division and distributed evaluation, completing the full evaluation of billion-scale models in just a few hours.

- **Diversified evaluation paradigms**: Support for zero-shot, few-shot, and chain-of-thought evaluations, combined with standard or dialogue-type prompt templates, to easily elicit the maximum performance of various models.

- **Modular design with high extensibility**: Want to add new models or datasets, customize an advanced task division strategy, or even support a new cluster management system? Everything about OpenCompass can be easily extended!

- **Experiment management and reporting mechanism**: Use config files to fully record each experiment, with support for real-time reporting of results; a minimal config sketch is shown below.
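
As a rough illustration of this config-driven workflow, an experiment can be composed entirely from predefined configs. The sketch below is illustrative only: the exact dataset and model config paths are assumptions and may differ from the files shipped with your OpenCompass version, so check the `configs/` directory in the repository.

```python
# Illustrative sketch only: the imported config paths are assumptions;
# see configs/ in the repository for the files available in your version.
from mmengine.config import read_base

with read_base():
    # Pull in predefined dataset and model definitions
    from .datasets.mmlu.mmlu_ppl import mmlu_datasets
    from .models.hf_llama_7b import models

# The evaluated datasets are whatever the composed config exposes
datasets = [*mmlu_datasets]
```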

## 📊 Leaderboard

We provide the [OpenCompass Leaderboard](https://opencompass.org.cn/rank) for the community to rank all public models and API models. If you would like to join the evaluation, please provide the model repository URL or a standard API interface to the email address `opencompass@pjlab.org.cn`.

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🛠️ Installation

Below are the steps for quick installation and dataset preparation.

```bash
conda create --name opencompass python=3.10 pytorch torchvision pytorch-cuda -c nvidia -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
# Download dataset to data/ folder
wget https://github.com/open-compass/opencompass/releases/download/0.1.1/OpenCompassData.zip
unzip OpenCompassData.zip
```

Some third-party features, like HumanEval and Llama, may require additional steps to work properly; for details, please refer to the [Installation Guide](https://opencompass.readthedocs.io/en/latest/get_started/installation.html).

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🏗️ Evaluation

After ensuring that OpenCompass is installed correctly according to the above steps and the datasets are prepared, you can evaluate the performance of the LLaMA-7b model on the MMLU and C-Eval datasets using the following command:

```bash
python run.py --models hf_llama_7b --datasets mmlu_ppl ceval_ppl
```

OpenCompass has predefined configurations for many models and datasets. You can list all available model and dataset configurations using the [tools](./docs/en/tools.md#list-configs).

```bash
# List all configurations
python tools/list_configs.py
# List all configurations related to llama and mmlu
python tools/list_configs.py llama mmlu
```

You can also evaluate other HuggingFace models via the command line. Taking LLaMA-7b as an example:

```bash
python run.py --datasets ceval_ppl mmlu_ppl \
--hf-path huggyllama/llama-7b \  # HuggingFace model path
--model-kwargs device_map='auto' \  # Arguments for model construction
--tokenizer-kwargs padding_side='left' truncation='left' use_fast=False \  # Arguments for tokenizer construction
--max-out-len 100 \  # Maximum number of tokens generated
--max-seq-len 2048 \  # Maximum sequence length the model can accept
--batch-size 8 \  # Batch size
--no-batch-padding \  # Disable batch padding and infer through a for loop to avoid performance loss
--num-gpus 1  # Minimum number of required GPUs
```

> **Note**<br />
> To run the command above, you will need to remove the comments starting from `# ` first.

Through the command line or configuration files, OpenCompass also supports evaluating APIs or custom models, as well as more diversified evaluation strategies. Please read the [Quick Start](https://opencompass.readthedocs.io/en/latest/get_started/quick_start.html) to learn how to run an evaluation task.
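
For example, an API model can be described in a config file. The sketch below is illustrative only: the exact model class and field names (`abbr`, `key`, and so on) are assumptions based on the configs shipped with OpenCompass and may differ between versions, so consult the `configs/` directory and the Quick Start guide for the authoritative interface.

```python
# Illustrative sketch of an API-model entry in an OpenCompass config file.
# Field names are assumptions; consult the shipped configs for exact usage.
from opencompass.models import OpenAI

models = [
    dict(
        abbr='gpt-3.5-turbo',   # name used in result tables
        type=OpenAI,            # API-backed model wrapper
        path='gpt-3.5-turbo',   # model identifier passed to the API
        key='ENV',              # read the API key from the environment
        max_out_len=2048,
        max_seq_len=2048,
        batch_size=8,
    ),
]
```

Such a config would then be launched like any other, e.g. `python run.py <your_config>.py`.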

<p align="right"><a href="#top">🔝Back to top</a></p>

## 📖 Dataset Support

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Language</b>
      </td>
      <td>
        <b>Knowledge</b>
      </td>
      <td>
        <b>Reasoning</b>
      </td>
      <td>
        <b>Examination</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
<details open>
<summary><b>Word Definition</b></summary>

- WiC
- SummEdits

</details>

<details open>
<summary><b>Idiom Learning</b></summary>

- CHID

</details>

<details open>
<summary><b>Semantic Similarity</b></summary>

- AFQMC
- BUSTM

</details>

<details open>
<summary><b>Coreference Resolution</b></summary>

- CLUEWSC
- WSC
- WinoGrande

</details>

<details open>
<summary><b>Translation</b></summary>

- Flores
- IWSLT2017

</details>

<details open>
<summary><b>Multi-language Question Answering</b></summary>

- TyDi-QA
- XCOPA

</details>

<details open>
<summary><b>Multi-language Summary</b></summary>

- XLSum

</details>
      </td>
      <td>
<details open>
<summary><b>Knowledge Question Answering</b></summary>

- BoolQ
- CommonSenseQA
- NaturalQuestions
- TriviaQA

</details>
      </td>
      <td>
<details open>
<summary><b>Textual Entailment</b></summary>

- CMNLI
- OCNLI
- OCNLI_FC
- AX-b
- AX-g
- CB
- RTE
- ANLI

</details>

<details open>
<summary><b>Commonsense Reasoning</b></summary>

- StoryCloze
- COPA
- ReCoRD
- HellaSwag
- PIQA
- SIQA

</details>

<details open>
<summary><b>Mathematical Reasoning</b></summary>

- MATH
- GSM8K

</details>

<details open>
<summary><b>Theorem Application</b></summary>

- TheoremQA
- StrategyQA
- SciBench

</details>

<details open>
<summary><b>Comprehensive Reasoning</b></summary>

- BBH

</details>
      </td>
      <td>
<details open>
<summary><b>Junior High, High School, University, Professional Examinations</b></summary>

- C-Eval
- AGIEval
- MMLU
- GAOKAO-Bench
- CMMLU
- ARC
- Xiezhi

</details>

<details open>
<summary><b>Medical Examinations</b></summary>

- CMB

</details>
      </td>
    </tr>
</td>
    </tr>
  </tbody>
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Understanding</b>
      </td>
      <td>
        <b>Long Context</b>
      </td>
      <td>
        <b>Safety</b>
      </td>
      <td>
        <b>Code</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
<details open>
<summary><b>Reading Comprehension</b></summary>

- C3
- CMRC
- DRCD
- MultiRC
- RACE
- DROP
- OpenBookQA
- SQuAD2.0

</details>

<details open>
<summary><b>Content Summary</b></summary>

- CSL
- LCSTS
- XSum
- SummScreen

</details>

<details open>
<summary><b>Content Analysis</b></summary>

- EPRSTMT
- LAMBADA
- TNEWS

</details>
      </td>
      <td>
<details open>
<summary><b>Long Context Understanding</b></summary>

- LEval
- LongBench
- GovReports
- NarrativeQA
- Qasper

</details>
      </td>
      <td>
<details open>
<summary><b>Safety</b></summary>

- CivilComments
- CrowsPairs
- CValues
- JigsawMultilingual
- TruthfulQA

</details>
<details open>
<summary><b>Robustness</b></summary>

- AdvGLUE

</details>
      </td>
      <td>
<details open>
<summary><b>Code</b></summary>

- HumanEval
- HumanEvalX
- MBPP
- APPs
- DS1000

</details>
      </td>
    </tr>
</td>
    </tr>
  </tbody>
</table>

<p align="right"><a href="#top">🔝Back to top</a></p>

## 📖 Model Support

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Open-source Models</b>
      </td>
      <td>
        <b>API Models</b>
      </td>
      <!-- <td>
        <b>Custom Models</b>
      </td> -->
    </tr>
    <tr valign="top">
      <td>

- InternLM
- LLaMA
- Vicuna
- Alpaca
- Baichuan
- WizardLM
- ChatGLM2
- Falcon
- TigerBot
- Qwen
- ...

</td>
<td>

- OpenAI
- Claude
- PaLM (coming soon)
- ……

</td>

</tr>
  </tbody>
</table>

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🔜 Roadmap

- [ ] Subjective Evaluation
  - [ ] Release CompassArena
  - [ ] Subjective evaluation datasets
- [ ] Long-context
  - [ ] Long-context evaluation with extensive datasets
  - [ ] Long-context leaderboard
- [ ] Coding
  - [ ] Coding evaluation leaderboard
  - [ ] Non-Python language evaluation service
- [ ] Agent
  - [ ] Support various agent frameworks
  - [ ] Evaluation of LLMs' tool-use capability
- [ ] Robustness
  - [ ] Support various attack methods

## 👷‍♂️ Contributing

We appreciate all contributions to improve OpenCompass. Please refer to the [contributing guideline](https://opencompass.readthedocs.io/en/latest/notes/contribution_guide.html) for best practices.

## 🤝 Acknowledgements

Some code in this project is cited and modified from [OpenICL](https://github.com/Shark-NLP/OpenICL).

Some datasets and prompt implementations are modified from [chain-of-thought-hub](https://github.com/FranxYao/chain-of-thought-hub) and [instruct-eval](https://github.com/declare-lab/instruct-eval).

## 🖊️ Citation

```bibtex
@misc{2023opencompass,
    title={OpenCompass: A Universal Evaluation Platform for Foundation Models},
    author={OpenCompass Contributors},
    howpublished = {\url{https://github.com/open-compass/opencompass}},
    year={2023}
}
```

<p align="right"><a href="#top">🔝Back to top</a></p>