<div align="center">
  <img src="docs/en/_static/image/logo.svg" width="500px"/>
  <br />
  <br />

[![][github-release-shield]][github-release-link]
[![][github-releasedate-shield]][github-releasedate-link]
[![][github-contributors-shield]][github-contributors-link]<br>
[![][github-forks-shield]][github-forks-link]
[![][github-stars-shield]][github-stars-link]
[![][github-issues-shield]][github-issues-link]
[![][github-license-shield]][github-license-link]

<!-- [![PyPI](https://badge.fury.io/py/opencompass.svg)](https://pypi.org/project/opencompass/) -->

[🌐Website](https://opencompass.org.cn/) |
[📖CompassHub](https://hub.opencompass.org.cn/home) |
[📊CompassRank](https://rank.opencompass.org.cn/home) |
[📘Documentation](https://opencompass.readthedocs.io/en/latest/) |
[🛠️Installation](https://opencompass.readthedocs.io/en/latest/get_started/installation.html) |
[🤔Reporting Issues](https://github.com/open-compass/opencompass/issues/new/choose)

English | [简体中文](README_zh-CN.md)

[![][github-trending-shield]][github-trending-url]

</div>

<p align="center">
    👋 join us on <a href="https://discord.gg/KKwfEbFj7U" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=opencompass" target="_blank">WeChat</a>
</p>

> \[!IMPORTANT\]
>
> **Star us** to receive all release notifications from GitHub without any delay ⭐️

## 📣 OpenCompass 2.0

We are thrilled to introduce OpenCompass 2.0, an advanced suite featuring three key components: [CompassKit](https://github.com/open-compass), [CompassHub](https://hub.opencompass.org.cn/home), and [CompassRank](https://rank.opencompass.org.cn/home).
![oc20](https://github.com/tonysy/opencompass/assets/7881589/90dbe1c0-c323-470a-991e-2b37ab5350b2)

**CompassRank** has been significantly enhanced into a leaderboard that now incorporates both open-source and proprietary benchmarks. This upgrade allows for a more comprehensive evaluation of models across the industry.

**CompassHub** presents a pioneering benchmark browser interface, designed to simplify and expedite the exploration and utilization of an extensive array of benchmarks for researchers and practitioners alike. To enhance the visibility of your own benchmark within the community, we warmly invite you to contribute it to CompassHub. You may initiate the submission process by clicking [here](https://hub.opencompass.org.cn/dataset-submit).

**CompassKit** is a powerful collection of evaluation toolkits specifically tailored for Large Language Models and Large Vision-language Models. It provides an extensive set of tools to assess and measure the performance of these complex models effectively. You are welcome to try our toolkits in your research and products.

<details>
  <summary><kbd>Star History</kbd></summary>
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=open-compass%2Fopencompass&theme=dark&type=Date">
    <img width="100%" src="https://api.star-history.com/svg?repos=open-compass%2Fopencompass&type=Date">
  </picture>
</details>

## 🧭 Welcome to **OpenCompass**!

Just like a compass guides us on our journey, OpenCompass will guide you through the complex landscape of evaluating large language models. With its powerful algorithms and intuitive interface, OpenCompass makes it easy to assess the quality and effectiveness of your NLP models.

🚩🚩🚩 Explore opportunities at OpenCompass! We're currently **hiring full-time researchers/engineers and interns**. If you're passionate about LLMs and OpenCompass, don't hesitate to reach out to us via [email](mailto:zhangsongyang@pjlab.org.cn). We'd love to hear from you!

🔥🔥🔥 We are delighted to announce that **OpenCompass has been recommended by Meta AI**; see the [Get Started](https://ai.meta.com/llama/get-started/#validation) page of Llama for more information.

> **Attention**<br />
> We have launched the OpenCompass Collaboration project, and we welcome contributions of diverse evaluation benchmarks to OpenCompass!
> Click [Issue](https://github.com/open-compass/opencompass/issues/248) for more information.
> Let's work together to build a more powerful OpenCompass toolkit!

## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>

73
- **\[2024.04.29\]** We report the performance of several well-known LLMs on common benchmarks; see the [documentation](https://opencompass.readthedocs.io/en/latest/user_guides/corebench.html) for more information! 🔥🔥🔥.
- **\[2024.04.26\]** We have deprecated the multi-modality evaluation function in OpenCompass; the related implementation has moved to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), welcome to use it! 🔥🔥🔥.
- **\[2024.04.26\]** We support the evaluation of [ArenaHard](configs/eval_subjective_arena_hard.py), welcome to try! 🔥🔥🔥.
- **\[2024.04.22\]** We support the evaluation of [LLaMA3](configs/models/hf_llama/hf_llama3_8b.py) and [LLaMA3-Instruct](configs/models/hf_llama/hf_llama3_8b_instruct.py), welcome to try! 🔥🔥🔥
- **\[2024.02.29\]** We support MT-Bench, AlpacaEval, and AlignBench; more information can be found [here](https://opencompass.readthedocs.io/en/latest/advanced_guides/subjective_evaluation.html)
- **\[2024.01.30\]** We release OpenCompass 2.0. Click [CompassKit](https://github.com/open-compass), [CompassHub](https://hub.opencompass.org.cn/home), and [CompassRank](https://rank.opencompass.org.cn/home) for more information!

> [More](docs/en/notes/news.md)

## ✨ Introduction

![image](https://github.com/open-compass/opencompass/assets/22607038/f45fe125-4aed-4f8c-8fe8-df4efb41a8ea)

OpenCompass is a one-stop platform for large model evaluation, aiming to provide a fair, open, and reproducible benchmark for large model evaluation. Its main features include:

- **Comprehensive support for models and datasets**: Pre-support for 20+ HuggingFace and API models, a model evaluation scheme of 70+ datasets with about 400,000 questions, comprehensively evaluating the capabilities of the models in five dimensions.

- **Efficient distributed evaluation**: One line command to implement task division and distributed evaluation, completing the full evaluation of billion-scale models in just a few hours.

- **Diversified evaluation paradigms**: Support for zero-shot, few-shot, and chain-of-thought evaluations, combined with standard or dialogue-type prompt templates, to easily elicit the maximum performance of various models.

- **Modular design with high extensibility**: Want to add new models or datasets, customize an advanced task division strategy, or even support a new cluster management system? Everything about OpenCompass can be easily expanded!

- **Experiment management and reporting mechanism**: Use config files to fully record each experiment, and support real-time reporting of results.

## 📊 Leaderboard

We provide [OpenCompass Leaderboard](https://rank.opencompass.org.cn/home) for the community to rank all public models and API models. If you would like to join the evaluation, please provide the model repository URL or a standard API interface to the email address `opencompass@pjlab.org.cn`.

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🛠️ Installation

Below are the steps for quick installation and dataset preparation.

### 💻 Environment Setup

#### Open-source Models with GPU

```bash
conda create --name opencompass python=3.10 pytorch torchvision pytorch-cuda -c nvidia -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
```

#### API Models with CPU-only

```bash
conda create -n opencompass python=3.10 pytorch torchvision torchaudio cpuonly -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
# For API models, also install the API requirements if needed:
# pip install -r requirements/api.txt
```

### 📂 Data Preparation

```bash
# Download dataset to data/ folder
wget https://github.com/open-compass/opencompass/releases/download/0.2.2.rc1/OpenCompassData-core-20240207.zip
unzip OpenCompassData-core-20240207.zip
```

Some third-party features, such as HumanEval and Llama, may require additional steps to work properly; for detailed steps, please refer to the [Installation Guide](https://opencompass.readthedocs.io/en/latest/get_started/installation.html).

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🏗️ ️Evaluation

After ensuring that OpenCompass is installed correctly according to the above steps and the datasets are prepared, you can evaluate the performance of the LLaMA-7b model on the MMLU and C-Eval datasets using the following command:

```bash
python run.py --models hf_llama_7b --datasets mmlu_ppl ceval_ppl
```

OpenCompass has predefined configurations for many models and datasets. You can list all available model and dataset configurations using the [tools](./docs/en/tools.md#list-configs).

```bash
# List all configurations
python tools/list_configs.py
# List all configurations related to llama and mmlu
python tools/list_configs.py llama mmlu
```

You can also evaluate other HuggingFace models via command line. Taking LLaMA-7b as an example:

```bash
python run.py --datasets ceval_ppl mmlu_ppl \
--hf-path huggyllama/llama-7b \  # HuggingFace model path
--model-kwargs device_map='auto' \  # Arguments for model construction
--tokenizer-kwargs padding_side='left' truncation='left' use_fast=False \  # Arguments for tokenizer construction
--max-out-len 100 \  # Maximum number of tokens generated
--max-seq-len 2048 \  # Maximum sequence length the model can accept
--batch-size 8 \  # Batch size
--no-batch-padding \  # Disable batch padding and infer through a for loop to avoid performance loss
--num-gpus 1  # Minimum number of required GPUs
```

> \[!TIP\]
>
> To run the command above, you will need to remove the comments starting from `# ` first.
> Configurations with `_ppl` are typically designed for base models.
> Configurations with `_gen` can be used for both base models and chat models.
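
Since the inline `# ` comments are not valid shell syntax, the annotated command above can be restated comment-free and run directly (same flags and values as in the annotated example):

```bash
python run.py --datasets ceval_ppl mmlu_ppl \
    --hf-path huggyllama/llama-7b \
    --model-kwargs device_map='auto' \
    --tokenizer-kwargs padding_side='left' truncation='left' use_fast=False \
    --max-out-len 100 \
    --max-seq-len 2048 \
    --batch-size 8 \
    --no-batch-padding \
    --num-gpus 1
```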

Through the command line or configuration files, OpenCompass also supports evaluating APIs or custom models, as well as more diversified evaluation strategies. Please read the [Quick Start](https://opencompass.readthedocs.io/en/latest/get_started/quick_start.html) to learn how to run an evaluation task.
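
As a sketch of the configuration-file route, an evaluation config typically pulls in predefined model and dataset configs via MMEngine's `read_base`. The module paths below are illustrative assumptions; verify the actual names against the `configs/` directory (e.g. with `python tools/list_configs.py`):

```python
# eval_demo.py -- hypothetical config filename (a sketch, not a shipped file)
from mmengine.config import read_base

with read_base():
    # Reuse dataset and model configs shipped with OpenCompass;
    # these relative import paths are assumptions and may differ per version.
    from .datasets.ceval.ceval_ppl import ceval_datasets
    from .models.hf_llama.hf_llama_7b import models

datasets = [*ceval_datasets]
```

Such a file would then be run with `python run.py <path-to-config>`.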

<p align="right"><a href="#top">🔝Back to top</a></p>

## 📖 Dataset Support

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Language</b>
      </td>
      <td>
        <b>Knowledge</b>
      </td>
      <td>
        <b>Reasoning</b>
      </td>
      <td>
        <b>Examination</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
<details open>
<summary><b>Word Definition</b></summary>

- WiC
- SummEdits

</details>

<details open>
<summary><b>Idiom Learning</b></summary>

- CHID

</details>

<details open>
<summary><b>Semantic Similarity</b></summary>

- AFQMC
- BUSTM

</details>

<details open>
<summary><b>Coreference Resolution</b></summary>

- CLUEWSC
- WSC
- WinoGrande

</details>

<details open>
<summary><b>Translation</b></summary>

- Flores
- IWSLT2017

</details>

<details open>
<summary><b>Multi-language Question Answering</b></summary>

- TyDi-QA
- XCOPA

</details>

<details open>
<summary><b>Multi-language Summary</b></summary>

- XLSum

</details>
      </td>
      <td>
<details open>
<summary><b>Knowledge Question Answering</b></summary>

- BoolQ
- CommonSenseQA
- NaturalQuestions
- TriviaQA

</details>
      </td>
      <td>
<details open>
<summary><b>Textual Entailment</b></summary>

- CMNLI
- OCNLI
- OCNLI_FC
- AX-b
- AX-g
- CB
- RTE
- ANLI

</details>

<details open>
<summary><b>Commonsense Reasoning</b></summary>

- StoryCloze
- COPA
- ReCoRD
- HellaSwag
- PIQA
- SIQA

</details>

<details open>
<summary><b>Mathematical Reasoning</b></summary>

- MATH
- GSM8K

</details>

<details open>
<summary><b>Theorem Application</b></summary>

- TheoremQA
- StrategyQA
- SciBench

</details>

<details open>
<summary><b>Comprehensive Reasoning</b></summary>

- BBH

</details>
      </td>
      <td>
<details open>
<summary><b>Junior High, High School, University, Professional Examinations</b></summary>

- C-Eval
- AGIEval
- MMLU
- GAOKAO-Bench
- CMMLU
- ARC
- Xiezhi

</details>

<details open>
<summary><b>Medical Examinations</b></summary>

- CMB

</details>
      </td>
    </tr>
</td>
    </tr>
  </tbody>
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Understanding</b>
      </td>
      <td>
        <b>Long Context</b>
      </td>
      <td>
        <b>Safety</b>
      </td>
      <td>
        <b>Code</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
<details open>
<summary><b>Reading Comprehension</b></summary>

- C3
- CMRC
- DRCD
- MultiRC
- RACE
- DROP
- OpenBookQA
- SQuAD2.0

</details>

<details open>
<summary><b>Content Summary</b></summary>

- CSL
- LCSTS
- XSum
- SummScreen

</details>

<details open>
<summary><b>Content Analysis</b></summary>

- EPRSTMT
- LAMBADA
- TNEWS

</details>
      </td>
      <td>
<details open>
<summary><b>Long Context Understanding</b></summary>

- LEval
- LongBench
- GovReports
- NarrativeQA
- Qasper

</details>
      </td>
      <td>
<details open>
<summary><b>Safety</b></summary>

- CivilComments
- CrowsPairs
- CValues
- JigsawMultilingual
- TruthfulQA

</details>
<details open>
<summary><b>Robustness</b></summary>

- AdvGLUE

</details>
      </td>
      <td>
<details open>
<summary><b>Code</b></summary>

- HumanEval
- HumanEvalX
- MBPP
- APPs
- DS1000

</details>
      </td>
    </tr>
</td>
    </tr>
  </tbody>
</table>

## 📖 Model Support

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Open-source Models</b>
      </td>
      <td>
        <b>API Models</b>
      </td>
      <!-- <td>
        <b>Custom Models</b>
      </td> -->
    </tr>
    <tr valign="top">
      <td>

- [InternLM](https://github.com/InternLM/InternLM)
- [LLaMA](https://github.com/facebookresearch/llama)
- [LLaMA3](https://github.com/meta-llama/llama3)
- [Vicuna](https://github.com/lm-sys/FastChat)
- [Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Baichuan](https://github.com/baichuan-inc)
- [WizardLM](https://github.com/nlpxucan/WizardLM)
- [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)
- [ChatGLM3](https://github.com/THUDM/ChatGLM3-6B)
- [TigerBot](https://github.com/TigerResearch/TigerBot)
- [Qwen](https://github.com/QwenLM/Qwen)
- [BlueLM](https://github.com/vivo-ai-lab/BlueLM)
- [Gemma](https://huggingface.co/google/gemma-7b)
- ...

</td>
<td>

- OpenAI
- Gemini
- Claude
- ZhipuAI(ChatGLM)
- Baichuan
- ByteDance(YunQue)
- Huawei(PanGu)
- 360
- Baidu(ERNIEBot)
- MiniMax(ABAB-Chat)
- SenseTime(nova)
- Xunfei(Spark)
- ……

</td>

</tr>
  </tbody>
</table>

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🔜 Roadmap

- [x] Subjective Evaluation
  - [ ] Release CompassArena
  - [x] Subjective evaluation.
- [x] Long-context
  - [x] Long-context evaluation with extensive datasets.
  - [ ] Long-context leaderboard.
- [x] Coding
  - [ ] Coding evaluation leaderboard.
  - [x] Non-python language evaluation service.
- [x] Agent
  - [ ] Support various agent frameworks.
  - [x] Evaluation of tool use of the LLMs.
- [x] Robustness
  - [x] Support various attack methods

## 👷‍♂️ Contributing

We appreciate all contributions to improving OpenCompass. Please refer to the [contributing guideline](https://opencompass.readthedocs.io/en/latest/notes/contribution_guide.html) for best practices.

<!-- Copy-paste in your Readme.md file -->

<!-- Made with [OSS Insight](https://ossinsight.io/) -->

<a href="https://github.com/open-compass/opencompass/graphs/contributors" target="_blank">
  <table>
    <tr>
      <th colspan="2">
        <br><img src="https://contrib.rocks/image?repo=open-compass/opencompass"><br><br>
      </th>
    </tr>
  </table>
</a>

## 🤝 Acknowledgements

Some code in this project is cited and modified from [OpenICL](https://github.com/Shark-NLP/OpenICL).

Some datasets and prompt implementations are modified from [chain-of-thought-hub](https://github.com/FranxYao/chain-of-thought-hub) and [instruct-eval](https://github.com/declare-lab/instruct-eval).

## 🖊️ Citation

```bibtex
@misc{2023opencompass,
    title={OpenCompass: A Universal Evaluation Platform for Foundation Models},
    author={OpenCompass Contributors},
    howpublished = {\url{https://github.com/open-compass/opencompass}},
    year={2023}
}
```

<p align="right"><a href="#top">🔝Back to top</a></p>

[github-contributors-link]: https://github.com/open-compass/opencompass/graphs/contributors
[github-contributors-shield]: https://img.shields.io/github/contributors/open-compass/opencompass?color=c4f042&labelColor=black&style=flat-square
[github-forks-link]: https://github.com/open-compass/opencompass/network/members
[github-forks-shield]: https://img.shields.io/github/forks/open-compass/opencompass?color=8ae8ff&labelColor=black&style=flat-square
[github-issues-link]: https://github.com/open-compass/opencompass/issues
[github-issues-shield]: https://img.shields.io/github/issues/open-compass/opencompass?color=ff80eb&labelColor=black&style=flat-square
[github-license-link]: https://github.com/open-compass/opencompass/blob/main/LICENSE
[github-license-shield]: https://img.shields.io/github/license/open-compass/opencompass?color=white&labelColor=black&style=flat-square
[github-release-link]: https://github.com/open-compass/opencompass/releases
[github-release-shield]: https://img.shields.io/github/v/release/open-compass/opencompass?color=369eff&labelColor=black&logo=github&style=flat-square
[github-releasedate-link]: https://github.com/open-compass/opencompass/releases
[github-releasedate-shield]: https://img.shields.io/github/release-date/open-compass/opencompass?labelColor=black&style=flat-square
[github-stars-link]: https://github.com/open-compass/opencompass/stargazers
[github-stars-shield]: https://img.shields.io/github/stars/open-compass/opencompass?color=ffcb47&labelColor=black&style=flat-square
[github-trending-shield]: https://trendshift.io/api/badge/repositories/6630
[github-trending-url]: https://trendshift.io/repositories/6630