<div align="center">
  <img src="docs/en/_static/image/logo.svg" width="500px"/>
  <br />
  <br />

[![docs](https://readthedocs.org/projects/opencompass/badge)](https://opencompass.readthedocs.io/en)
[![license](https://img.shields.io/github/license/InternLM/opencompass.svg)](https://github.com/open-compass/opencompass/blob/main/LICENSE)

<!-- [![PyPI](https://badge.fury.io/py/opencompass.svg)](https://pypi.org/project/opencompass/) -->

[🌐Website](https://opencompass.org.cn/) |
[📘Documentation](https://opencompass.readthedocs.io/en/latest/) |
[🛠️Installation](https://opencompass.readthedocs.io/en/latest/get_started/installation.html) |
[🤔Reporting Issues](https://github.com/open-compass/opencompass/issues/new/choose)

English | [简体中文](README_zh-CN.md)

</div>

<p align="center">
    👋 join us on <a href="https://discord.gg/KKwfEbFj7U" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=opencompass" target="_blank">WeChat</a>
</p>

## 📣 OpenCompass 2023 LLM Annual Leaderboard

We are honored to have witnessed the tremendous progress of artificial general intelligence together with the community in the past year, and we are also very pleased that **OpenCompass** can help numerous developers and users.

We announce the launch of the **OpenCompass 2023 LLM Annual Leaderboard** plan. We expect to release the annual leaderboard of LLMs in January 2024, systematically evaluating their performance in various capabilities such as language, knowledge, reasoning, creation, long-text understanding, and agents.

At that time, we will release rankings for both open-source models and commercial API models, aiming to provide a comprehensive, objective, and neutral reference for the industry and research community.

We sincerely invite various large models to join OpenCompass and showcase their performance advantages in different fields. At the same time, we welcome researchers and developers to provide valuable suggestions and contributions to jointly promote the development of LLMs. If you have any questions or needs, please feel free to [contact us](mailto:opencompass@pjlab.org.cn). In addition, the relevant evaluation contents, performance statistics, and evaluation methods will be open-sourced along with the leaderboard release.

Let's look forward to the release of the OpenCompass 2023 LLM Annual Leaderboard!

## 🧭 Welcome to **OpenCompass**!

Just like a compass guides us on our journey, OpenCompass will guide you through the complex landscape of evaluating large language models. With its powerful algorithms and intuitive interface, OpenCompass makes it easy to assess the quality and effectiveness of your NLP models.

🚩🚩🚩 Explore opportunities at OpenCompass! We're currently **hiring full-time researchers/engineers and interns**. If you're passionate about LLM and OpenCompass, don't hesitate to reach out to us via [email](mailto:zhangsongyang@pjlab.org.cn). We'd love to hear from you!

🔥🔥🔥 We are delighted to announce that **OpenCompass has been recommended by Meta AI**! Click Llama's [Get Started](https://ai.meta.com/llama/get-started/#validation) page for more information.

> **Attention**<br />
> We have launched the OpenCompass Collaboration project. You are welcome to support diverse evaluation benchmarks in OpenCompass!
> Click this [Issue](https://github.com/open-compass/opencompass/issues/248) for more information.
> Let's work together to build a more powerful OpenCompass toolkit!

## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>

- **\[2023.12.28\]** We have enabled seamless evaluation of all models developed using [LLaMA2-Accessory](https://github.com/Alpha-VLLM/LLaMA2-Accessory), a powerful toolkit for comprehensive LLM development. 🔥🔥🔥.
- **\[2023.12.22\]** We have released [T-Eval](https://github.com/open-compass/T-Eval), a step-by-step evaluation benchmark to gauge your LLMs on tool utilization. Welcome to our [Leaderboard](https://open-compass.github.io/T-Eval/leaderboard.html) for more details! 🔥🔥🔥.
- **\[2023.12.10\]** We have released [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), a toolkit for evaluating vision-language models (VLMs), currently supporting 20+ VLMs and 7 multi-modal benchmarks (including the MMBench series). 🔥🔥🔥.
- **\[2023.12.10\]** We have supported Mistral AI's MoE LLM: **Mixtral-8x7B-32K**. Welcome to [MixtralKit](https://github.com/open-compass/MixtralKit) for more details about inference and evaluation. 🔥🔥🔥.
- **\[2023.11.22\]** We have supported many API-based models, including **Baidu, ByteDance, Huawei, and 360**. See the [Models](https://opencompass.readthedocs.io/en/latest/user_guides/models.html) section for more details. 🔥🔥🔥.
- **\[2023.11.20\]** Thanks to [helloyongyang](https://github.com/helloyongyang) for supporting evaluation with [LightLLM](https://github.com/ModelTC/lightllm) as the backend. See [Evaluation With LightLLM](https://opencompass.readthedocs.io/en/latest/advanced_guides/evaluation_lightllm.html) for more details. 🔥🔥🔥.
- **\[2023.11.13\]** We are delighted to announce the release of OpenCompass v0.1.8. This version enables local loading of evaluation benchmarks, thereby eliminating the need for an internet connection. Please note that with this update, **you must re-download all evaluation datasets** to ensure accurate and up-to-date results. 🔥🔥🔥.
- **\[2023.11.06\]** We have supported several API-based models, including **ChatGLM Pro@Zhipu, ABAB-Chat@MiniMax, and Xunfei**. See the [Models](https://opencompass.readthedocs.io/en/latest/user_guides/models.html) section for more details. 🔥🔥🔥.
- **\[2023.10.24\]** We have released a new benchmark for evaluating LLMs' multi-turn dialogue capabilities. See [BotChat](https://github.com/open-compass/BotChat) for more details.
- **\[2023.09.26\]** We update the leaderboard with [Qwen](https://github.com/QwenLM/Qwen), one of the best-performing open-source models currently available; welcome to our [homepage](https://opencompass.org.cn) for more details.
- **\[2023.09.20\]** We update the leaderboard with [InternLM-20B](https://github.com/InternLM/InternLM); welcome to our [homepage](https://opencompass.org.cn) for more details.
- **\[2023.09.19\]** We update the leaderboard with WeMix-LLaMA2-70B/Phi-1.5-1.3B; welcome to our [homepage](https://opencompass.org.cn) for more details.
- **\[2023.09.18\]** We have released [long context evaluation guidance](docs/en/advanced_guides/longeval.md).

> [More](docs/en/notes/news.md)

## ✨ Introduction

![image](https://github.com/open-compass/opencompass/assets/22607038/f45fe125-4aed-4f8c-8fe8-df4efb41a8ea)

OpenCompass is a one-stop platform for large model evaluation, aiming to provide a fair, open, and reproducible benchmark for large model evaluation. Its main features include:

- **Comprehensive support for models and datasets**: Pre-support for 20+ HuggingFace and API models, and an evaluation scheme covering 70+ datasets with about 400,000 questions, comprehensively evaluating the capabilities of models in five dimensions.

- **Efficient distributed evaluation**: One line command to implement task division and distributed evaluation, completing the full evaluation of billion-scale models in just a few hours.

- **Diversified evaluation paradigms**: Support for zero-shot, few-shot, and chain-of-thought evaluations, combined with standard or dialogue-type prompt templates, to easily elicit the maximum performance of various models.

- **Modular design with high extensibility**: Want to add new models or datasets, customize an advanced task division strategy, or even support a new cluster management system? Everything about OpenCompass can be easily expanded!

- **Experiment management and reporting mechanism**: Use config files to fully record each experiment, and support real-time reporting of results.

## 📊 Leaderboard

We provide [OpenCompass Leaderboard](https://opencompass.org.cn/rank) for the community to rank all public models and API models. If you would like to join the evaluation, please provide the model repository URL or a standard API interface to the email address `opencompass@pjlab.org.cn`.

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🛠️ Installation

Below are the steps for quick installation and dataset preparation.

### 💻 Environment Setup

#### Open-source Models with GPU

```bash
conda create --name opencompass python=3.10 pytorch torchvision pytorch-cuda -c nvidia -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
```

#### API Models with CPU-only

```bash
conda create -n opencompass python=3.10 pytorch torchvision torchaudio cpuonly -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
# For API models, also install the required packages via `pip install -r requirements/api.txt` if needed.
```

### 📂 Data Preparation

```bash
# Download dataset to data/ folder
wget https://github.com/open-compass/opencompass/releases/download/0.1.8.rc1/OpenCompassData-core-20231110.zip
unzip OpenCompassData-core-20231110.zip
```

Some third-party features, like HumanEval and LLaMA, may require additional steps to work properly; for detailed steps, please refer to the [Installation Guide](https://opencompass.readthedocs.io/en/latest/get_started/installation.html).

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🏗️ ️Evaluation

After ensuring that OpenCompass is installed correctly according to the above steps and the datasets are prepared, you can evaluate the performance of the LLaMA-7b model on the MMLU and C-Eval datasets using the following command:

```bash
python run.py --models hf_llama_7b --datasets mmlu_ppl ceval_ppl
```

OpenCompass has predefined configurations for many models and datasets. You can list all available model and dataset configurations using the [tools](./docs/en/tools.md#list-configs).

```bash
# List all configurations
python tools/list_configs.py
# List all configurations related to llama and mmlu
python tools/list_configs.py llama mmlu
```

You can also evaluate other HuggingFace models via command line. Taking LLaMA-7b as an example:

```bash
python run.py --datasets ceval_ppl mmlu_ppl \
--hf-path huggyllama/llama-7b \  # HuggingFace model path
--model-kwargs device_map='auto' \  # Arguments for model construction
--tokenizer-kwargs padding_side='left' truncation='left' use_fast=False \  # Arguments for tokenizer construction
--max-out-len 100 \  # Maximum number of tokens generated
--max-seq-len 2048 \  # Maximum sequence length the model can accept
--batch-size 8 \  # Batch size
--no-batch-padding \  # Don't enable batch padding, infer through for loop to avoid performance loss
--num-gpus 1  # Number of minimum required GPUs
```

> **Note**<br />
> To run the command above, you will need to remove the comments starting from `# ` first.

Through the command line or configuration files, OpenCompass also supports evaluating APIs or custom models, as well as more diversified evaluation strategies. Please read the [Quick Start](https://opencompass.readthedocs.io/en/latest/get_started/quick_start.html) to learn how to run an evaluation task.
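As a sketch of the configuration-file route, a config is an ordinary Python file that reuses the dataset and model configs shipped with OpenCompass. The file name and the imported config module paths below are illustrative assumptions; list the real ones with `python tools/list_configs.py`.

```python
# configs/eval_demo.py -- a minimal config sketch (module paths are
# assumptions for illustration; verify them with tools/list_configs.py).
from mmengine.config import read_base

with read_base():
    # Reuse predefined dataset and model configs shipped with OpenCompass.
    from .datasets.ceval.ceval_ppl import ceval_datasets
    from .models.hf_llama_7b import models as hf_llama_7b

# run.py reads these two top-level variables to build the evaluation tasks.
datasets = [*ceval_datasets]
models = [*hf_llama_7b]
```

Such a file would then be launched with `python run.py configs/eval_demo.py`.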

<p align="right"><a href="#top">🔝Back to top</a></p>

## 📖 Dataset Support

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Language</b>
      </td>
      <td>
        <b>Knowledge</b>
      </td>
      <td>
        <b>Reasoning</b>
      </td>
      <td>
        <b>Examination</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
<details open>
<summary><b>Word Definition</b></summary>

- WiC
- SummEdits

</details>

<details open>
<summary><b>Idiom Learning</b></summary>

- CHID

</details>

<details open>
<summary><b>Semantic Similarity</b></summary>

- AFQMC
- BUSTM

</details>

<details open>
<summary><b>Coreference Resolution</b></summary>

- CLUEWSC
- WSC
- WinoGrande

</details>

<details open>
<summary><b>Translation</b></summary>

- Flores
- IWSLT2017

</details>

<details open>
<summary><b>Multi-language Question Answering</b></summary>

- TyDi-QA
- XCOPA

</details>

<details open>
<summary><b>Multi-language Summary</b></summary>

- XLSum

</details>
      </td>
      <td>
<details open>
<summary><b>Knowledge Question Answering</b></summary>

- BoolQ
- CommonSenseQA
- NaturalQuestions
- TriviaQA

</details>
      </td>
      <td>
<details open>
<summary><b>Textual Entailment</b></summary>

- CMNLI
- OCNLI
- OCNLI_FC
- AX-b
- AX-g
- CB
- RTE
- ANLI

</details>

<details open>
<summary><b>Commonsense Reasoning</b></summary>

- StoryCloze
- COPA
- ReCoRD
- HellaSwag
- PIQA
- SIQA

</details>

<details open>
<summary><b>Mathematical Reasoning</b></summary>

- MATH
- GSM8K

</details>

<details open>
<summary><b>Theorem Application</b></summary>

- TheoremQA
- StrategyQA
- SciBench

</details>

<details open>
<summary><b>Comprehensive Reasoning</b></summary>

- BBH

</details>
      </td>
      <td>
<details open>
<summary><b>Junior High, High School, University, Professional Examinations</b></summary>

- C-Eval
- AGIEval
- MMLU
- GAOKAO-Bench
- CMMLU
- ARC
- Xiezhi

</details>

<details open>
<summary><b>Medical Examinations</b></summary>

- CMB

</details>
      </td>
    </tr>
</td>
    </tr>
  </tbody>
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Understanding</b>
      </td>
      <td>
        <b>Long Context</b>
      </td>
      <td>
        <b>Safety</b>
      </td>
      <td>
        <b>Code</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
<details open>
<summary><b>Reading Comprehension</b></summary>

- C3
- CMRC
- DRCD
- MultiRC
- RACE
- DROP
- OpenBookQA
- SQuAD2.0

</details>

<details open>
<summary><b>Content Summary</b></summary>

- CSL
- LCSTS
- XSum
- SummScreen

</details>

<details open>
<summary><b>Content Analysis</b></summary>

- EPRSTMT
- LAMBADA
- TNEWS

</details>
      </td>
      <td>
<details open>
<summary><b>Long Context Understanding</b></summary>

- LEval
- LongBench
- GovReports
- NarrativeQA
- Qasper

</details>
      </td>
      <td>
<details open>
<summary><b>Safety</b></summary>

- CivilComments
- CrowsPairs
- CValues
- JigsawMultilingual
- TruthfulQA

</details>
<details open>
<summary><b>Robustness</b></summary>

- AdvGLUE

</details>
      </td>
      <td>
<details open>
<summary><b>Code</b></summary>

- HumanEval
- HumanEvalX
- MBPP
- APPs
- DS1000

</details>
      </td>
    </tr>
</td>
    </tr>
  </tbody>
</table>

## OpenCompass Ecosystem

<p align="right"><a href="#top">🔝Back to top</a></p>

## 📖 Model Support

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Open-source Models</b>
      </td>
      <td>
        <b>API Models</b>
      </td>
      <!-- <td>
        <b>Custom Models</b>
      </td> -->
    </tr>
    <tr valign="top">
      <td>

- [InternLM](https://github.com/InternLM/InternLM)
- [LLaMA](https://github.com/facebookresearch/llama)
- [Vicuna](https://github.com/lm-sys/FastChat)
- [Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Baichuan](https://github.com/baichuan-inc)
- [WizardLM](https://github.com/nlpxucan/WizardLM)
- [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)
- [ChatGLM3](https://github.com/THUDM/ChatGLM3-6B)
- [TigerBot](https://github.com/TigerResearch/TigerBot)
- [Qwen](https://github.com/QwenLM/Qwen)
- [BlueLM](https://github.com/vivo-ai-lab/BlueLM)
- ...

</td>
<td>

- OpenAI
- Claude
- ZhipuAI(ChatGLM)
- Baichuan
- ByteDance(YunQue)
- Huawei(PanGu)
- 360
- Baidu(ERNIEBot)
- MiniMax(ABAB-Chat)
- SenseTime(nova)
- Xunfei(Spark)
- ...

</td>

</tr>
  </tbody>
</table>

486
<p align="right"><a href="#top">🔝Back to top</a></p>

## 🔜 Roadmap

- [ ] Subjective Evaluation
  - [ ] Release CompassArena
  - [ ] Subjective evaluation dataset.
- [x] Long-context
  - [ ] Long-context evaluation with extensive datasets.
  - [ ] Long-context leaderboard.
- [ ] Coding
  - [ ] Coding evaluation leaderboard.
  - [x] Non-python language evaluation service.
- [ ] Agent
  - [ ] Support various agent frameworks.
  - [ ] Evaluation of the tool-use capability of LLMs.
- [x] Robustness
  - [x] Support various attack methods

## 👷‍♂️ Contributing

We appreciate all contributions to improving OpenCompass. Please refer to the [contributing guideline](https://opencompass.readthedocs.io/en/latest/notes/contribution_guide.html) for best practices.

## 🤝 Acknowledgements

Some code in this project is cited and modified from [OpenICL](https://github.com/Shark-NLP/OpenICL).

Some datasets and prompt implementations are modified from [chain-of-thought-hub](https://github.com/FranxYao/chain-of-thought-hub) and [instruct-eval](https://github.com/declare-lab/instruct-eval).

## 🖊️ Citation

```bibtex
@misc{2023opencompass,
    title={OpenCompass: A Universal Evaluation Platform for Foundation Models},
    author={OpenCompass Contributors},
    howpublished = {\url{https://github.com/open-compass/opencompass}},
    year={2023}
}
```

<p align="right"><a href="#top">🔝Back to top</a></p>