<div align="center">
  <img src="docs/en/_static/image/logo.svg" width="500px"/>
  <br />
  <br />

[![docs](https://readthedocs.org/projects/opencompass/badge)](https://opencompass.readthedocs.io/en)
[![license](https://img.shields.io/github/license/InternLM/opencompass.svg)](https://github.com/open-compass/opencompass/blob/main/LICENSE)

<!-- [![PyPI](https://badge.fury.io/py/opencompass.svg)](https://pypi.org/project/opencompass/) -->

[🌐Website](https://opencompass.org.cn/) |
[📘Documentation](https://opencompass.readthedocs.io/en/latest/) |
[🛠️Installation](https://opencompass.readthedocs.io/en/latest/get_started/installation.html) |
[🤔Reporting Issues](https://github.com/open-compass/opencompass/issues/new/choose)

English | [简体中文](README_zh-CN.md)

</div>

<p align="center">
    👋 join us on <a href="https://discord.gg/KKwfEbFj7U" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=opencompass" target="_blank">WeChat</a>
</p>

## 📣 OpenCompass 2023 LLM Annual Leaderboard

We are honored to have witnessed the tremendous progress of artificial general intelligence together with the community in the past year, and we are also very pleased that **OpenCompass** can help numerous developers and users.

We announce the launch of the **OpenCompass 2023 LLM Annual Leaderboard** plan. We expect to release the annual leaderboard of LLMs in January 2024, systematically evaluating their performance across capabilities such as language, knowledge, reasoning, creation, long text, and agents.

At that time, we will release rankings for both open-source models and commercial API models, aiming to provide a comprehensive, objective, and neutral reference for the industry and research community.

We sincerely invite various large models to join OpenCompass and showcase their performance advantages in different fields. At the same time, we welcome researchers and developers to provide valuable suggestions and contributions to jointly promote the development of LLMs. If you have any questions or needs, please feel free to [contact us](mailto:opencompass@pjlab.org.cn). In addition, the relevant evaluation contents, performance statistics, and evaluation methods will be open-sourced along with the leaderboard release.

Let's look forward to the release of the OpenCompass 2023 LLM Annual Leaderboard!

## 🧭 Welcome to **OpenCompass**!

Just like a compass guides us on our journey, OpenCompass will guide you through the complex landscape of evaluating large language models. With its powerful algorithms and intuitive interface, OpenCompass makes it easy to assess the quality and effectiveness of your NLP models.

🚩🚩🚩 Explore opportunities at OpenCompass! We're currently **hiring full-time researchers/engineers and interns**. If you're passionate about LLM and OpenCompass, don't hesitate to reach out to us via [email](mailto:zhangsongyang@pjlab.org.cn). We'd love to hear from you!

🔥🔥🔥 We are delighted to announce that **OpenCompass has been recommended by Meta AI**! See Llama's [Get Started](https://ai.meta.com/llama/get-started/#validation) page for more information.

> **Attention**<br />
> We have launched the OpenCompass Collaboration project. Contributions that bring diverse evaluation benchmarks into OpenCompass are very welcome!
> Click this [Issue](https://github.com/open-compass/opencompass/issues/248) for more information.
> Let's work together to build a more powerful OpenCompass toolkit!

## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>

- **\[2023.12.10\]** We have released [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), a toolkit for evaluating vision-language models (VLMs), currently supporting 20+ VLMs and 7 multi-modal benchmarks (including the MMBench series). 🔥🔥🔥.
- **\[2023.12.10\]** We have supported Mistral AI's MoE LLM: **Mixtral-8x7B-32K**. Welcome to [MixtralKit](https://github.com/open-compass/MixtralKit) for more details about inference and evaluation. 🔥🔥🔥.
- **\[2023.11.22\]** We have supported many API-based models, including **Baidu, ByteDance, Huawei, and 360**. Welcome to the [Models](https://opencompass.readthedocs.io/en/latest/user_guides/models.html) section for more details. 🔥🔥🔥.
- **\[2023.11.20\]** Thanks to [helloyongyang](https://github.com/helloyongyang) for supporting evaluation with [LightLLM](https://github.com/ModelTC/lightllm) as the backend. Welcome to [Evaluation With LightLLM](https://opencompass.readthedocs.io/en/latest/advanced_guides/evaluation_lightllm.html) for more details. 🔥🔥🔥.
- **\[2023.11.13\]** We are delighted to announce the release of OpenCompass v0.1.8. This version enables local loading of evaluation benchmarks, thereby eliminating the need for an internet connection. Please note that with this update, **you must re-download all evaluation datasets** to ensure accurate and up-to-date results. 🔥🔥🔥.
- **\[2023.11.06\]** We have supported several API-based models, including **ChatGLM Pro@Zhipu, ABAB-Chat@MiniMax, and Xunfei**. Welcome to the [Models](https://opencompass.readthedocs.io/en/latest/user_guides/models.html) section for more details. 🔥🔥🔥.
- **\[2023.10.24\]** We release a new benchmark for evaluating LLMs' multi-turn dialogue capabilities. Welcome to [BotChat](https://github.com/open-compass/BotChat) for more details.
- **\[2023.09.26\]** We update the leaderboard with [Qwen](https://github.com/QwenLM/Qwen), one of the best-performing open-source models currently available. Welcome to our [homepage](https://opencompass.org.cn) for more details.
- **\[2023.09.20\]** We update the leaderboard with [InternLM-20B](https://github.com/InternLM/InternLM). Welcome to our [homepage](https://opencompass.org.cn) for more details.
- **\[2023.09.19\]** We update the leaderboard with WeMix-LLaMA2-70B/Phi-1.5-1.3B. Welcome to our [homepage](https://opencompass.org.cn) for more details.
- **\[2023.09.18\]** We have released [long context evaluation guidance](docs/en/advanced_guides/longeval.md).

> [More](docs/en/notes/news.md)

## ✨ Introduction

![image](https://github.com/open-compass/opencompass/assets/22607038/f45fe125-4aed-4f8c-8fe8-df4efb41a8ea)

OpenCompass is a one-stop platform for large model evaluation, aiming to provide a fair, open, and reproducible benchmark. Its main features include:

- **Comprehensive support for models and datasets**: Out-of-the-box support for 20+ HuggingFace and API models, plus an evaluation scheme covering 70+ datasets with about 400,000 questions, comprehensively assessing model capabilities across five dimensions.

- **Efficient distributed evaluation**: A single command handles task division and distributed evaluation, completing a full evaluation of billion-scale models in just a few hours.

- **Diversified evaluation paradigms**: Support for zero-shot, few-shot, and chain-of-thought evaluations, combined with standard or dialogue-style prompt templates, to easily elicit the maximum performance of various models.

- **Modular design with high extensibility**: Want to add new models or datasets, customize an advanced task division strategy, or even support a new cluster management system? Everything about OpenCompass can be easily expanded!

- **Experiment management and reporting mechanism**: Use config files to fully record each experiment, and support real-time reporting of results.
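The few-shot paradigm mentioned above boils down to rendering in-context examples through a prompt template before the actual query. Below is a minimal, framework-agnostic sketch of that idea; `build_few_shot_prompt` is a hypothetical helper for illustration, not OpenCompass's API:

```python
# Sketch of few-shot prompt assembly (illustrative only, not OpenCompass's API).

def build_few_shot_prompt(template: str, examples: list[dict], query: dict) -> str:
    """Render each in-context example with the template, then append the
    query with its answer slot left empty for the model to complete."""
    shots = [template.format(**ex) for ex in examples]
    shots.append(template.format(question=query["question"], answer=""))
    return "\n\n".join(shots).rstrip()

template = "Q: {question}\nA: {answer}"
examples = [
    {"question": "2 + 2 = ?", "answer": "4"},
    {"question": "3 * 3 = ?", "answer": "9"},
]
prompt = build_few_shot_prompt(template, examples, {"question": "5 - 1 = ?"})
print(prompt)
```

In OpenCompass itself, such templates are declared in the dataset configs rather than written by hand.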

## 📊 Leaderboard

We provide [OpenCompass Leaderboard](https://opencompass.org.cn/rank) for the community to rank all public models and API models. If you would like to join the evaluation, please provide the model repository URL or a standard API interface to the email address `opencompass@pjlab.org.cn`.

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🛠️ Installation

Below are the steps for quick installation and dataset preparation.

### 💻 Environment Setup

#### Open-source Models with GPU

```bash
conda create --name opencompass python=3.10 pytorch torchvision pytorch-cuda -c nvidia -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
```

#### API Models with CPU-only

```bash
conda create -n opencompass python=3.10 pytorch torchvision torchaudio cpuonly -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
# For API models, also install the required packages via `pip install -r requirements/api.txt` if needed.
```

### 📂 Data Preparation

```bash
# Download dataset to data/ folder
wget https://github.com/open-compass/opencompass/releases/download/0.1.8.rc1/OpenCompassData-core-20231110.zip
unzip OpenCompassData-core-20231110.zip
```

Some third-party features, like HumanEval and Llama, may require additional steps to work properly. For detailed steps, please refer to the [Installation Guide](https://opencompass.readthedocs.io/en/latest/get_started/installation.html).

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🏗️ Evaluation

After ensuring that OpenCompass is installed correctly according to the above steps and the datasets are prepared, you can evaluate the performance of the LLaMA-7b model on the MMLU and C-Eval datasets using the following command:

```bash
python run.py --models hf_llama_7b --datasets mmlu_ppl ceval_ppl
```
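The `_ppl` suffix in these dataset names denotes perplexity-based evaluation: each candidate answer of a multiple-choice question is scored by the model's loss on that continuation, and the lowest-perplexity candidate becomes the prediction. A toy sketch of the selection step, using a stand-in scoring table instead of a real model (`pick_lowest_ppl` and `fake_ppl` are hypothetical names, not OpenCompass internals):

```python
# Toy sketch of perplexity-based multiple-choice selection. A real evaluator
# derives the score from a language model's per-token loss; here we use a
# stand-in dictionary of made-up perplexities.

def pick_lowest_ppl(question: str, choices: list[str], ppl_of) -> str:
    """Return the choice whose continuation has the lowest perplexity."""
    return min(choices, key=lambda c: ppl_of(question, c))

fake_ppl = {"Paris": 3.2, "Lyon": 8.7, "Nice": 9.1}  # stand-in "model"
answer = pick_lowest_ppl(
    "The capital of France is",
    ["Paris", "Lyon", "Nice"],
    lambda q, c: fake_ppl[c],
)
print(answer)  # Paris
```

Generation-based datasets (suffix `_gen`) instead compare the model's free-form output against the reference answer.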

OpenCompass has predefined configurations for many models and datasets. You can list all available model and dataset configurations using the [tools](./docs/en/tools.md#list-configs).

```bash
# List all configurations
python tools/list_configs.py
# List all configurations related to llama and mmlu
python tools/list_configs.py llama mmlu
```

You can also evaluate other HuggingFace models via command line. Taking LLaMA-7b as an example:

```bash
python run.py --datasets ceval_ppl mmlu_ppl \
--hf-path huggyllama/llama-7b \  # HuggingFace model path
--model-kwargs device_map='auto' \  # Arguments for model construction
--tokenizer-kwargs padding_side='left' truncation='left' use_fast=False \  # Arguments for tokenizer construction
--max-out-len 100 \  # Maximum number of tokens generated
--max-seq-len 2048 \  # Maximum sequence length the model can accept
--batch-size 8 \  # Batch size
--no-batch-padding \  # Don't enable batch padding, infer through for loop to avoid performance loss
--num-gpus 1  # Minimum number of GPUs required
```

> **Note**<br />
> To run the command above, you will need to remove the comments starting from `# ` first.

Through the command line or configuration files, OpenCompass also supports evaluating APIs or custom models, as well as more diversified evaluation strategies. Please read the [Quick Start](https://opencompass.readthedocs.io/en/latest/get_started/quick_start.html) to learn how to run an evaluation task.
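Configuration files are plain Python modules that compose predefined model and dataset configs. A hypothetical minimal config is sketched below; the module paths are illustrative and vary by version, so consult the repository's `configs/` directory for the real ones:

```python
# Hypothetical evaluation config (module paths are illustrative).
from mmengine.config import read_base

with read_base():
    # Pull in predefined dataset and model configs shipped with OpenCompass.
    from .datasets.ceval.ceval_ppl import ceval_datasets  # illustrative path
    from .models.hf_llama_7b import models  # illustrative path

datasets = [*ceval_datasets]
```

Such a file can then be passed directly to `run.py` in place of the `--models`/`--datasets` flags.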

<p align="right"><a href="#top">🔝Back to top</a></p>

## 📖 Dataset Support

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Language</b>
      </td>
      <td>
        <b>Knowledge</b>
      </td>
      <td>
        <b>Reasoning</b>
      </td>
      <td>
        <b>Examination</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
<details open>
<summary><b>Word Definition</b></summary>

- WiC
- SummEdits

</details>

<details open>
<summary><b>Idiom Learning</b></summary>

- CHID

</details>

<details open>
<summary><b>Semantic Similarity</b></summary>

- AFQMC
- BUSTM

</details>

<details open>
<summary><b>Coreference Resolution</b></summary>

- CLUEWSC
- WSC
- WinoGrande

</details>

<details open>
<summary><b>Translation</b></summary>

- Flores
- IWSLT2017

</details>

<details open>
<summary><b>Multi-language Question Answering</b></summary>

- TyDi-QA
- XCOPA

</details>

<details open>
<summary><b>Multi-language Summary</b></summary>

- XLSum

</details>
      </td>
      <td>
<details open>
<summary><b>Knowledge Question Answering</b></summary>

- BoolQ
- CommonSenseQA
- NaturalQuestions
- TriviaQA

</details>
      </td>
      <td>
<details open>
<summary><b>Textual Entailment</b></summary>

- CMNLI
- OCNLI
- OCNLI_FC
- AX-b
- AX-g
- CB
- RTE
- ANLI

</details>

<details open>
<summary><b>Commonsense Reasoning</b></summary>

- StoryCloze
- COPA
- ReCoRD
- HellaSwag
- PIQA
- SIQA

</details>

<details open>
<summary><b>Mathematical Reasoning</b></summary>

- MATH
- GSM8K

</details>

<details open>
<summary><b>Theorem Application</b></summary>

- TheoremQA
- StrategyQA
- SciBench

</details>

<details open>
<summary><b>Comprehensive Reasoning</b></summary>

- BBH

</details>
      </td>
      <td>
<details open>
<summary><b>Junior High, High School, University, Professional Examinations</b></summary>

- C-Eval
- AGIEval
- MMLU
- GAOKAO-Bench
- CMMLU
- ARC
- Xiezhi

</details>

<details open>
<summary><b>Medical Examinations</b></summary>

- CMB

</details>
      </td>
    </tr>
  </tbody>
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Understanding</b>
      </td>
      <td>
        <b>Long Context</b>
      </td>
      <td>
        <b>Safety</b>
      </td>
      <td>
        <b>Code</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
<details open>
<summary><b>Reading Comprehension</b></summary>

- C3
- CMRC
- DRCD
- MultiRC
- RACE
- DROP
- OpenBookQA
- SQuAD2.0

</details>

<details open>
<summary><b>Content Summary</b></summary>

- CSL
- LCSTS
- XSum
- SummScreen

</details>

<details open>
<summary><b>Content Analysis</b></summary>

- EPRSTMT
- LAMBADA
- TNEWS

</details>
      </td>
      <td>
<details open>
<summary><b>Long Context Understanding</b></summary>

- LEval
- LongBench
- GovReports
- NarrativeQA
- Qasper

</details>
      </td>
      <td>
<details open>
<summary><b>Safety</b></summary>

- CivilComments
- CrowsPairs
- CValues
- JigsawMultilingual
- TruthfulQA

</details>
<details open>
<summary><b>Robustness</b></summary>

- AdvGLUE

</details>
      </td>
      <td>
<details open>
<summary><b>Code</b></summary>

- HumanEval
- HumanEvalX
- MBPP
- APPs
- DS1000

</details>
      </td>
    </tr>
  </tbody>
</table>

## OpenCompass Ecosystem

<p align="right"><a href="#top">🔝Back to top</a></p>

## 📖 Model Support

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Open-source Models</b>
      </td>
      <td>
        <b>API Models</b>
      </td>
      <!-- <td>
        <b>Custom Models</b>
      </td> -->
    </tr>
    <tr valign="top">
      <td>

- [InternLM](https://github.com/InternLM/InternLM)
- [LLaMA](https://github.com/facebookresearch/llama)
- [Vicuna](https://github.com/lm-sys/FastChat)
- [Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Baichuan](https://github.com/baichuan-inc)
- [WizardLM](https://github.com/nlpxucan/WizardLM)
- [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)
- [ChatGLM3](https://github.com/THUDM/ChatGLM3-6B)
- [TigerBot](https://github.com/TigerResearch/TigerBot)
- [Qwen](https://github.com/QwenLM/Qwen)
- [BlueLM](https://github.com/vivo-ai-lab/BlueLM)
- ...

</td>
<td>

- OpenAI
- Claude
- ZhipuAI(ChatGLM)
- Baichuan
- ByteDance(YunQue)
- Huawei(PanGu)
- 360
- Baidu(ERNIEBot)
- MiniMax(ABAB-Chat)
- SenseTime(nova)
- Xunfei(Spark)
- ...

</td>

</tr>
  </tbody>
</table>

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🔜 Roadmap

- [ ] Subjective Evaluation
  - [ ] Release CompassArena.
  - [ ] Subjective evaluation datasets.
- [x] Long-context
  - [ ] Long-context evaluation with extensive datasets.
  - [ ] Long-context leaderboard.
- [ ] Coding
  - [ ] Coding evaluation leaderboard.
  - [x] Non-python language evaluation service.
- [ ] Agent
  - [ ] Support various agent frameworks.
  - [ ] Evaluation of LLMs' tool use.
- [x] Robustness
  - [x] Support various attack methods.

## 👷‍♂️ Contributing

We appreciate all contributions to improving OpenCompass. Please refer to the [contributing guideline](https://opencompass.readthedocs.io/en/latest/notes/contribution_guide.html) for best practices.

## 🤝 Acknowledgements

Some code in this project is cited and modified from [OpenICL](https://github.com/Shark-NLP/OpenICL).

Some datasets and prompt implementations are modified from [chain-of-thought-hub](https://github.com/FranxYao/chain-of-thought-hub) and [instruct-eval](https://github.com/declare-lab/instruct-eval).

## 🖊️ Citation

```bibtex
@misc{2023opencompass,
    title={OpenCompass: A Universal Evaluation Platform for Foundation Models},
    author={OpenCompass Contributors},
    howpublished = {\url{https://github.com/open-compass/opencompass}},
    year={2023}
}
```

<p align="right"><a href="#top">🔝Back to top</a></p>