<div align="center">
  <img src="docs/en/_static/image/logo.svg" width="500px"/>
  <br />
  <br />

[![][github-release-shield]][github-release-link]
[![][github-releasedate-shield]][github-releasedate-link]
[![][github-contributors-shield]][github-contributors-link]<br>
[![][github-forks-shield]][github-forks-link]
[![][github-stars-shield]][github-stars-link]
[![][github-issues-shield]][github-issues-link]
[![][github-license-shield]][github-license-link]

<!-- [![PyPI](https://badge.fury.io/py/opencompass.svg)](https://pypi.org/project/opencompass/) -->

[🌐Website](https://opencompass.org.cn/) |
[📖CompassHub](https://hub.opencompass.org.cn/home) |
[📊CompassRank](https://rank.opencompass.org.cn/home) |
[📘Documentation](https://opencompass.readthedocs.io/en/latest/) |
[🛠️Installation](https://opencompass.readthedocs.io/en/latest/get_started/installation.html) |
[🤔Reporting Issues](https://github.com/open-compass/opencompass/issues/new/choose)

English | [简体中文](README_zh-CN.md)

[![][github-trending-shield]][github-trending-url]

</div>

<p align="center">
    👋 join us on <a href="https://discord.gg/KKwfEbFj7U" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=opencompass" target="_blank">WeChat</a>
</p>

> \[!IMPORTANT\]
>
> **Star us** to receive all release notifications from GitHub without delay! ⭐️

## 📣 OpenCompass 2.0

We are thrilled to introduce OpenCompass 2.0, an advanced suite featuring three key components: [CompassKit](https://github.com/open-compass), [CompassHub](https://hub.opencompass.org.cn/home), and [CompassRank](https://rank.opencompass.org.cn/home).
![oc20](https://github.com/tonysy/opencompass/assets/7881589/90dbe1c0-c323-470a-991e-2b37ab5350b2)

**CompassRank** has been significantly enhanced into a leaderboard that now incorporates both open-source and proprietary benchmarks. This upgrade allows for a more comprehensive evaluation of models across the industry.

**CompassHub** presents a pioneering benchmark browser interface, designed to simplify and expedite the exploration and utilization of an extensive array of benchmarks for researchers and practitioners alike. To enhance the visibility of your own benchmark within the community, we warmly invite you to contribute it to CompassHub. You may initiate the submission process by clicking [here](https://hub.opencompass.org.cn/dataset-submit).

**CompassKit** is a powerful collection of evaluation toolkits specifically tailored for Large Language Models and Large Vision-language Models. It provides an extensive set of tools to assess and measure the performance of these complex models effectively. You are welcome to try our toolkits in your research and products.

<details>
  <summary><kbd>Star History</kbd></summary>
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=open-compass%2Fopencompass&theme=dark&type=Date">
    <img width="100%" src="https://api.star-history.com/svg?repos=open-compass%2Fopencompass&type=Date">
  </picture>
</details>

## 🧭 Welcome to **OpenCompass**!

Just like a compass guides us on our journey, OpenCompass will guide you through the complex landscape of evaluating large language models. With its powerful algorithms and intuitive interface, OpenCompass makes it easy to assess the quality and effectiveness of your NLP models.

🚩🚩🚩 Explore opportunities at OpenCompass! We're currently **hiring full-time researchers/engineers and interns**. If you're passionate about LLM and OpenCompass, don't hesitate to reach out to us via [email](mailto:zhangsongyang@pjlab.org.cn). We'd love to hear from you!

🔥🔥🔥 We are delighted to announce that **OpenCompass has been recommended by Meta AI**. Click [Get Started](https://ai.meta.com/llama/get-started/#validation) on the Llama page for more information.

> **Attention**<br />
> We have launched the OpenCompass Collaboration project, and we welcome contributions that bring diverse evaluation benchmarks into OpenCompass!
> Click [Issue](https://github.com/open-compass/opencompass/issues/248) for more information.
> Let's work together to build a more powerful OpenCompass toolkit!

## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>
- **\[2024.05.08\]** We supported the evaluation of 4 MoE models: [Mixtral-8x22B-v0.1](configs/models/mixtral/hf_mixtral_8x22b_v0_1.py), [Mixtral-8x22B-Instruct-v0.1](configs/models/mixtral/hf_mixtral_8x22b_instruct_v0_1.py), [Qwen1.5-MoE-A2.7B](configs/models/qwen/hf_qwen1_5_moe_a2_7b.py), [Qwen1.5-MoE-A2.7B-Chat](configs/models/qwen/hf_qwen1_5_moe_a2_7b_chat.py). Try them out now!
- **\[2024.04.30\]** We supported evaluating a model's compression efficiency by calculating its Bits per Character (BPC) metric on an [external corpus](configs/datasets/llm_compression/README.md) ([official paper](https://github.com/hkust-nlp/llm-compression-intelligence)). Check out the [llm-compression](configs/eval_llm_compression.py) evaluation config now! 🔥🔥🔥
- **\[2024.04.29\]** We report the performance of several well-known LLMs on common benchmarks; see the [documentation](https://opencompass.readthedocs.io/en/latest/user_guides/corebench.html) for more information! 🔥🔥🔥
- **\[2024.04.26\]** We deprecated the multi-modality evaluation function from OpenCompass; the related implementation has moved to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), welcome to use it! 🔥🔥🔥
- **\[2024.04.26\]** We supported the evaluation of [ArenaHard](configs/eval_subjective_arena_hard.py), welcome to try! 🔥🔥🔥
- **\[2024.04.22\]** We supported the evaluation of [LLaMA3](configs/models/hf_llama/hf_llama3_8b.py) and [LLaMA3-Instruct](configs/models/hf_llama/hf_llama3_8b_instruct.py), welcome to try! 🔥🔥🔥
- **\[2024.02.29\]** We supported MT-Bench, AlpacaEval and AlignBench; more information can be found [here](https://opencompass.readthedocs.io/en/latest/advanced_guides/subjective_evaluation.html)
- **\[2024.01.30\]** We released OpenCompass 2.0. Click [CompassKit](https://github.com/open-compass), [CompassHub](https://hub.opencompass.org.cn/home), and [CompassRank](https://rank.opencompass.org.cn/home) for more information!
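For intuition, the BPC metric mentioned in the 2024.04.30 item can be sketched as below. This is a minimal illustration of the idea, not the OpenCompass implementation; the function name is ours:

```python
import math

def bits_per_character(total_nll_nats: float, num_chars: int) -> float:
    """Bits per Character: the summed negative log-likelihood of a corpus,
    converted from nats to bits and normalized by its character count.
    Lower BPC means the model compresses the corpus more efficiently."""
    return total_nll_nats / (math.log(2) * num_chars)

# A 1000-character corpus with a summed NLL of 1000*ln(2) nats
# corresponds to exactly 1 bit per character.
print(bits_per_character(1000 * math.log(2), 1000))
```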

> [More](docs/en/notes/news.md)

## ✨ Introduction

![image](https://github.com/open-compass/opencompass/assets/22607038/f45fe125-4aed-4f8c-8fe8-df4efb41a8ea)

OpenCompass is a one-stop platform for large model evaluation, aiming to provide a fair, open, and reproducible benchmark for large model evaluation. Its main features include:

- **Comprehensive support for models and datasets**: Out-of-the-box support for 20+ HuggingFace and API models, plus an evaluation scheme covering 70+ datasets with about 400,000 questions, comprehensively evaluating model capabilities across five dimensions.

- **Efficient distributed evaluation**: A single command implements task division and distributed evaluation, completing a full evaluation of billion-scale models in just a few hours.

- **Diversified evaluation paradigms**: Support for zero-shot, few-shot, and chain-of-thought evaluations, combined with standard or dialogue-type prompt templates, to easily elicit the best performance of various models.

- **Modular design with high extensibility**: Want to add new models or datasets, customize an advanced task division strategy, or even support a new cluster management system? Everything about OpenCompass can be easily expanded!

- **Experiment management and reporting mechanism**: Use config files to fully record each experiment, and support real-time reporting of results.

## 📊 Leaderboard

We provide [OpenCompass Leaderboard](https://rank.opencompass.org.cn/home) for the community to rank all public models and API models. If you would like to join the evaluation, please provide the model repository URL or a standard API interface to the email address `opencompass@pjlab.org.cn`.

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🛠️ Installation

Below are the steps for quick installation and dataset preparation.

### 💻 Environment Setup

#### Open-source Models with GPU

```bash
conda create --name opencompass python=3.10 pytorch torchvision pytorch-cuda -c nvidia -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
```

#### API Models with CPU-only

```bash
conda create -n opencompass python=3.10 pytorch torchvision torchaudio cpuonly -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
# For API models, also install the required packages via `pip install -r requirements/api.txt` if needed.
```

### 📂 Data Preparation

```bash
# Download dataset to data/ folder
wget https://github.com/open-compass/opencompass/releases/download/0.2.2.rc1/OpenCompassData-core-20240207.zip
unzip OpenCompassData-core-20240207.zip
```

Some third-party features, such as HumanEval and Llama, may require additional setup steps to work properly; for details, please refer to the [Installation Guide](https://opencompass.readthedocs.io/en/latest/get_started/installation.html).

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🏗️ ️Evaluation

After ensuring that OpenCompass is installed correctly according to the above steps and the datasets are prepared, you can evaluate the performance of the LLaMA-7b model on the MMLU and C-Eval datasets using the following command:

```bash
python run.py --models hf_llama_7b --datasets mmlu_ppl ceval_ppl
```

OpenCompass has predefined configurations for many models and datasets. You can list all available model and dataset configurations using the [tools](./docs/en/tools.md#list-configs).

```bash
# List all configurations
python tools/list_configs.py
# List all configurations related to llama and mmlu
python tools/list_configs.py llama mmlu
```

You can also evaluate other HuggingFace models via command line. Taking LLaMA-7b as an example:

```bash
python run.py --datasets ceval_ppl mmlu_ppl --hf-type base --hf-path huggyllama/llama-7b
```

> \[!TIP\]
>
> Configurations with `_ppl` are typically designed for base models.
> Configurations with `_gen` can be used for both base and chat models.

Through the command line or configuration files, OpenCompass also supports evaluating APIs or custom models, as well as more diversified evaluation strategies. Please read the [Quick Start](https://opencompass.readthedocs.io/en/latest/get_started/quick_start.html) to learn how to run an evaluation task.
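As a rough illustration of the configuration-file route, an evaluation config typically imports predefined model and dataset definitions and exposes top-level `models` and `datasets` lists. The sketch below is a hedged example only: the import paths and file name are illustrative and depend on the configs shipped with your checkout, so consult the Quick Start for the authoritative format.

```python
# eval_demo_sketch.py -- hypothetical config; run with: python run.py eval_demo_sketch.py
from mmengine.config import read_base

with read_base():
    # Pull in predefined dataset and model configs (paths are illustrative)
    from .datasets.mmlu.mmlu_ppl import mmlu_datasets
    from .models.hf_llama.hf_llama_7b import models

# OpenCompass reads these top-level lists to build evaluation tasks
datasets = [*mmlu_datasets]
```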

<p align="right"><a href="#top">🔝Back to top</a></p>

## 📖 Dataset Support

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Language</b>
      </td>
      <td>
        <b>Knowledge</b>
      </td>
      <td>
        <b>Reasoning</b>
      </td>
      <td>
        <b>Examination</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
<details open>
<summary><b>Word Definition</b></summary>

- WiC
- SummEdits

</details>

<details open>
<summary><b>Idiom Learning</b></summary>

- CHID

</details>

<details open>
<summary><b>Semantic Similarity</b></summary>

- AFQMC
- BUSTM

</details>

<details open>
<summary><b>Coreference Resolution</b></summary>

- CLUEWSC
- WSC
- WinoGrande

</details>

<details open>
<summary><b>Translation</b></summary>

- Flores
- IWSLT2017

</details>

<details open>
<summary><b>Multi-language Question Answering</b></summary>

- TyDi-QA
- XCOPA

</details>

<details open>
<summary><b>Multi-language Summary</b></summary>

- XLSum

</details>
      </td>
      <td>
<details open>
<summary><b>Knowledge Question Answering</b></summary>

- BoolQ
- CommonSenseQA
- NaturalQuestions
- TriviaQA

</details>
      </td>
      <td>
<details open>
<summary><b>Textual Entailment</b></summary>

- CMNLI
- OCNLI
- OCNLI_FC
- AX-b
- AX-g
- CB
- RTE
- ANLI

</details>

<details open>
<summary><b>Commonsense Reasoning</b></summary>

- StoryCloze
- COPA
- ReCoRD
- HellaSwag
- PIQA
- SIQA

</details>

<details open>
<summary><b>Mathematical Reasoning</b></summary>

- MATH
- GSM8K

</details>

<details open>
<summary><b>Theorem Application</b></summary>

- TheoremQA
- StrategyQA
- SciBench

</details>

<details open>
<summary><b>Comprehensive Reasoning</b></summary>

- BBH

</details>
      </td>
      <td>
<details open>
<summary><b>Junior High, High School, University, Professional Examinations</b></summary>

- C-Eval
- AGIEval
- MMLU
- GAOKAO-Bench
- CMMLU
- ARC
- Xiezhi

</details>

<details open>
<summary><b>Medical Examinations</b></summary>

- CMB

</details>
      </td>
    </tr>
</td>
    </tr>
  </tbody>
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Understanding</b>
      </td>
      <td>
        <b>Long Context</b>
      </td>
      <td>
        <b>Safety</b>
      </td>
      <td>
        <b>Code</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
<details open>
<summary><b>Reading Comprehension</b></summary>

- C3
- CMRC
- DRCD
- MultiRC
- RACE
- DROP
- OpenBookQA
- SQuAD2.0

</details>

<details open>
<summary><b>Content Summary</b></summary>

- CSL
- LCSTS
- XSum
- SummScreen

</details>

<details open>
<summary><b>Content Analysis</b></summary>

- EPRSTMT
- LAMBADA
- TNEWS

</details>
      </td>
      <td>
<details open>
<summary><b>Long Context Understanding</b></summary>

- LEval
- LongBench
- GovReports
- NarrativeQA
- Qasper

</details>
      </td>
      <td>
<details open>
<summary><b>Safety</b></summary>

- CivilComments
- CrowsPairs
- CValues
- JigsawMultilingual
- TruthfulQA

</details>
<details open>
<summary><b>Robustness</b></summary>

- AdvGLUE

</details>
      </td>
      <td>
<details open>
<summary><b>Code</b></summary>

- HumanEval
- HumanEvalX
- MBPP
- APPs
- DS1000

</details>
      </td>
    </tr>
</td>
    </tr>
  </tbody>
</table>

## 📖 Model Support

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Open-source Models</b>
      </td>
      <td>
        <b>API Models</b>
      </td>
      <!-- <td>
        <b>Custom Models</b>
      </td> -->
    </tr>
    <tr valign="top">
      <td>

- [InternLM](https://github.com/InternLM/InternLM)
- [LLaMA](https://github.com/facebookresearch/llama)
- [LLaMA3](https://github.com/meta-llama/llama3)
- [Vicuna](https://github.com/lm-sys/FastChat)
- [Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Baichuan](https://github.com/baichuan-inc)
- [WizardLM](https://github.com/nlpxucan/WizardLM)
- [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)
- [ChatGLM3](https://github.com/THUDM/ChatGLM3-6B)
- [TigerBot](https://github.com/TigerResearch/TigerBot)
- [Qwen](https://github.com/QwenLM/Qwen)
- [BlueLM](https://github.com/vivo-ai-lab/BlueLM)
- [Gemma](https://huggingface.co/google/gemma-7b)
- ...

</td>
<td>

- OpenAI
- Gemini
- Claude
- ZhipuAI(ChatGLM)
- Baichuan
- ByteDance(YunQue)
- Huawei(PanGu)
- 360
- Baidu(ERNIEBot)
- MiniMax(ABAB-Chat)
- SenseTime(nova)
- Xunfei(Spark)
- ...

</td>

</tr>
  </tbody>
</table>

<p align="right"><a href="#top">🔝Back to top</a></p>

## 🔜 Roadmap

- [x] Subjective Evaluation
  - [ ] Release CompassArena
  - [x] Subjective evaluation.
- [x] Long-context
  - [x] Long-context evaluation with extensive datasets.
  - [ ] Long-context leaderboard.
- [x] Coding
  - [ ] Coding evaluation leaderboard.
  - [x] Non-python language evaluation service.
- [x] Agent
  - [ ] Support various agent frameworks.
  - [x] Evaluation of tool use of the LLMs.
- [x] Robustness
  - [x] Support various attack methods.

## 👷‍♂️ Contributing

We appreciate all contributions to improving OpenCompass. Please refer to the [contributing guideline](https://opencompass.readthedocs.io/en/latest/notes/contribution_guide.html) for the best practice.


<a href="https://github.com/open-compass/opencompass/graphs/contributors" target="_blank">
  <table>
    <tr>
      <th colspan="2">
        <br><img src="https://contrib.rocks/image?repo=open-compass/opencompass"><br><br>
      </th>
    </tr>
  </table>
</a>

## 🤝 Acknowledgements

Some code in this project is cited and modified from [OpenICL](https://github.com/Shark-NLP/OpenICL).

Some datasets and prompt implementations are modified from [chain-of-thought-hub](https://github.com/FranxYao/chain-of-thought-hub) and [instruct-eval](https://github.com/declare-lab/instruct-eval).

## 🖊️ Citation

```bibtex
@misc{2023opencompass,
    title={OpenCompass: A Universal Evaluation Platform for Foundation Models},
    author={OpenCompass Contributors},
    howpublished = {\url{https://github.com/open-compass/opencompass}},
    year={2023}
}
```

<p align="right"><a href="#top">🔝Back to top</a></p>

[github-contributors-link]: https://github.com/open-compass/opencompass/graphs/contributors
[github-contributors-shield]: https://img.shields.io/github/contributors/open-compass/opencompass?color=c4f042&labelColor=black&style=flat-square
[github-forks-link]: https://github.com/open-compass/opencompass/network/members
[github-forks-shield]: https://img.shields.io/github/forks/open-compass/opencompass?color=8ae8ff&labelColor=black&style=flat-square
[github-issues-link]: https://github.com/open-compass/opencompass/issues
[github-issues-shield]: https://img.shields.io/github/issues/open-compass/opencompass?color=ff80eb&labelColor=black&style=flat-square
[github-license-link]: https://github.com/open-compass/opencompass/blob/main/LICENSE
[github-license-shield]: https://img.shields.io/github/license/open-compass/opencompass?color=white&labelColor=black&style=flat-square
[github-release-link]: https://github.com/open-compass/opencompass/releases
[github-release-shield]: https://img.shields.io/github/v/release/open-compass/opencompass?color=369eff&labelColor=black&logo=github&style=flat-square
[github-releasedate-link]: https://github.com/open-compass/opencompass/releases
[github-releasedate-shield]: https://img.shields.io/github/release-date/open-compass/opencompass?labelColor=black&style=flat-square
[github-stars-link]: https://github.com/open-compass/opencompass/stargazers
[github-stars-shield]: https://img.shields.io/github/stars/open-compass/opencompass?color=ffcb47&labelColor=black&style=flat-square
[github-trending-shield]: https://trendshift.io/api/badge/repositories/6630
[github-trending-url]: https://trendshift.io/repositories/6630