Unverified commit 2b75b110, authored by shivalika-singh, committed by GitHub

Add Global MMLU Lite (#2567)



* add global mmlu lite

* add global mmlu lite

* fix bugs

* add task README.md

* Update README.md

* Update tasks README.md

* Update README.md

* update readme

---------
Co-authored-by: shivi <shivalikasingh95@gmail.com>
parent 8558b8d4
@@ -45,6 +45,7 @@
| [fld](fld/README.md) | Tasks involving free-form and directed dialogue understanding. | English |
| [french_bench](french_bench/README.md) | Set of tasks designed to assess language model performance in French. | French |
| [galician_bench](galician_bench/README.md) | Collection of tasks in Galician encompassing various evaluation areas. | Galician |
| [global_mmlu](global_mmlu/README.md) | Collection of culturally sensitive and culturally agnostic MMLU tasks in 15 languages with human translations or post-edits. | Multiple (15 languages) |
| [glue](glue/README.md) | General Language Understanding Evaluation benchmark to test broad language abilities. | English |
| [gpqa](gpqa/README.md) | Tasks designed for general public question answering and knowledge verification. | English |
| [gsm8k](gsm8k/README.md) | A benchmark of grade school math problems aimed at evaluating reasoning capabilities. | English |
# Global-MMLU
### Paper
Title: `Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation`
Abstract: [https://arxiv.org/abs/2412.03304](https://arxiv.org/abs/2412.03304)
Global-MMLU-Lite is a balanced collection of culturally sensitive and culturally agnostic MMLU tasks. It is designed for efficient evaluation of multilingual models in 15 languages (including English). Only languages with human translations and post-edits in the original [Global-MMLU](https://huggingface.co/datasets/CohereForAI/Global-MMLU) 🌍 dataset have been included in the lite version.
Homepage: [https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite](https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite)
### Citation
```bibtex
@misc{singh2024globalmmluunderstandingaddressing,
title={Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation},
author={Shivalika Singh and Angelika Romanou and Clémentine Fourrier and David I. Adelani and Jian Gang Ngui and Daniel Vila-Suero and Peerat Limkonchotiwat and Kelly Marchisio and Wei Qi Leong and Yosephine Susanto and Raymond Ng and Shayne Longpre and Wei-Yin Ko and Madeline Smith and Antoine Bosselut and Alice Oh and Andre F. T. Martins and Leshem Choshen and Daphne Ippolito and Enzo Ferrante and Marzieh Fadaee and Beyza Ermis and Sara Hooker},
year={2024},
eprint={2412.03304},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.03304},
}
```
tag:
- global_mmlu
dataset_path: CohereForAI/Global-MMLU-Lite
test_split: test
fewshot_split: dev
fewshot_config:
sampler: default
output_type: multiple_choice
doc_to_text: "{{question.strip()}}\nA. {{option_a}}\nB. {{option_b}}\nC. {{option_c}}\nD. {{option_d}}\nAnswer:"
doc_to_choice: ["A", "B", "C", "D"]
doc_to_target: answer
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
metadata:
version: 0.0
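The `doc_to_text` template in the config above renders each question followed by its four lettered options and an `Answer:` cue. A minimal Python sketch of that formatting, using a hypothetical sample document with the same field names (`question`, `option_a`…`option_d`, `answer`) as the dataset columns referenced in the config:

```python
def doc_to_text(doc: dict) -> str:
    # Mirrors the doc_to_text Jinja template in the config:
    # stripped question, options A-D, then the "Answer:" cue.
    return (
        f"{doc['question'].strip()}\n"
        f"A. {doc['option_a']}\n"
        f"B. {doc['option_b']}\n"
        f"C. {doc['option_c']}\n"
        f"D. {doc['option_d']}\n"
        "Answer:"
    )


# Hypothetical sample document, for illustration only.
sample = {
    "question": "What is 2 + 2? ",
    "option_a": "3",
    "option_b": "4",
    "option_c": "5",
    "option_d": "6",
    "answer": "B",
}
print(doc_to_text(sample))
```

Because `output_type` is `multiple_choice` with `doc_to_choice: ["A", "B", "C", "D"]`, the harness scores the model's likelihood of each letter continuation against the `answer` field.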
import yaml
# Languages with human translations or post-edits in Global-MMLU-Lite.
languages = [
"en",
"ar",
"fr",
"es",
"hi",
"de",
"id",
"it",
"ja",
"ko",
"pt",
"zh",
"yo",
"bn",
"sw",
]
def main() -> None:
    for language in languages:
        file_name = f"global_mmlu_{language}.yaml"
        try:
            # Open in "x" mode so an existing config raises
            # FileExistsError instead of being silently overwritten
            # ("w" mode would never trigger the except clause below).
            with open(file_name, "x") as f:
                f.write("# Generated by _generate_configs.py\n")
                yaml.dump(
                    {
                        "include": "_default_yaml",
                        "task": f"global_mmlu_{language}",
                        "dataset_name": language,
                    },
                    f,
                )
        except FileExistsError:
            pass
if __name__ == "__main__":
main()
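The script above emits one flat three-key YAML file per language, like the generated files that follow. As a sanity check, the files are simple enough to read back with a minimal stdlib-only parser (a sketch for these flat `key: value` files only, not a general YAML parser):

```python
def parse_simple_yaml(text: str) -> dict:
    # Minimal parser for the flat "key: value" generated configs;
    # comments and blank lines are skipped. Not general-purpose YAML.
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(": ")
        config[key] = value
    return config


# Contents of one generated file, e.g. global_mmlu_ar.yaml.
example = """\
# Generated by _generate_configs.py
dataset_name: ar
include: _default_yaml
task: global_mmlu_ar
"""
print(parse_simple_yaml(example))
```

Each per-language config only overrides `task` and `dataset_name`; everything else is inherited from `_default_yaml` via the `include` key.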
# Generated by _generate_configs.py
dataset_name: ar
include: _default_yaml
task: global_mmlu_ar
# Generated by _generate_configs.py
dataset_name: bn
include: _default_yaml
task: global_mmlu_bn
# Generated by _generate_configs.py
dataset_name: de
include: _default_yaml
task: global_mmlu_de
# Generated by _generate_configs.py
dataset_name: en
include: _default_yaml
task: global_mmlu_en
# Generated by _generate_configs.py
dataset_name: es
include: _default_yaml
task: global_mmlu_es
# Generated by _generate_configs.py
dataset_name: fr
include: _default_yaml
task: global_mmlu_fr
# Generated by _generate_configs.py
dataset_name: hi
include: _default_yaml
task: global_mmlu_hi
# Generated by _generate_configs.py
dataset_name: id
include: _default_yaml
task: global_mmlu_id
# Generated by _generate_configs.py
dataset_name: it
include: _default_yaml
task: global_mmlu_it
# Generated by _generate_configs.py
dataset_name: ja
include: _default_yaml
task: global_mmlu_ja
# Generated by _generate_configs.py
dataset_name: ko
include: _default_yaml
task: global_mmlu_ko
# Generated by _generate_configs.py
dataset_name: pt
include: _default_yaml
task: global_mmlu_pt
# Generated by _generate_configs.py
dataset_name: sw
include: _default_yaml
task: global_mmlu_sw
# Generated by _generate_configs.py
dataset_name: yo
include: _default_yaml
task: global_mmlu_yo
# Generated by _generate_configs.py
dataset_name: zh
include: _default_yaml
task: global_mmlu_zh