Unverified Commit fec9dde7 authored by Luis Cosio, committed by GitHub

feat: Add mmlu-redux and its Spanish translation as generative task definitions (#2705)



* Added benchmark

* Added more testing

* Added task definition for mmlu_redux and mmlu_redux_spanish

* Add MMLU Redux English and Spanish tasks with YAML fixes and READMEs

* Add remaining MMLU Redux YAMLs and updated tasks README

* Add MMLU Redux English and Spanish tasks with YAML fixes and READMEs

* Add MMLU Redux changes from pr-2705

* Resolve pre-commit hook and pytest overlapping group issues by adding mmlu_redux_spanish task entries and unique subgroup names

* Enhance retry logic to prevent 429 errors when using the Hugging Face API for tests; apply pre-commit fixes

* Revert Python test changes and comment out one task group to avoid Hugging Face rate limits and task failures

---------
Co-authored-by: CT-6282 <ricardo.godric@hotmail.com>
parent 368275f3
@@ -16,7 +16,7 @@ provided to the individual README.md files for each subfolder.
| [arabic_leaderboard_complete](arabic_leaderboard_complete/README.md) | A full version of the tasks in the Open Arabic LLM Leaderboard, focusing on the evaluation of models that reflect the characteristics of Arabic language understanding and comprehension, culture, and heritage. Note that some of these tasks are machine-translated. | Arabic (Some MT) |
| [arabic_leaderboard_light](arabic_leaderboard_light/README.md) | A light version of the tasks in the Open Arabic LLM Leaderboard (i.e., 10% samples of the test set in the original benchmarks), focusing on the evaluation of models that reflect the characteristics of Arabic language understanding and comprehension, culture, and heritage. Note that some of these tasks are machine-translated. | Arabic (Some MT) |
| [arabicmmlu](arabicmmlu/README.md) | Localized Arabic version of MMLU with multiple-choice questions from 40 subjects. | Arabic |
| [ArabCulture](arab_culture/README.md) | Benchmark for evaluating models' commonsense cultural knowledge across 13 different Arab countries. | Arabic |
| [AraDICE](aradice/README.md) | A collection of multiple tasks carefully designed to evaluate dialectal and cultural capabilities in large language models (LLMs). | Arabic |
| [arc](arc/README.md) | Tasks involving complex reasoning over a diverse set of questions. | English |
| [arithmetic](arithmetic/README.md) | Tasks involving numerical computations and arithmetic reasoning. | English |
@@ -41,12 +41,12 @@ provided to the individual README.md files for each subfolder.
| [cmmlu](cmmlu/README.md) | Multi-subject multiple choice question tasks for comprehensive academic assessment. | Chinese |
| code_x_glue | Tasks that involve understanding and generating code across multiple programming languages. | Go, Java, JS, PHP, Python, Ruby |
| [commonsense_qa](commonsense_qa/README.md) | CommonsenseQA, a multiple-choice QA dataset for measuring commonsense knowledge. | English |
| [copal_id](copal_id/README.md) | Indonesian causal commonsense reasoning dataset that captures local nuances. | Indonesian |
| [coqa](coqa/README.md) | Conversational question answering tasks to test dialog understanding. | English |
| [crows_pairs](crows_pairs/README.md) | Tasks designed to test model biases in various sociodemographic groups. | English, French |
| [click](click/README.md) | A benchmark dataset of Cultural and Linguistic Intelligence in Korean (CLIcK), comprising 1,995 QA pairs sourced from official Korean exams and textbooks to test Korean cultural and linguistic knowledge. | Korean |
| csatqa | Tasks related to SAT and other standardized testing questions for academic assessment. | Korean |
| [darija_bench](darija_bench/README.md) | Traditional NLP tasks (Translation, Summarization, etc.) for Moroccan Darija. | Moroccan Darija (some MT) |
| [darijahellaswag](darijahellaswag/README.md) | Moroccan Darija version of HellaSwag. | Moroccan Darija (MT) |
| [darijammlu](darijammlu/README.md) | Multiple-choice QA in Moroccan Darija (an Arabic dialect). | Moroccan Darija (MT) |
| [discrim_eval](discrim_eval/README.md) | Prompts for binary decisions covering 70 scenarios to evaluate demographic bias. | English |
@@ -58,7 +58,7 @@ provided to the individual README.md files for each subfolder.
| [eus_exams](eus_exams/README.md) | Tasks based on various professional and academic exams in the Basque language. | Basque |
| [eus_proficiency](eus_proficiency/README.md) | Tasks designed to test proficiency in the Basque language across various topics. | Basque |
| [eus_reading](eus_reading/README.md) | Reading comprehension tasks specifically designed for the Basque language. | Basque |
| [eus_trivia](eus_trivia/README.md) | Trivia and knowledge testing tasks in the Basque language. | Basque |
| [evalita_LLM](evalita_llm/README.md) | A native Italian benchmark with diverse task formats and multiple prompts. | Italian |
| [fda](fda/README.md) | Tasks for extracting key-value pairs from FDA documents to test information extraction. | English |
| [fld](fld/README.md) | Tasks involving free-form and directed dialogue understanding. | English |
@@ -84,7 +84,7 @@ provided to the individual README.md files for each subfolder.
| [jsonschema_bench](jsonschema_bench/README.md) | Evaluate the ability of LLMs to generate JSON objects that conform to a given JSON schema, including API, configuration files, and other structured data formats. | JSON |
| [kbl](kbl/README.md) | Korean Benchmark for Legal Language Understanding. | Korean |
| [kmmlu](kmmlu/README.md) | Knowledge-based multi-subject multiple choice questions for academic evaluation. | Korean |
| [kobest](kobest/README.md) | A collection of tasks designed to evaluate understanding in the Korean language. | Korean |
| [kormedmcqa](kormedmcqa/README.md) | Medical question answering tasks in Korean to test specialized domain knowledge. | Korean |
| [lambada](lambada/README.md) | Tasks designed to predict the endings of text passages, testing language prediction skills. | English |
| [lambada_cloze](lambada_cloze/README.md) | Cloze-style LAMBADA dataset. | English |
@@ -115,6 +115,8 @@ provided to the individual README.md files for each subfolder.
| [minerva_math](minerva_math/README.md) | Mathematics-focused tasks requiring numerical reasoning and problem-solving skills. | English |
| [mlqa](mlqa/README.md) | MultiLingual Question Answering benchmark dataset for evaluating cross-lingual question answering performance. | English, Arabic, German, Spanish, Hindi, Vietnamese, Simplified Chinese |
| [mmlu](mmlu/README.md) | Massive Multitask Language Understanding benchmark for broad domain language evaluation. Several variants are supported. | English |
| [mmlu_redux](mmlu-redux/README.md) | Refined Massive Multitask Language Understanding benchmark for broad domain evaluation with improved data quality. | English |
| [mmlu_redux_spanish](mmlu-redux-spanish/README.md) | Spanish translation of the refined Massive Multitask Language Understanding benchmark for broad domain evaluation with improved data quality. | Spanish |
| [mmlu_pro](mmlu_pro/README.md) | A refined set of MMLU, integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options. | English |
| [mmlu-pro-plus](mmlu-pro-plus/README.md) | A new test set for evaluating shortcut learning and higher-order reasoning of LLMs. | English |
| [mmlu_prox](mmlu_prox/README.md) | A multilingual benchmark that extends MMLU-Pro to multiple typologically diverse languages with human validation. | English, Japanese, Chinese, Korean, French, German, Spanish, Portuguese, Zulu, Swahili, Wolof, Yoruba, Thai, Arabic, Hindi, Bengali, Serbian, Hungarian, Vietnamese, Czech, Marathi, Afrikaans, Nepali, Telugu, Urdu, Russian, Indonesian, Italian, Ukrainian |
@@ -187,6 +189,6 @@ provided to the individual README.md files for each subfolder.
## Multimodal Tasks
| Task Family | Description | Modality |
|------------------------------|---------------------------------------------------------------------------------------------------------|-------------|
| [chartqa](chartqa/README.md) | A benchmark for question answering about charts that requires both visual and logical reasoning. | Image, Text |
| [mmmu](mmmu/README.md) | Evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge. | Image, Text |
# MMLU-Redux (Spanish)
### Paper
Title: `Are We Done with MMLU?`
Abstract: `https://arxiv.org/pdf/2406.04127`
`The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more, in Spanish`
Homepage: `https://huggingface.co/datasets/edinburgh-dawg/mmlu-redux-2.0`
### Citation
```bibtex
@misc{edinburgh2024mmlu,
title={Are We Done with MMLU?},
author={Aryo Pradipta Gema and Joshua Ong Jun Leang and Giwon Hong and Alessio Devoto and
Alberto Carlo Maria Mancino and Rohit Saxena and Xuanli He and Yu Zhao and Xiaotang Du and
Mohammad Reza Ghasemi Madani and Claire Barale and Robert McHardy and Joshua Harris and
Jean Kaddour and Emile van Krieken and Pasquale Minervini},
year={2025},
eprint={2406.04127},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups, Tags, and Tasks
#### Groups
- `stem_spanish`
- `other_spanish`
- `social sciences_spanish`
- `humanities_spanish`
#### Tasks
- `mmlu_stem_generative_spanish`
- `mmlu_other_generative_spanish`
- `mmlu_social_sciences_generative_spanish`
- `mmlu_humanities_generative_spanish`
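
Assuming an environment where lm-evaluation-harness is installed and the YAML definitions below are registered, the whole Spanish generative group can be run through the harness's Python API. A minimal sketch (the model checkpoint is only a placeholder, not a recommendation):

```python
import lm_eval

# Sketch: score a Hugging Face model on the Spanish generative group
# defined in the YAMLs below. "EleutherAI/pythia-160m" is a placeholder;
# substitute the model you actually want to evaluate.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["mmlu_redux_spanish_generative"],
    batch_size=8,
)
print(results["results"])  # per-task and per-group exact_match scores
```

Individual subtasks such as `mmlu_stem_generative_spanish` can be passed in `tasks` the same way.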
### Checklist
For adding novel benchmarks/datasets to the library:
- [x] Is the task an existing benchmark in the literature?
- [x] Have you referenced the original paper that introduced the task?
- [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
- [ ] Is the "Main" variant of this task clearly denoted?
- [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
- [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
ver 1 (PR #2705): first implementation
dataset_path: "amias-mx/mmlu-redux-2.0-spanish"
test_split: test
dataset_kwargs:
  trust_remote_code: true
output_type: generate_until
doc_to_text: "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nPor favor, responde con la letra correcta (A, B, C o D) sin absolutamente nada adicional, solo la letra correcta:"
doc_to_target: "{{['A','B','C','D'][answer]}}"
target_delimiter: ":"
generation_kwargs:
  until:
    - "</s>"
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: true
filter_list:
  - name: default
    filter:
      - function: regex
        regex_pattern: "([ABCD])"
      - function: take_first
metadata:
  version: 3.0
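
To make the template concrete: `doc_to_text` is a Jinja2 template rendered against each document, the model generates freely until `</s>`, and the `regex` plus `take_first` filters pull the first standalone letter A, B, C, or D out of the completion before `exact_match` is scored. A self-contained sketch of that flow, with an invented sample document:

```python
import re
from jinja2 import Template

# Hypothetical document in the dataset's schema (question/choices/answer).
doc = {
    "question": "¿Cuál es el elemento neutro de la suma?",
    "choices": ["1", "0", "-1", "No existe"],
    "answer": 1,  # index into choices -> "B"
}

# Render the prompt exactly as doc_to_text specifies.
prompt = Template(
    "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\n"
    "C. {{choices[2]}}\nD. {{choices[3]}}\n"
    "Por favor, responde con la letra correcta (A, B, C o D) "
    "sin absolutamente nada adicional, solo la letra correcta:"
).render(**doc)
print(prompt)

# Filter stage, mirroring filter_list above: regex then take_first.
model_output = "La respuesta correcta es B."
match = re.search(r"([ABCD])", model_output)
prediction = match.group(1) if match else ""

target = ["A", "B", "C", "D"][doc["answer"]]
print(prediction == target)  # exact_match for this one document
```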
group: mmlu_redux_spanish_generative
group_alias: mmlu_redux_spanish (generative)
task:
  - group: stem_spanish
    task:
      - mmlu_stem_generative_spanish
    aggregate_metric_list:
      - metric: exact_match
        weight_by_size: true
  - group: other_spanish
    task:
      - mmlu_other_generative_spanish
    aggregate_metric_list:
      - metric: exact_match
        weight_by_size: true
  - group: social sciences_spanish
    task:
      - mmlu_social_sciences_generative_spanish
    aggregate_metric_list:
      - metric: exact_match
        weight_by_size: true
  # - group: humanities_spanish
  #   task:
  #     - mmlu_humanities_generative_spanish
  #   aggregate_metric_list:
  #     - metric: exact_match
  #       weight_by_size: true
aggregate_metric_list:
  - aggregation: mean
    metric: exact_match
    weight_by_size: true
metadata:
  version: 3
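
Note that `weight_by_size: true` makes each aggregate a document-count-weighted mean over its subtasks, rather than a plain mean of per-task scores, so larger subjects pull the group score proportionally harder. A toy illustration with made-up numbers:

```python
# Hypothetical per-subtask results: (exact_match, number of test documents).
subtask_results = {
    "mmlu_abstract_algebra_generative_spanish": (0.31, 100),
    "mmlu_astronomy_generative_spanish": (0.52, 152),
    "mmlu_college_physics_generative_spanish": (0.28, 102),
}

total_docs = sum(n for _, n in subtask_results.values())
# Size-weighted mean, as weight_by_size: true requests.
weighted = sum(score * n for score, n in subtask_results.values()) / total_docs
# Unweighted mean, for contrast (what weight_by_size: false would give).
unweighted = sum(score for score, _ in subtask_results.values()) / len(subtask_results)

print(f"weighted={weighted:.4f} unweighted={unweighted:.4f}")
```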
"dataset_name": "abstract_algebra"
"description":
"The following are multiple choice questions (with answers) about abstract\
\ algebra.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_abstract_algebra_generative_spanish"
"task_alias": "abstract_algebra_spanish"
"dataset_name": "anatomy"
"description":
"The following are multiple choice questions (with answers) about anatomy.\n\
\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_anatomy_generative_spanish"
"task_alias": "anatomy_spanish"
"dataset_name": "astronomy"
"description":
"The following are multiple choice questions (with answers) about astronomy.\n\
\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_astronomy_generative_spanish"
"task_alias": "astronomy_spanish"
"dataset_name": "business_ethics"
"description":
"The following are multiple choice questions (with answers) about business\
\ ethics.\n\n"
"tag": "mmlu_other_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_business_ethics_generative_spanish"
"task_alias": "business_ethics_spanish"
"dataset_name": "clinical_knowledge"
"description":
"The following are multiple choice questions (with answers) about clinical\
\ knowledge.\n\n"
"tag": "mmlu_other_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_clinical_knowledge_generative_spanish"
"task_alias": "clinical_knowledge_spanish"
"dataset_name": "college_biology"
"description":
"The following are multiple choice questions (with answers) about college\
\ biology.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_biology_generative_spanish"
"task_alias": "college_biology_spanish"
"dataset_name": "college_chemistry"
"description":
"The following are multiple choice questions (with answers) about college\
\ chemistry.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_chemistry_generative_spanish"
"task_alias": "college_chemistry_spanish"
"dataset_name": "college_computer_science"
"description":
"The following are multiple choice questions (with answers) about college\
\ computer science.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_computer_science_generative_spanish"
"task_alias": "college_computer_science_spanish"
"dataset_name": "college_mathematics"
"description":
"The following are multiple choice questions (with answers) about college\
\ mathematics.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_mathematics_generative_spanish"
"task_alias": "college_mathematics_spanish"
"dataset_name": "college_medicine"
"description":
"The following are multiple choice questions (with answers) about college\
\ medicine.\n\n"
"tag": "mmlu_other_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_medicine_generative_spanish"
"task_alias": "college_medicine_spanish"
"dataset_name": "college_physics"
"description":
"The following are multiple choice questions (with answers) about college\
\ physics.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_physics_generative_spanish"
"task_alias": "college_physics_spanish"
"dataset_name": "computer_security"
"description":
"The following are multiple choice questions (with answers) about computer\
\ security.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_computer_security_generative_spanish"
"task_alias": "computer_security_spanish"
"dataset_name": "conceptual_physics"
"description":
"The following are multiple choice questions (with answers) about conceptual\
\ physics.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_conceptual_physics_generative_spanish"
"task_alias": "conceptual_physics_spanish"
"dataset_name": "econometrics"
"description":
"The following are multiple choice questions (with answers) about econometrics.\n\
\n"
"tag": "mmlu_social_sciences_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_econometrics_generative_spanish"
"task_alias": "econometrics_spanish"
"dataset_name": "electrical_engineering"
"description":
"The following are multiple choice questions (with answers) about electrical\
\ engineering.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_electrical_engineering_generative_spanish"
"task_alias": "electrical_engineering_spanish"
"dataset_name": "elementary_mathematics"
"description":
"The following are multiple choice questions (with answers) about elementary\
\ mathematics.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_elementary_mathematics_generative_spanish"
"task_alias": "elementary_mathematics_spanish"