Commit abd17276 authored by Baber

Merge branch 'smolrefact' into tasklist

# Conflicts:
#	lm_eval/__main__.py
#	lm_eval/api/group.py
#	lm_eval/api/task.py
#	lm_eval/evaluator_utils.py
#	lm_eval/tasks/__init__.py
#	lm_eval/utils.py
#	pyproject.toml
parents 00afd536 70314843
```diff
@@ -71,7 +71,7 @@ def list_fewshot_samples() -> list[dict]:
     ]


-def process_results(doc: dict, results: List[str]) -> Dict[str, int]:
+def process_results(doc: dict, results: list[str]) -> dict[str, int]:
     candidates = results[0]

     unnormalized_answer = get_unnormalized_answer(candidates)
@@ -83,14 +83,17 @@ def process_results(doc: dict, results: List[str]) -> Dict[str, int]:
         retval = 0

     # math_verify
-    res = verify(parse(doc["answer"]), parse(candidates))
-    mathval = 1 if res else 0
+    _mvres = verify(
+        gold=parse(doc["solution"]),
+        target=parse(candidates),
+    )
+    mathval = 1 if _mvres else 0

-    results = {
+    res = {
         "exact_match": retval,
         "math_verify": mathval,
     }
-    return results
+    return res


 def last_boxed_only_string(string: str) -> Optional[str]:
```
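For context, `last_boxed_only_string` (the helper that follows this hunk) is conventionally a brace-matching scan for the last `\boxed{...}` span in a completion. A minimal sketch under that assumption — not necessarily this repository's exact implementation:

```python
def last_boxed_only_string(string: str):
    """Return the last `\\boxed{...}` substring of `string`, or None."""
    idx = string.rfind("\\boxed{")
    if idx < 0:
        return None
    depth = 0
    for i in range(idx, len(string)):
        if string[i] == "{":
            depth += 1
        elif string[i] == "}":
            depth -= 1
            if depth == 0:
                # Closed the brace opened by \boxed{ — return the full span.
                return string[idx : i + 1]
    return None  # unbalanced braces
```

Brace counting (rather than a regex) is needed because the boxed answer may itself contain nested braces, e.g. `\boxed{\frac{1}{2}}`.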
@@ -36,56 +36,56 @@ Homepage: `https://github.com/facebookresearch/MLQA`

#### Tasks

Tasks of the form `mlqa_context-lang_question-lang`

* `mlqa_ar_ar`
* `mlqa_ar_de`
* `mlqa_ar_vi`
* `mlqa_ar_zh`
* `mlqa_ar_en`
* `mlqa_ar_es`
* `mlqa_ar_hi`
* `mlqa_de_ar`
* `mlqa_de_de`
* `mlqa_de_vi`
* `mlqa_de_zh`
* `mlqa_de_en`
* `mlqa_de_es`
* `mlqa_de_hi`
* `mlqa_vi_ar`
* `mlqa_vi_de`
* `mlqa_vi_vi`
* `mlqa_vi_zh`
* `mlqa_vi_en`
* `mlqa_vi_es`
* `mlqa_vi_hi`
* `mlqa_zh_ar`
* `mlqa_zh_de`
* `mlqa_zh_vi`
* `mlqa_zh_zh`
* `mlqa_zh_en`
* `mlqa_zh_es`
* `mlqa_zh_hi`
* `mlqa_en_ar`
* `mlqa_en_de`
* `mlqa_en_vi`
* `mlqa_en_zh`
* `mlqa_en_en`
* `mlqa_en_es`
* `mlqa_en_hi`
* `mlqa_es_ar`
* `mlqa_es_de`
* `mlqa_es_vi`
* `mlqa_es_zh`
* `mlqa_es_en`
* `mlqa_es_es`
* `mlqa_es_hi`
* `mlqa_hi_ar`
* `mlqa_hi_de`
* `mlqa_hi_vi`
* `mlqa_hi_zh`
* `mlqa_hi_en`
* `mlqa_hi_es`
* `mlqa_hi_hi`
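The 49 task names above are simply the cross-product of MLQA's seven languages over (context language, question language). For illustration, the list can be regenerated with:

```python
# MLQA's 7 languages; a task exists for every (context, question) pair.
LANGS = ["ar", "de", "vi", "zh", "en", "es", "hi"]
TASKS = [f"mlqa_{ctx}_{q}" for ctx in LANGS for q in LANGS]
```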
### Checklist
# MMLU-Redux 2.0 Spanish
### Paper
Title: `Are We Done with MMLU?`
Abstract: `https://arxiv.org/pdf/2406.04127`
`The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more, in Spanish`
Homepage: `https://huggingface.co/datasets/edinburgh-dawg/mmlu-redux-2.0`
### Citation
```bibtex
@misc{edinburgh2024mmlu,
title={Are We Done with MMLU?},
author={Aryo Pradipta Gema and Joshua Ong Jun Leang and Giwon Hong and Alessio Devoto and
Alberto Carlo Maria Mancino and Rohit Saxena and Xuanli He and Yu Zhao and Xiaotang Du and
Mohammad Reza Ghasemi Madani and Claire Barale and Robert McHardy and Joshua Harris and
Jean Kaddour and Emile van Krieken and Pasquale Minervini},
year={2025},
eprint={2406.04127},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups, Tags, and Tasks
#### Groups
- `stem`
- `other`
- `social sciences`
- `humanities`
#### Tasks
- `mmlu_stem_generative_spanish`
- `mmlu_other_generative_spanish`
- `mmlu_social_sciences_generative_spanish`
- `mmlu_humanities_generative_spanish`
### Checklist
For adding novel benchmarks/datasets to the library:
- [x] Is the task an existing benchmark in the literature?
- [x] Have you referenced the original paper that introduced the task?
- [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
- [ ] Is the "Main" variant of this task clearly denoted?
- [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
- [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
ver 1 (PR #2705): first implementation
dataset_path: "amias-mx/mmlu-redux-2.0-spanish"
test_split: test
dataset_kwargs:
trust_remote_code: true
output_type: generate_until
doc_to_text: "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nPor favor, responde con la letra correcta (A, B, C o D) sin absolutamente nada adicional, solo la letra correcta:"
doc_to_target: "{{['A','B','C','D'][answer]}}"
target_delimiter: ":"
generation_kwargs:
until:
- "</s>"
metric_list:
- metric: exact_match
aggregation: mean
higher_is_better: true
ignore_case: true
ignore_punctuation: true
filter_list:
- name: default
filter:
- function: regex
regex_pattern: "([ABCD])"
- function: take_first
metadata:
version: 3.0
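To illustrate what the template and filter chain above do (a sketch of the behavior, not lm-evaluation-harness internals): `doc_to_target` maps the integer `answer` field to a letter, and the `regex` + `take_first` filters pull the first A–D letter out of the model's generation before `exact_match` compares the two.

```python
import re

def doc_to_target(doc: dict) -> str:
    # Mirrors the Jinja template: {{['A','B','C','D'][answer]}}
    return ["A", "B", "C", "D"][doc["answer"]]

def apply_filters(response: str) -> str:
    # regex_pattern "([ABCD])" followed by take_first.
    matches = re.findall(r"([ABCD])", response)
    return matches[0] if matches else ""

def exact_match(pred: str, gold: str) -> int:
    # ignore_case / ignore_punctuation reduce here to a casefolded strip-compare.
    return int(pred.strip().casefold() == gold.strip().casefold())
```

For example, with `doc = {"answer": 2}` and the generation `"Respuesta: C, sin duda"`, the filter extracts `"C"` and the metric scores 1.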
group: mmlu_redux_spanish_generative
group_alias: mmlu_redux_spanish (generative)
task:
- group: stem_spanish
task:
- mmlu_stem_generative_spanish
aggregate_metric_list:
- metric: exact_match
weight_by_size: true
- group: other_spanish
task:
- mmlu_other_generative_spanish
aggregate_metric_list:
- metric: exact_match
weight_by_size: true
- group: social_sciences_spanish
task:
- mmlu_social_sciences_generative_spanish
aggregate_metric_list:
- metric: exact_match
weight_by_size: true
# - group: humanities_spanish
# task:
# - mmlu_humanities_generative_spanish
# aggregate_metric_list:
# - metric: exact_match
# weight_by_size: true
aggregate_metric_list:
- aggregation: mean
metric: exact_match
weight_by_size: true
metadata:
version: 3
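`weight_by_size: true` in the aggregation blocks above means a group's score is a document-count-weighted mean of its subtasks' `exact_match`, not a plain average. Sketched under that assumption:

```python
def aggregate_weighted(scores, sizes):
    """Size-weighted mean: each subtask contributes in proportion to its doc count."""
    total = sum(sizes)
    return sum(score * n for score, n in zip(scores, sizes)) / total
```

So a subtask with 3x the documents pulls the group score 3x as hard: `aggregate_weighted([1.0, 0.0], [3, 1])` gives 0.75, where an unweighted mean would give 0.5.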
"dataset_name": "abstract_algebra"
"description":
"The following are multiple choice questions (with answers) about abstract\
\ algebra.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_abstract_algebra_generative_spanish"
"task_alias": "abstract_algebra_spanish"
"dataset_name": "anatomy"
"description":
"The following are multiple choice questions (with answers) about anatomy.\n\
\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_anatomy_generative_spanish"
"task_alias": "anatomy_spanish"
"dataset_name": "astronomy"
"description":
"The following are multiple choice questions (with answers) about astronomy.\n\
\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_astronomy_generative_spanish"
"task_alias": "astronomy_spanish"
"dataset_name": "business_ethics"
"description":
"The following are multiple choice questions (with answers) about business\
\ ethics.\n\n"
"tag": "mmlu_other_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_business_ethics_generative_spanish"
"task_alias": "business_ethics_spanish"
"dataset_name": "clinical_knowledge"
"description":
"The following are multiple choice questions (with answers) about clinical\
\ knowledge.\n\n"
"tag": "mmlu_other_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_clinical_knowledge_generative_spanish"
"task_alias": "clinical_knowledge_spanish"
"dataset_name": "college_biology"
"description":
"The following are multiple choice questions (with answers) about college\
\ biology.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_biology_generative_spanish"
"task_alias": "college_biology_spanish"
"dataset_name": "college_chemistry"
"description":
"The following are multiple choice questions (with answers) about college\
\ chemistry.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_chemistry_generative_spanish"
"task_alias": "college_chemistry_spanish"
"dataset_name": "college_computer_science"
"description":
"The following are multiple choice questions (with answers) about college\
\ computer science.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_computer_science_generative_spanish"
"task_alias": "college_computer_science_spanish"
"dataset_name": "college_mathematics"
"description":
"The following are multiple choice questions (with answers) about college\
\ mathematics.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_mathematics_generative_spanish"
"task_alias": "college_mathematics_spanish"
"dataset_name": "college_medicine"
"description":
"The following are multiple choice questions (with answers) about college\
\ medicine.\n\n"
"tag": "mmlu_other_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_medicine_generative_spanish"
"task_alias": "college_medicine_spanish"
"dataset_name": "college_physics"
"description":
"The following are multiple choice questions (with answers) about college\
\ physics.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_college_physics_generative_spanish"
"task_alias": "college_physics_spanish"
"dataset_name": "computer_security"
"description":
"The following are multiple choice questions (with answers) about computer\
\ security.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_computer_security_generative_spanish"
"task_alias": "computer_security_spanish"
"dataset_name": "conceptual_physics"
"description":
"The following are multiple choice questions (with answers) about conceptual\
\ physics.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_conceptual_physics_generative_spanish"
"task_alias": "conceptual_physics_spanish"
"dataset_name": "econometrics"
"description":
"The following are multiple choice questions (with answers) about econometrics.\n\
\n"
"tag": "mmlu_social_sciences_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_econometrics_generative_spanish"
"task_alias": "econometrics_spanish"
"dataset_name": "electrical_engineering"
"description":
"The following are multiple choice questions (with answers) about electrical\
\ engineering.\n\n"
"tag": "mmlu_stem_generative_spanish"
"include": "_default_template_spanish_yaml"
"task": "mmlu_electrical_engineering_generative_spanish"
"task_alias": "electrical_engineering_spanish"