Unverified commit da211969 authored by Jess, committed by GitHub

Merge branch 'EleutherAI:main' into main

parents 1b97e487 801322e0
"dataset_name": "professional_law"
"description": "فم بعملية التقييم في مجال العلوم الانسانية \n\n"
"include": "_default_template_yaml"
"task": "ammlu_professional_law"
"dataset_name": "professional_medicine"
"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
"include": "_default_template_yaml"
"task": "ammlu_professional_medicine"
"dataset_name": "professional_psychology"
"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
"include": "_default_template_yaml"
"task": "ammlu_professional_psychology"
"dataset_name": "public_relations"
"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
"include": "_default_template_yaml"
"task": "ammlu_public_relations"
"dataset_name": "security_studies"
"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
"include": "_default_template_yaml"
"task": "ammlu_security_studies"
"dataset_name": "sociology"
"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
"include": "_default_template_yaml"
"task": "ammlu_sociology"
"dataset_name": "us_foreign_policy"
"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
"include": "_default_template_yaml"
"task": "ammlu_us_foreign_policy"
"dataset_name": "virology"
"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
"include": "_default_template_yaml"
"task": "ammlu_virology"
"dataset_name": "world_religions"
"description": "فم بعملية التقييم في مجال العلوم الانسانية \n\n"
"include": "_default_template_yaml"
"task": "ammlu_world_religions"
# ArabicMMLU
### Paper
Title: ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic
Abstract: https://arxiv.org/abs/2402.12840
The focus of language model evaluation has transitioned towards reasoning and knowledge-intensive tasks, driven by advancements in pretraining large models. While state-of-the-art models are partially trained on large Arabic texts, evaluating their performance in Arabic remains challenging due to the limited availability of relevant datasets. To bridge this gap, we present ArabicMMLU, the first multi-task language understanding benchmark for the Arabic language, sourced from school exams across diverse educational levels in different countries spanning North Africa, the Levant, and the Gulf regions. Our data comprises 40 tasks and 14,575 multiple-choice questions in Modern Standard Arabic (MSA), and is carefully constructed by collaborating with native speakers in the region. Our comprehensive evaluations of 35 models reveal substantial room for improvement, particularly among the best open-source models. Notably, BLOOMZ, mT0, LLama2, and Falcon struggle to achieve a score of 50%, while even the top-performing Arabic-centric model only achieves a score of 62.3%.
The authors of the paper conducted studies by varying the language of the initial prompt and answer keys between English and Arabic. However, they set English initial prompts and answer keys as the standard, which is the version implemented in this task.
Homepage: https://github.com/mbzuai-nlp/ArabicMMLU
### Citation
```
@misc{koto2024arabicmmlu,
      title={ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic},
      author={Fajri Koto and Haonan Li and Sara Shatnawi and Jad Doughman and Abdelrahman Boda Sadallah and Aisha Alraeesi and Khalid Almubarak and Zaid Alyafeai and Neha Sengupta and Shady Shehata and Nizar Habash and Preslav Nakov and Timothy Baldwin},
      year={2024},
      eprint={2402.12840},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* `arabicmmlu`: evaluates all ArabicMMLU tasks.
* `arabicmmlu_stem`: evaluates STEM ArabicMMLU tasks.
* `arabicmmlu_social_science`: evaluates social science ArabicMMLU tasks.
* `arabicmmlu_humanities`: evaluates humanities ArabicMMLU tasks.
* `arabicmmlu_language`: evaluates Arabic language ArabicMMLU tasks.
* `arabicmmlu_other`: evaluates other ArabicMMLU tasks.
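Assuming a standard lm-evaluation-harness installation, any of the groups above can be passed to the usual CLI. A sketch (the model name and batch size are illustrative placeholders, not part of this config):

```shell
# Evaluate only the STEM subset of ArabicMMLU on a Hugging Face model.
lm_eval --model hf \
    --model_args pretrained=core42/jais-13b \
    --tasks arabicmmlu_stem \
    --batch_size 8
```

Passing `--tasks arabicmmlu` instead runs every subject group listed above.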
dataset_path: yazeed7/ArabicMMLU
test_split: test
fewshot_split: dev
fewshot_config:
  sampler: first_n
output_type: multiple_choice
doc_to_text: !function utils.doc_to_text
doc_to_choice: !function utils.doc_to_choice
doc_to_target: "Answer Key"
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
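The `!function utils.doc_to_text` and `!function utils.doc_to_choice` hooks above point at Python helpers that are not shown on this page. A minimal sketch of what such helpers might look like — the field names other than `Answer Key` (`Question`, `Option 1` … `Option 5`) are assumptions about the dataset schema, and the English prompt framing follows the paper's default setting described in the README:

```python
def doc_to_choice(doc):
    """Return the letter keys for the options actually present in this document."""
    letters = ["A", "B", "C", "D", "E"]
    options = [doc.get(f"Option {i}") for i in range(1, 6)]
    # Keep a letter only when its option text is non-empty.
    return [letter for letter, opt in zip(letters, options) if opt]


def doc_to_text(doc):
    """Build an English-framed multiple-choice prompt for one document."""
    lines = [f"Question: {doc['Question']}"]
    for letter, i in zip(["A", "B", "C", "D", "E"], range(1, 6)):
        opt = doc.get(f"Option {i}")
        if opt:
            lines.append(f"{letter}. {opt}")
    lines.append("Answer:")
    return "\n".join(lines)


# Hypothetical document with three answer options.
sample = {
    "Question": "ما هي عاصمة فرنسا؟",
    "Option 1": "باريس",
    "Option 2": "لندن",
    "Option 3": "مدريد",
    "Option 4": None,
    "Option 5": None,
    "Answer Key": "A",
}
print(doc_to_choice(sample))  # ['A', 'B', 'C']
print(doc_to_text(sample))
```

With `doc_to_target: "Answer Key"`, the harness then scores the choice whose letter matches that field.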
"""
Take in a YAML, and output all "other" splits with this YAML
"""
import argparse
import logging
import os
import yaml
from tqdm import tqdm
eval_logger = logging.getLogger("lm-eval")
SUBJECTS = {
"Driving Test": "other",
"High Geography": "social_science",
"High History": "humanities",
"Islamic Studies": "humanities",
"Univ Accounting": "social_science",
"Primary General Knowledge": "other",
"Univ Political Science": "social_science",
"Primary Math": "stem",
"Middle General Knowledge": "other",
"High Biology": "stem",
"Primary Natural Science": "stem",
"High Economics": "social_science",
"Middle Natural Science": "stem",
"Middle Geography": "social_science",
"Primary Social Science": "social_science",
"Middle Computer Science": "stem",
"Middle Islamic Studies": "humanities",
"Primary Computer Science": "stem",
"High Physics": "stem",
"Middle Social Science": "social_science",
"Middle Civics": "social_science",
"High Computer Science": "stem",
"General Knowledge": "other",
"High Civics": "social_science",
"Prof Law": "humanities",
"High Islamic Studies": "humanities",
"Primary Arabic Language": "language",
"High Arabic Language": "language",
"Arabic Language (Grammar)": "language",
"Primary History": "humanities",
"Middle History": "humanities",
"Univ Economics": "social_science",
"Arabic Language (General)": "language",
"Univ Computer Science": "stem",
"Primary Islamic Studies": "humanities",
"Primary Geography": "social_science",
"High Philosophy": "humanities",
"Middle Arabic Language": "language",
"Middle Economics": "social_science",
"Univ Management": "other",
}
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--base_yaml_path", default="_default_template_yaml")
parser.add_argument("--save_prefix_path", default="arabicmmlu")
return parser.parse_args()
if __name__ == "__main__":
args = parse_args()
# get filename of base_yaml so we can `"include": ` it in our "other" YAMLs.
base_yaml_name = os.path.split(args.base_yaml_path)[-1]
with open(args.base_yaml_path, encoding="utf-8") as f:
base_yaml = yaml.full_load(f)
ALL_CATEGORIES = []
for subject, category in tqdm(SUBJECTS.items()):
if category not in ALL_CATEGORIES:
ALL_CATEGORIES.append(category)
# description = f"The following are multiple choice questions (with answers) about {' '.join(subject.split('_'))}.\n\n"
yaml_dict = {
"include": base_yaml_name,
"group": f"arabicmmlu_{category}",
"group_alias": category.replace("_", " "),
"task": f"arabicmmlu_{subject.lower().replace(' ', '_')}",
"task_alias": subject,
"dataset_name": subject,
# "description": description,
}
file_save_path = (
args.save_prefix_path
+ f"_{subject.lower().replace(' ', '_').replace('(', '').replace(')', '')}.yaml"
)
eval_logger.info(f"Saving yaml for subset {subject} to {file_save_path}")
with open(file_save_path, "w", encoding="utf-8") as yaml_file:
yaml.dump(
yaml_dict,
yaml_file,
allow_unicode=True,
default_style='"',
)
arabicmmlu_subcategories = [f"arabicmmlu_{category}" for category in ALL_CATEGORIES]
file_save_path = args.save_prefix_path + ".yaml"
eval_logger.info(f"Saving benchmark config to {file_save_path}")
with open(file_save_path, "w", encoding="utf-8") as yaml_file:
yaml.dump(
{
"group": "arabicmmlu",
"task": arabicmmlu_subcategories,
},
yaml_file,
indent=4,
default_flow_style=False,
)
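The `default_style='"'` argument in the script above is what produces the fully double-quoted, alphabetically sorted YAML seen in the generated files below. A small standalone demonstration (the "High Biology" values are copied from one of those files):

```python
import yaml

# Same dict shape the generator builds for one subject.
yaml_dict = {
    "include": "_default_template_yaml",
    "group": "arabicmmlu_stem",
    "group_alias": "stem",
    "task": "arabicmmlu_high_biology",
    "task_alias": "High Biology",
    "dataset_name": "High Biology",
}

# default_style='"' double-quotes every key and value; PyYAML sorts keys
# alphabetically by default, so "dataset_name" comes first.
text = yaml.dump(yaml_dict, allow_unicode=True, default_style='"')
print(text)
```

Note that the file *name* strips parentheses (`arabicmmlu_arabic_language_general.yaml`) while the `task` field keeps them, which is why tasks like `arabicmmlu_arabic_language_(general)` appear below.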
group: arabicmmlu
task:
- arabicmmlu_other
- arabicmmlu_social_science
- arabicmmlu_humanities
- arabicmmlu_stem
- arabicmmlu_language
"dataset_name": "Arabic Language (General)"
"group": "arabicmmlu_language"
"group_alias": "language"
"include": "_default_template_yaml"
"task": "arabicmmlu_arabic_language_(general)"
"task_alias": "Arabic Language (General)"
"dataset_name": "Arabic Language (Grammar)"
"group": "arabicmmlu_language"
"group_alias": "language"
"include": "_default_template_yaml"
"task": "arabicmmlu_arabic_language_(grammar)"
"task_alias": "Arabic Language (Grammar)"
"dataset_name": "Driving Test"
"group": "arabicmmlu_other"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "arabicmmlu_driving_test"
"task_alias": "Driving Test"
"dataset_name": "General Knowledge"
"group": "arabicmmlu_other"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "arabicmmlu_general_knowledge"
"task_alias": "General Knowledge"
"dataset_name": "High Arabic Language"
"group": "arabicmmlu_language"
"group_alias": "language"
"include": "_default_template_yaml"
"task": "arabicmmlu_high_arabic_language"
"task_alias": "High Arabic Language"
"dataset_name": "High Biology"
"group": "arabicmmlu_stem"
"group_alias": "stem"
"include": "_default_template_yaml"
"task": "arabicmmlu_high_biology"
"task_alias": "High Biology"
"dataset_name": "High Civics"
"group": "arabicmmlu_social_science"
"group_alias": "social science"
"include": "_default_template_yaml"
"task": "arabicmmlu_high_civics"
"task_alias": "High Civics"