Unverified Commit 8bc4afff authored by Boda Sadallah's avatar Boda Sadallah Committed by GitHub

add arab_culture task (#3006)

* add arab_culture tasks

* add target_delimiter and remove debugging code
parent 5a481f43
@@ -16,7 +16,8 @@
| [arabic_leaderboard_complete](arabic_leaderboard_complete/README.md) | A full version of the tasks in the Open Arabic LLM Leaderboard, focusing on the evaluation of models that reflect the characteristics of Arabic language understanding and comprehension, culture, and heritage. Note that some of these tasks are machine-translated. | Arabic (Some MT) |
| [arabic_leaderboard_light](arabic_leaderboard_light/README.md) | A light version of the tasks in the Open Arabic LLM Leaderboard (i.e., 10% samples of the test set in the original benchmarks), focusing on the evaluation of models that reflect the characteristics of Arabic language understanding and comprehension, culture, and heritage. Note that some of these tasks are machine-translated. | Arabic (Some MT) |
| [arabicmmlu](arabicmmlu/README.md) | Localized Arabic version of MMLU with multiple-choice questions from 40 subjects. | Arabic |
| [ArabCulture](arab_culture/README.md) | Benchmark for evaluating models' commonsense cultural knowledge across 13 different Arab countries. | Arabic |
| [AraDICE](aradice/README.md) | A collection of multiple tasks carefully designed to evaluate dialectal and cultural capabilities in large language models (LLMs). | Arabic |
| [arc](arc/README.md) | Tasks involving complex reasoning over a diverse set of questions. | English |
| [arithmetic](arithmetic/README.md) | Tasks involving numerical computations and arithmetic reasoning. | English |
| [asdiv](asdiv/README.md) | Tasks involving arithmetic and mathematical reasoning challenges. | English |
...
# Arab Culture
### Paper
Title: Commonsense Reasoning in Arab Culture
Abstract: https://arxiv.org/abs/2502.12788
Despite progress in Arabic large language models, such as Jais and AceGPT, their evaluation on commonsense reasoning has largely relied on machine-translated datasets, which lack cultural depth and may introduce Anglocentric biases. Commonsense reasoning is shaped by geographical and cultural contexts, and existing English datasets fail to capture the diversity of the Arab world. To address this, we introduce ArabCulture, a commonsense reasoning dataset in Modern Standard Arabic (MSA), covering cultures of 13 countries across the Gulf, Levant, North Africa, and the Nile Valley. The dataset was built from scratch by engaging native speakers to write and validate culturally relevant questions for their respective countries. ArabCulture spans 12 daily life domains with 54 fine-grained subtopics, reflecting various aspects of social norms, traditions, and everyday experiences. Zero-shot evaluations show that open-weight language models with up to 32B parameters struggle to comprehend diverse Arab cultures, with performance varying across regions. These findings highlight the need for more culturally aware models and datasets tailored to the Arabic-speaking world.
Homepage: https://github.com/fajri91/ArabicCulture
### Citation
```
@misc{sadallah2025commonsensereasoningarabculture,
title={Commonsense Reasoning in Arab Culture},
author={Abdelrahman Sadallah and Junior Cedric Tonga and Khalid Almubarak and Saeed Almheiri and Farah Atif and Chatrine Qwaider and Karima Kadaoui and Sara Shatnawi and Yaser Alesh and Fajri Koto},
year={2025},
eprint={2502.12788},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.12788},
}
```
### There are two variants of this task: `arab_culture` and `arab_culture_completion`
- `arab_culture` is the standard MCQ evaluation type: it appends the choices to the question and then measures the likelihood of the different choice markers (A, B, C or "أ", "ب", "ج"). For more info, see the MMLU-style [template](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/mmlu/default/_default_template_yaml#L7-L8).
- `arab_culture_completion` performs the evaluation in a sentence-completion manner: it appends each answer to the question separately and chooses the answer with the highest likelihood. See [this](https://github.com/EleutherAI/lm-evaluation-harness/blob/1f9bc88fe61f6bfa36f74e91ce3d59ab5685e4f1/lm_eval/tasks/arc/arc_easy.yaml#L10-L12) for more information.
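The difference between the two variants boils down to which string the log-likelihood is computed over. The sketch below is purely illustrative (it is not the harness's internal code, and `pick_answer`/`logprob` are hypothetical names); it uses Latin markers for brevity, whereas the task also supports the Arabic markers noted above.

```python
def pick_answer(logprob, question, choices, mode="mcq"):
    """Return the index of the chosen answer.

    `logprob(context, continuation) -> float` is assumed to be supplied by
    the model being evaluated (hypothetical interface for illustration).
    """
    markers = "ABC"[: len(choices)]
    if mode == "mcq":
        # MCQ variant: show all choices, then score only the short choice
        # marker as the continuation.
        prompt = (
            question
            + "\n"
            + "\n".join(f"{m}. {c}" for m, c in zip(markers, choices))
            + "\nAnswer:"
        )
        scores = [logprob(prompt, m) for m in markers]
    else:
        # Completion variant: score each full answer text appended to the
        # bare question, one at a time.
        scores = [logprob(question + " ", c) for c in choices]
    # Pick the continuation with the highest likelihood.
    return max(range(len(choices)), key=scores.__getitem__)
```

Note that with the completion variant, longer answers accumulate more log-probability mass, which is why the harness also reports a length-normalized `acc_norm` alongside `acc`.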
### Groups and Tasks
#### Groups
* `arab_culture`: evaluates all ArabCulture tasks.
* `arab_culture_gulf`: evaluates the Gulf countries' ArabCulture tasks.
* `arab_culture_levant`: evaluates the Levant countries' ArabCulture tasks.
* `arab_culture_nile_valley`: evaluates the Nile Valley countries' ArabCulture tasks.
* `arab_culture_north_africa`: evaluates the North Africa countries' ArabCulture tasks.
### Evaluation modes
This benchmark supports different evaluation settings by adding extra context for the model. There are three settings:
* without any information
```
COUNTRY=False
REGION=False
```
* with only region information
```
COUNTRY=False
REGION=True
```
* with region and country information
```
COUNTRY=True
REGION=True
```
**Please set these flags as environment variables.**
* We also allow prompting in English, which we found to achieve higher results on most of the evaluated models (please refer to our paper).
* To change the language of the prompt, define the `ARABIC` environment variable.
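For example, the settings above can be selected by exporting the flags before invoking the harness (a sketch; the run command is shown only as a comment, and the model name would be filled in by the user):

```shell
# Select the "region and country information" setting described above.
export COUNTRY=True
export REGION=True
# Leave ARABIC undefined to keep the (higher-scoring) English prompt;
# define it to switch the prompt language, per the README note above.
unset ARABIC
# Then run the harness as usual, e.g.:
#   lm_eval --model hf --model_args pretrained=<model> --tasks arab_culture
```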
aggregate_metric_list:
    metric: acc
    weight_by_size: true
group: arab_culture
metadata:
    description: Arab Culture tasks
    version: 0
task:
- arab_culture_gulf
- arab_culture_levant
- arab_culture_north_africa
- arab_culture_nile_valley

aggregate_metric_list:
    metric: acc
    weight_by_size: true
group: arab_culture_gulf
group_alias: Gulf
metadata:
    description: Arab Culture tasks
    version: 0
task:
- arab_culture_gulf_tasks

aggregate_metric_list:
    metric: acc
    weight_by_size: true
group: arab_culture_levant
group_alias: Levant
metadata:
    description: Arab Culture tasks
    version: 0
task:
- arab_culture_levant_tasks

aggregate_metric_list:
    metric: acc
    weight_by_size: true
group: arab_culture_nile_valley
group_alias: Nile Valley
metadata:
    description: Arab Culture tasks
    version: 0
task:
- arab_culture_nile_valley_tasks

aggregate_metric_list:
    metric: acc
    weight_by_size: true
group: arab_culture_north_africa
group_alias: North Africa
metadata:
    description: Arab Culture tasks
    version: 0
task:
- arab_culture_north_africa_tasks
dataset_path: MBZUAI/ArabCulture
test_split: test
fewshot_split: test
fewshot_config:
  sampler: first_n
output_type: multiple_choice
doc_to_text: !function utils_mcq.doc_to_text
doc_to_choice: !function utils_mcq.doc_to_choice
doc_to_target: !function utils_mcq.doc_to_target
target_delimiter: ""
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
metadata:
  version: 0.0
"""
Take in a YAML, and output all "other" splits with this YAML
"""
import argparse
import logging
import os
import yaml
from tqdm import tqdm
eval_logger = logging.getLogger("lm-eval")
countries = {
"KSA": "Gulf",
"UAE": "Gulf",
"Yemen": "Gulf",
"Lebanon": "Levant",
"Syria": "Levant",
"Palestine": "Levant",
"Jordan": "Levant",
"Tunisia": "North Africa",
"Algeria": "North Africa",
"Morocco": "North Africa",
"Libya": "North Africa",
"Egypt": "Nile Valley",
"Sudan": "Nile Valley",
}
VERSION = 0
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument(
"--base_yaml_path", default="_default_arab_culture_mcq_template_yaml"
)
parser.add_argument("--save_prefix_path", default="arab_culture")
return parser.parse_args()
if __name__ == "__main__":
args = parse_args()
# get filename of base_yaml so we can `"include": ` it in our "other" YAMLs.
base_yaml_name = os.path.split(args.base_yaml_path)[-1]
# with open(args.base_yaml_path, encoding="utf-8") as f:
# base_yaml = yaml.full_load(f)
ALL_REGIONS = []
for country, region in tqdm(countries.items()):
if region not in ALL_REGIONS:
ALL_REGIONS.append(region)
# description = f"The following are multiple choice questions (with answers) about {' '.join(subject.split('_'))}.\n\n"
yaml_dict = {
"include": base_yaml_name,
"tag": f"arab_culture_{region.lower().replace(' ', '_')}_tasks",
"task": f"arab_culture_{country.lower().replace(' ', '_')}",
"task_alias": country,
"dataset_name": country,
# "description": description,
}
file_save_path = (
args.save_prefix_path
+ f"_{country.lower().replace(' ', '_').replace('(', '').replace(')', '')}.yaml"
)
eval_logger.info(f"Saving yaml for subset {country} to {file_save_path}")
with open(file_save_path, "w", encoding="utf-8") as yaml_file:
yaml.dump(
yaml_dict,
yaml_file,
allow_unicode=True,
default_style='"',
)
arab_culture_mcq_regions = [
f"arab_culture_{region.lower().replace(' ', '_')}" for region in ALL_REGIONS
]
file_save_path = args.save_prefix_path + ".yaml"
eval_logger.info(f"Saving benchmark config to {file_save_path}")
for region in ALL_REGIONS:
file_save_path = (
args.save_prefix_path + f"_{region.lower().replace(' ', '_')}.yaml"
)
eval_logger.info(f"Saving yaml for subset {region} to {file_save_path}")
with open("_" + file_save_path, "w", encoding="utf-8") as yaml_file:
yaml.dump(
{
"group": f"arab_culture_{region.lower().replace(' ', '_')}",
"group_alias": region,
"task": [f"arab_culture_{region.lower().replace(' ', '_')}_tasks"],
"aggregate_metric_list": {"metric": "acc", "weight_by_size": True},
"metadata": {
"description": "arab Culture tasks",
"version": VERSION,
},
},
yaml_file,
indent=4,
default_flow_style=False,
)
file_save_path = args.save_prefix_path + ".yaml"
with open("_" + file_save_path, "w", encoding="utf-8") as yaml_file:
yaml.dump(
{
"group": "arab_culture",
"task": arab_culture_mcq_regions,
"aggregate_metric_list": {"metric": "acc", "weight_by_size": True},
"metadata": {"description": "Arab Culture tasks", "version": VERSION},
},
yaml_file,
indent=4,
default_flow_style=False,
)
"dataset_name": "Algeria"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_north_africa_tasks"
"task": "arab_culture_algeria"
"task_alias": "Algeria"
"dataset_name": "Egypt"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_nile_valley_tasks"
"task": "arab_culture_egypt"
"task_alias": "Egypt"
"dataset_name": "Jordan"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_levant_tasks"
"task": "arab_culture_jordan"
"task_alias": "Jordan"
"dataset_name": "KSA"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_gulf_tasks"
"task": "arab_culture_ksa"
"task_alias": "KSA"
"dataset_name": "Lebanon"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_levant_tasks"
"task": "arab_culture_lebanon"
"task_alias": "Lebanon"
"dataset_name": "Libya"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_north_africa_tasks"
"task": "arab_culture_libya"
"task_alias": "Libya"
"dataset_name": "Morocco"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_north_africa_tasks"
"task": "arab_culture_morocco"
"task_alias": "Morocco"
"dataset_name": "Palestine"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_levant_tasks"
"task": "arab_culture_palestine"
"task_alias": "Palestine"
"dataset_name": "Sudan"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_nile_valley_tasks"
"task": "arab_culture_sudan"
"task_alias": "Sudan"
"dataset_name": "Syria"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_levant_tasks"
"task": "arab_culture_syria"
"task_alias": "Syria"
"dataset_name": "Tunisia"
"include": "_default_arab_culture_mcq_template_yaml"
"tag": "arab_culture_north_africa_tasks"
"task": "arab_culture_tunisia"
"task_alias": "Tunisia"