Unverified commit 9ae96cdf authored by ZoneTwelve, committed by GitHub

TMMLU+ implementation (#1394)



* implementation of TMMLU+

* implemented: TMMLU+

**TMMLU+: Large-scale Traditional Chinese Massive Multitask Language Understanding**

- 4 categories
    - STEM
    - Social Science
    - Humanities
    - Other

The TMMLU+ dataset, encompassing over 67 subjects and 20,160 tasks, is six times larger and more balanced than its predecessor, TMMLU. It includes benchmark results from both closed-source models and 20 open-weight Chinese large language models ranging from 1.8B to 72B parameters. However, Traditional Chinese variants continue to underperform relative to major Simplified Chinese models.

```
Total number of tasks in the 'test' sets: 20160
Total number of tasks in the 'validation' sets: 2247
Total number of tasks in the 'train' sets: 335
```

* Remove print from __init__.py

I forgot to remove a debug print from the code.

* update: move TMMLU+ config generation program into default

* fix: we should use the training set as few-shot examples

* update: README for TMMLU+

* update: small changes to the TMMLU+ README file

* pre-commit run through

* Add README for TMMLU+ dataset

* run precommit

* trigger precommit again

* trigger precommit again

* isort is fussy

* isort is fussy

* format, again

* oops

* oops

---------
Co-authored-by: lintang <lintang@eleuther.ai>
Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
# TMMLU+

### Paper

Title: `An Improved Traditional Chinese Evaluation Suite for Foundation Model`

Abstract: `We present TMMLU+, a comprehensive dataset designed for the Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset with 66 subjects from elementary to professional level. Compared to its predecessor, TMMLU, TMMLU+ is six times larger and boasts a more balanced subject distribution. We included benchmark results in TMMLU+ from closed-source models and 24 open-weight Chinese large language models of parameters ranging from 1.8B to 72B. Our findings reveal that Traditional Chinese models still trail behind their Simplified Chinese counterparts. Additionally, current large language models have yet to outperform human performance in average scores. We publicly release our dataset and the corresponding benchmark source code.`

Homepage: [https://huggingface.co/datasets/ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus)
### Citation

```
@article{ikala2024improved,
  title={An Improved Traditional Chinese Evaluation Suite for Foundation Model},
  author={Tam, Zhi-Rui and Pai, Ya-Ting and Lee, Yen-Wei and Cheng, Sega and Shuai, Hong-Han},
  journal={arXiv preprint arXiv:2403.01858},
  year={2024}
}
```
### Groups and Tasks

#### Groups

* `tmmluplus`: `The dataset comprises 22,690 multiple-choice questions from 66 subjects ranging from primary to professional level.`

#### Tasks

The following tasks evaluate subjects in the TMMLU+ dataset using loglikelihood-based multiple-choice scoring; a usage sketch follows the list:

* `tmmluplus_{subject_english}`
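
As a quick sanity check, a single subject (or the full `tmmluplus` group) can be scored through the harness's Python API; the checkpoint and few-shot count below are placeholders, not something this PR prescribes:

```python
# Illustrative sketch: evaluate one TMMLU+ subject via lm-evaluation-harness.
# The model checkpoint and num_fewshot are arbitrary placeholder choices.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["tmmluplus_accounting"],
    num_fewshot=5,
)
print(results["results"]["tmmluplus_accounting"])
```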
### Checklist

For adding novel benchmarks/datasets to the library:

* [x] Is the task an existing benchmark in the literature?
  * [x] Have you referenced the original paper that introduced the task?
  * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:

* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
dataset_path: ZoneTwelve/tmmluplus # a copy of `ikala/tmmluplus`
test_split: test
fewshot_split: train
fewshot_config:
  sampler: first_n
output_type: multiple_choice
process_docs: !function utils.process_docs
doc_to_text: "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:"
doc_to_choice: ["A", "B", "C", "D"]
doc_to_target: answer
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
metadata:
  version: 0.1
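
The `!function utils.process_docs` line points the harness at a `process_docs` hook in the task's `utils.py`, which receives the raw HF split and must return a `datasets.Dataset` whose rows carry the fields the template references (here `question`, `choices`, and `answer`). A minimal sketch of such a hook, assuming the upstream rows store the four options in per-letter columns `A` through `D` (illustrative only, not this PR's actual `utils.py`):

```python
import datasets


def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
    """Sketch of a preprocessing hook: gather the per-letter option columns
    into the `choices` list that `doc_to_text` indexes into."""

    def _helper(doc):
        # Assumed upstream schema: question, A, B, C, D, answer ("A".."D").
        doc["choices"] = [doc["A"], doc["B"], doc["C"], doc["D"]]
        return doc

    return dataset.map(_helper)
```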
"""
Take in a YAML, and output all "other" splits with this YAML
"""
import argparse
import os
import pandas as pd
import yaml
from tqdm import tqdm
# Copy from https://github.com/iKala/ievals/blob/main/ievals/settings.py
# from TMMLU+ offical example
categories = {
    "STEM": [
        "physics",
        "chemistry",
        "biology",
        "computer science",
        "math",
        "engineering",
    ],
    "humanities": ["history", "philosophy", "law"],
    "social_sciences": [
        "politics",
        "culture",
        "economics",
        "geography",
        "psychology",
        "education",
    ],
    "other": ["other", "business", "health"],  # (business, health, misc.)
}
task_list = [
    "engineering_math",
    "dentistry",
    "traditional_chinese_medicine_clinical_medicine",
    "clinical_psychology",
    "technical",
    "culinary_skills",
    "mechanical",
    "logic_reasoning",
    "real_estate",
    "general_principles_of_law",
    "finance_banking",
    "anti_money_laundering",
    "ttqav2",
    "marketing_management",
    "business_management",
    "organic_chemistry",
    "advance_chemistry",
    "physics",
    "secondary_physics",
    "human_behavior",
    "national_protection",
    "jce_humanities",
    "politic_science",
    "agriculture",
    "official_document_management",
    "financial_analysis",
    "pharmacy",
    "educational_psychology",
    "statistics_and_machine_learning",
    "management_accounting",
    "introduction_to_law",
    "computer_science",
    "veterinary_pathology",
    "accounting",
    "fire_science",
    "optometry",
    "insurance_studies",
    "pharmacology",
    "taxation",
    "education_(profession_level)",
    "economics",
    "veterinary_pharmacology",
    "nautical_science",
    "occupational_therapy_for_psychological_disorders",
    "trust_practice",
    "geography_of_taiwan",
    "physical_education",
    "auditing",
    "administrative_law",
    "basic_medical_science",
    "macroeconomics",
    "trade",
    "chinese_language_and_literature",
    "tve_design",
    "junior_science_exam",
    "junior_math_exam",
    "junior_chinese_exam",
    "junior_social_studies",
    "tve_mathematics",
    "tve_chinese_language",
    "tve_natural_sciences",
    "junior_chemistry",
    "music",
    "education",
    "three_principles_of_people",
    "taiwanese_hokkien",
]
subject2name = {}
# subject2category = {}
SUBJECTS = {}


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--base_yaml_path", required=True)
    parser.add_argument("--save_prefix_path", default="tmmluplus")
    parser.add_argument("--cot_prompt_path", default=None)
    parser.add_argument("--task_prefix", default="")
    parser.add_argument("--group_prefix", default="")
    parser.add_argument("--subject_file", default="subject.tsv")
    return parser.parse_args()
if __name__ == "__main__":
    args = parse_args()
    from pathlib import Path

    # Initialization
    SUBJECT_FILE = Path(__file__).parent / Path(args.subject_file)

    df = pd.read_csv(SUBJECT_FILE, delimiter="\t")

    for _, row in df.iterrows():
        for _c in categories:
            if row["subject"] in SUBJECTS:
                raise ValueError("Duplicate tasks.")
            if row["category"] in categories[_c]:  # append new item into SUBJECTS
                SUBJECTS[row["subject"]] = _c
                subject2name[row["subject"]] = row["name"]
                break
    # End of SUBJECTS initialization

    # get filename of base_yaml so we can `"include": ` it in our "other" YAMLs.
    base_yaml_name = os.path.split(args.base_yaml_path)[-1]
    with open(args.base_yaml_path) as f:
        base_yaml = yaml.full_load(f)

    if args.cot_prompt_path is not None:
        import json

        with open(args.cot_prompt_path) as f:
            cot_file = json.load(f)

    ALL_CATEGORIES = []
    for subject, category in tqdm(SUBJECTS.items()):
        if category not in ALL_CATEGORIES:
            ALL_CATEGORIES.append(category)

        if args.cot_prompt_path is not None:
            description = cot_file[subject]
        else:
            name_of_subject = subject2name[subject].replace("_", " ")
            description = f"以下為{name_of_subject}的單選題,請提供正確答案的選項。\n\n"
            # description = f"The following are multiple choice questions (with answers) about {' '.join(subject.split('_'))}.\n\n"

        yaml_dict = {
            "include": base_yaml_name,
            "group": f"tmmluplus_{args.task_prefix}_{category}"
            if args.task_prefix != ""
            else f"tmmluplus_{category}",
            "group_alias": category.replace("_", " "),
            "task": f"tmmluplus_{args.task_prefix}_{subject}"
            if args.task_prefix != ""
            else f"tmmluplus_{subject}",
            "task_alias": subject.replace("_", " "),
            "dataset_name": subject,
            "description": description,
        }

        file_save_path = args.save_prefix_path + f"_{subject}.yaml"
        # eval_logger.info(f"Saving yaml for subset {subject} to {file_save_path}")
        with open(file_save_path, "w") as yaml_file:
            yaml.dump(
                yaml_dict,
                yaml_file,
                # width=float("inf"),
                allow_unicode=True,
                default_style='"',
            )

    if args.task_prefix != "":
        mmlu_subcategories = [
            f"tmmluplus_{args.task_prefix}_{category}" for category in ALL_CATEGORIES
        ]
    else:
        mmlu_subcategories = [f"tmmluplus_{category}" for category in ALL_CATEGORIES]

    if args.group_prefix != "":
        file_save_path = args.group_prefix + ".yaml"
    else:
        file_save_path = args.save_prefix_path + ".yaml"

    # eval_logger.info(f"Saving benchmark config to {file_save_path}")
    with open(file_save_path, "w") as yaml_file:
        yaml.dump(
            {
                "group": f"tmmluplus_{args.task_prefix}"
                if args.task_prefix != ""
                else "tmmluplus",
                "task": mmlu_subcategories,
            },
            yaml_file,
            indent=4,
            default_flow_style=False,
        )
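
For reference, regenerating the per-subject configs below amounts to running this script against the base template from inside the task directory; the script filename is an assumption here, since the diff shows only file contents:

```python
# Hypothetical regeneration step ("_generate_configs.py" is an assumed filename).
# This rewrites every tmmluplus_<subject>.yaml plus the top-level group YAML.
import subprocess

subprocess.run(
    ["python", "_generate_configs.py", "--base_yaml_path", "_default_template_yaml"],
    check=True,
)
```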
group: tmmluplus
task:
- tmmluplus_other
- tmmluplus_social_sciences
- tmmluplus_humanities
- tmmluplus_STEM
"dataset_name": "accounting"
"description": "以下為會計學的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_other"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "tmmluplus_accounting"
"task_alias": "accounting"
"dataset_name": "administrative_law"
"description": "以下為行政法的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_humanities"
"group_alias": "humanities"
"include": "_default_template_yaml"
"task": "tmmluplus_administrative_law"
"task_alias": "administrative law"
"dataset_name": "advance_chemistry"
"description": "以下為化學的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_STEM"
"group_alias": "STEM"
"include": "_default_template_yaml"
"task": "tmmluplus_advance_chemistry"
"task_alias": "advance chemistry"
"dataset_name": "agriculture"
"description": "以下為農業的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_other"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "tmmluplus_agriculture"
"task_alias": "agriculture"
"dataset_name": "anti_money_laundering"
"description": "以下為洗錢防制的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_humanities"
"group_alias": "humanities"
"include": "_default_template_yaml"
"task": "tmmluplus_anti_money_laundering"
"task_alias": "anti money laundering"
"dataset_name": "auditing"
"description": "以下為審計學的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_other"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "tmmluplus_auditing"
"task_alias": "auditing"
"dataset_name": "basic_medical_science"
"description": "以下為基礎醫學的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_STEM"
"group_alias": "STEM"
"include": "_default_template_yaml"
"task": "tmmluplus_basic_medical_science"
"task_alias": "basic medical science"
"dataset_name": "business_management"
"description": "以下為企業管理的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_other"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "tmmluplus_business_management"
"task_alias": "business management"
"dataset_name": "chinese_language_and_literature"
"description": "以下為國文的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_social_sciences"
"group_alias": "social sciences"
"include": "_default_template_yaml"
"task": "tmmluplus_chinese_language_and_literature"
"task_alias": "chinese language and literature"
"dataset_name": "clinical_psychology"
"description": "以下為臨床心理學的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_social_sciences"
"group_alias": "social sciences"
"include": "_default_template_yaml"
"task": "tmmluplus_clinical_psychology"
"task_alias": "clinical psychology"
"dataset_name": "computer_science"
"description": "以下為資訊工程的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_STEM"
"group_alias": "STEM"
"include": "_default_template_yaml"
"task": "tmmluplus_computer_science"
"task_alias": "computer science"
"dataset_name": "culinary_skills"
"description": "以下為餐旅的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_other"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "tmmluplus_culinary_skills"
"task_alias": "culinary skills"
"dataset_name": "dentistry"
"description": "以下為牙醫學的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_other"
"group_alias": "other"
"include": "_default_template_yaml"
"task": "tmmluplus_dentistry"
"task_alias": "dentistry"
"dataset_name": "economics"
"description": "以下為經濟學的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_social_sciences"
"group_alias": "social sciences"
"include": "_default_template_yaml"
"task": "tmmluplus_economics"
"task_alias": "economics"
"dataset_name": "education"
"description": "以下為教育常識的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_social_sciences"
"group_alias": "social sciences"
"include": "_default_template_yaml"
"task": "tmmluplus_education"
"task_alias": "education"
"dataset_name": "education_(profession_level)"
"description": "以下為教育專業的單選題,請提供正確答案的選項。\n\n"
"group": "tmmluplus_social_sciences"
"group_alias": "social sciences"
"include": "_default_template_yaml"
"task": "tmmluplus_education_(profession_level)"
"task_alias": "education (profession level)"