Unverified Commit ca3d86d6 authored by Yen-Ting Lin, committed by GitHub

Add TMLU Benchmark Dataset (#2093)



* add taiwan truthful qa

* add tmlu

* Add .gitignore entries for evals/ and harness_eval_main_log.txt, and add harness_eval.slurm script

* add pega eval and legal eval

* add ccp eval

* Update .gitignore and harness_eval.slurm

* Add trust_remote_code and wandb_args to harness_eval.slurm, and add run_all.sh script

* Add Pega MMLU task and configuration files

* Add new models and update parameters in run_all.sh

* Add UMTCEval tasks and configurations

* Update dataset paths and output path

* Update .gitignore and harness_eval.slurm, and modify _generate_configs.py

* Update SLURM script and add new models

* clean for pr

* Update lm_eval/tasks/tmlu/default/tmlu.yaml
Co-authored-by: Lintang Sutawika <lintang@sutawika.com>

* adjust tag name

* removed group alias from tasks

* format

---------
Co-authored-by: Lintang Sutawika <lintang@sutawika.com>
Co-authored-by: lintangsutawika <lintang@eleuther.ai>
Co-authored-by: Yen-Ting Adam, Lin <r08944064@csie.ntu.edu.tw>
parent 86edeffa
# TMLU
### Paper
Title: `Measuring Taiwanese Mandarin Language Understanding`
Abstract: `The evaluation of large language models (LLMs) has drawn substantial attention in the field recently. This work focuses on evaluating LLMs in a Chinese context, specifically, for Traditional Chinese, which has been largely underrepresented in existing benchmarks. We present TMLU, a holistic evaluation suite tailored for assessing the advanced knowledge and reasoning capabilities of LLMs in the context of Taiwanese Mandarin. TMLU consists of an array of 37 subjects across social science, STEM, humanities, Taiwan-specific content, and others, ranging from middle-school to professional level. In addition, we curate chain-of-thought-like few-shot explanations for each subject to facilitate the evaluation of complex reasoning skills. To establish a comprehensive baseline, we conduct extensive experiments and analysis on 24 advanced LLMs. The results suggest that Chinese open-weight models perform worse than multilingual proprietary ones, and that open-weight models tailored for Taiwanese Mandarin lag behind their Simplified-Chinese counterparts. The findings indicate significant headroom for improvement, and emphasize the goal of TMLU to foster the development of localized Taiwanese-Mandarin LLMs. We release the benchmark and evaluation scripts for the community to promote future research.`
Homepage: [TMLU Huggingface Dataset](https://huggingface.co/datasets/miulab/tmlu)
### Citation
```
@article{DBLP:journals/corr/abs-2403-20180,
author = {Po{-}Heng Chen and
Sijia Cheng and
Wei{-}Lin Chen and
Yen{-}Ting Lin and
Yun{-}Nung Chen},
title = {Measuring Taiwanese Mandarin Language Understanding},
journal = {CoRR},
volume = {abs/2403.20180},
year = {2024},
url = {https://doi.org/10.48550/arXiv.2403.20180},
doi = {10.48550/ARXIV.2403.20180},
eprinttype = {arXiv},
eprint = {2403.20180},
timestamp = {Wed, 10 Apr 2024 17:37:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2403-20180.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Groups and Tasks
#### Groups
* `tmlu`: `The dataset comprises 2,981 multiple-choice questions from 37 subjects.`
#### Tasks
The following tasks evaluate subjects in the TMLU dataset using loglikelihood-based multiple-choice scoring:
* `tmlu_{subject_english}`
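
The full suite (or any single subject task) can be invoked through the harness's Python API. A minimal sketch, assuming the harness is installed; the checkpoint name below is only a placeholder:

```python
# Sketch: evaluate a Hugging Face model on all TMLU tasks (5-shot).
# The pretrained checkpoint here is a placeholder, not a recommendation.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["tmlu"],
    num_fewshot=5,
)
print(results["results"])
```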
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
dataset_path: miulab/tmlu
test_split: test
fewshot_split: dev
fewshot_config:
  sampler: first_n
output_type: multiple_choice
process_docs: !function utils.process_docs
# doc_to_text: "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:"
# doc_to_choice: ["A", "B", "C", "D"]
doc_to_target: answer
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
metadata:
  version: 0.1
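
The template above delegates preprocessing to `utils.process_docs`. The actual helper ships alongside the task; the sketch below only illustrates the expected shape, and the raw column names (`A`–`F`, a letter-valued `answer`) are assumptions about the dataset schema:

```python
# Hypothetical sketch of utils.process_docs (field names are assumptions):
# gather the per-letter option columns into a `choices` list and convert the
# answer letter into the index that multiple_choice scoring expects.
import datasets


def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
    letters = ["A", "B", "C", "D", "E", "F"]

    def _process(doc):
        choices = [doc[letter] for letter in letters if doc.get(letter)]
        return {
            "question": doc["question"],
            "choices": choices,
            "answer": letters.index(doc["answer"]),
        }

    return dataset.map(_process)
```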
"""
Take in a YAML, and output all "other" splits with this YAML
"""
import argparse
import os
import pandas as pd
import yaml
from tqdm import tqdm
categories = {
"STEM": [
"biology",
"chemistry",
"mathematics" "physics",
"earth science",
],
"humanities": ["Chinese", "history", "Tour", "law"],
"social_sciences": [
"civics",
"geography",
"accounting",
"psychologist",
],
"Taiwan Specific": [
"Taiwan Specific",
],
"other": ["Medicine", "Nutritionist"], # (business, health, misc.)
}
task_list = [
"AST civics",
"AST geography",
"CAP civics",
"CAP geography",
"GSAT civics",
"GSAT geography",
"MOEX Accountant",
"MOEX Clinical psychologist",
"AST biology",
"AST chemistry",
"AST mathematics",
"AST physics",
"CAP biology",
"CAP chemistry",
"CAP earth science",
"CAP mathematics",
"CAP physics",
"GSAT biology",
"GSAT chemistry",
"GSAT earth science",
"GSAT mathematics",
"GSAT physics",
"AST Chinese",
"AST history",
"CAP Chinese",
"CAP history",
"GSAT Chinese",
"GSAT history",
"MOEX Tour guide",
"MOEX Tour leader",
"MOEX Lawyer qualification",
"HB Driving Rule",
"MOEX Teacher qualification",
"MOEX Taiwan tourist resources",
"MOEX Basic Traditional Chinese Medicine",
"MOEX Clinical Traditional Chinese Medicine",
"MOEX Nutritionist",
]
subject2name = {}
subject2num_choice = {}
# subject2category = {}
SUBJECTS = {}
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--base_yaml_path", default="_default_template_yaml")
parser.add_argument("--save_prefix_path", default="tmlu")
parser.add_argument("--cot_prompt_path", default=None)
parser.add_argument("--task_prefix", default="")
parser.add_argument("--group_prefix", default="")
parser.add_argument("--subject_file", default="../subject.tsv")
return parser.parse_args()
if __name__ == "__main__":
args = parse_args()
from pathlib import Path
# Initialization
SUBJECT_FILE = Path(__file__).parent / Path(args.subject_file)
df = pd.read_csv(SUBJECT_FILE, delimiter="\t")
for _, row in df.iterrows():
for _c in categories:
if row["subject"] in SUBJECTS:
raise ValueError(f"Duplicate tasks. {row['subject']} already exists.")
if row["category"] in categories[_c]: # append new item into SUBJECTS
SUBJECTS[row["subject"]] = _c
subject2name[row["subject"]] = row["name"]
subject2num_choice[row["subject"]] = row["# Choices"]
break
# End of SUBJECTS initialization
# get filename of base_yaml so we can `"include": ` it in our "other" YAMLs.
base_yaml_name = os.path.split(args.base_yaml_path)[-1]
with open(args.base_yaml_path) as f:
base_yaml = yaml.full_load(f)
if args.cot_prompt_path is not None:
import json
with open(args.cot_prompt_path) as f:
cot_file = json.load(f)
ALL_CATEGORIES = []
for subject, category in tqdm(SUBJECTS.items()):
if category not in ALL_CATEGORIES:
ALL_CATEGORIES.append(category)
if args.cot_prompt_path is not None:
description = cot_file[subject]
else:
name_of_subject = subject2name[subject].replace("_", " ")
description = f"以下為{name_of_subject}的單選題,請提供正確答案的選項。\n\n"
# description = f"The following are multiple choice questions (with answers) about {' '.join(subject.split('_'))}.\n\n"
num_choies = subject2num_choice[subject]
# basic_doc_to_text = "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}"
basic_doc_to_choice = ["A", "B", "C", "D"]
if num_choies == 5:
# basic_doc_to_text += "\nE. {{choices[4]}}"
basic_doc_to_choice.append("E")
if num_choies == 6:
# basic_doc_to_text += "\nE. {{choices[4]}}\nF. {{choices[5]}}"
basic_doc_to_choice += ["E", "F"]
# basic_doc_to_text += "\nAnswer:"
# basic_doc_to_text = "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}{% if choices[4] %}\nE. {{choices[4]}}{% endif %}{% if choices[5] %}\nF. {{choices[5]}}{% endif %}\nAnswer:"
basic_doc_to_text = "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{% endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{% endif %}\nAnswer:"
yaml_dict = {
"include": base_yaml_name,
"group": f"tmlu_{args.task_prefix}_{category}"
if args.task_prefix != ""
else f"tmlu_{category}",
"group_alias": category.replace("_", " "),
"task": f"tmlu_{args.task_prefix}_{subject}"
if args.task_prefix != ""
else f"tmlu_{subject}",
"task_alias": subject.replace("_", " "),
"dataset_name": subject,
"description": description,
# doc_to_text: "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:"
"doc_to_text": basic_doc_to_text,
# doc_to_choice: ["A", "B", "C", "D"]
"doc_to_choice": basic_doc_to_choice,
}
file_save_path = args.save_prefix_path + f"_{subject}.yaml"
# eval_logger.info(f"Saving yaml for subset {subject} to {file_save_path}")
with open(file_save_path, "w") as yaml_file:
yaml.dump(
yaml_dict,
yaml_file,
# width=float("inf"),
allow_unicode=True,
default_style='"',
)
if args.task_prefix != "":
mmlu_subcategories = [
f"tmlu_{args.task_prefix}_{category}" for category in ALL_CATEGORIES
]
else:
mmlu_subcategories = [f"tmlu_{category}" for category in ALL_CATEGORIES]
if args.group_prefix != "":
file_save_path = args.group_prefix + ".yaml"
else:
file_save_path = args.save_prefix_path + ".yaml"
# eval_logger.info(f"Saving benchmark config to {file_save_path}")
with open(file_save_path, "w") as yaml_file:
yaml.dump(
{
"group": f"tmlu_{args.task_prefix}"
if args.task_prefix != ""
else "tmlu",
"task": mmlu_subcategories,
},
yaml_file,
indent=4,
default_flow_style=False,
)
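
Since the shared `doc_to_text` Jinja template is easy to get subtly wrong, it can help to preview the rendered prompt outside the harness. A quick sanity check, assuming `jinja2` is installed and using a made-up document:

```python
# Render the shared doc_to_text Jinja template against a made-up doc to
# preview the final prompt (illustrative only; not part of the task code).
from jinja2 import Template

DOC_TO_TEXT = (
    "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\n"
    "C. {{choices[2]}}\nD. {{choices[3]}}"
    "{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{% endif %}"
    "{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{% endif %}"
    "\nAnswer:"
)

sample_doc = {
    "question": "下列何者為台灣最高峰?",
    "choices": ["玉山", "雪山", "合歡山", "阿里山"],
}
print(Template(DOC_TO_TEXT).render(**sample_doc))
```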
group: tmlu
group_alias: TMLU
task:
  - group: tmlu_social_sciences
    group_alias: Social Sciences
    task:
      - tmlu_social_sciences_tasks
    aggregate_metric_list:
      - metric: acc
  - group: tmlu_stem
    group_alias: STEM
    task:
      - tmlu_stem_tasks
    aggregate_metric_list:
      - metric: acc
  - group: tmlu_humanities
    group_alias: Humanities
    task:
      - tmlu_humanities_tasks
    aggregate_metric_list:
      - metric: acc
  - group: tmlu_taiwan_specific
    group_alias: Taiwan Specific
    task:
      - tmlu_taiwan_specific_tasks
    aggregate_metric_list:
      - metric: acc
  - group: tmlu_other
    group_alias: Other
    task:
      - tmlu_other_tasks
    aggregate_metric_list:
      - metric: acc
aggregate_metric_list:
  - metric: acc
metadata:
  version: 1
"dataset_name": "AST_biology"
"description": "以下為分科測驗生物的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_stem_tasks"
"include": "_default_template_yaml"
"task": "tmlu_AST_biology"
"task_alias": "AST biology"
"dataset_name": "AST_chemistry"
"description": "以下為分科測驗化學的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
- "E"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_stem_tasks"
"include": "_default_template_yaml"
"task": "tmlu_AST_chemistry"
"task_alias": "AST chemistry"
"dataset_name": "AST_chinese"
"description": "以下為分科測驗國文的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_humanities_tasks"
"include": "_default_template_yaml"
"task": "tmlu_AST_chinese"
"task_alias": "AST chinese"
"dataset_name": "AST_civics"
"description": "以下為分科測驗公民的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_social_sciences_tasks"
"include": "_default_template_yaml"
"task": "tmlu_AST_civics"
"task_alias": "AST civics"
"dataset_name": "AST_geography"
"description": "以下為分科測驗地理的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_social_sciences_tasks"
"include": "_default_template_yaml"
"task": "tmlu_AST_geography"
"task_alias": "AST geography"
"dataset_name": "AST_history"
"description": "以下為分科測驗歷史的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_humanities_tasks"
"include": "_default_template_yaml"
"task": "tmlu_AST_history"
"task_alias": "AST history"
"dataset_name": "CAP_biology"
"description": "以下為會考生物的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_stem_tasks"
"include": "_default_template_yaml"
"task": "tmlu_CAP_biology"
"task_alias": "CAP biology"
"dataset_name": "CAP_chemistry"
"description": "以下為會考化學的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_stem_tasks"
"include": "_default_template_yaml"
"task": "tmlu_CAP_chemistry"
"task_alias": "CAP chemistry"
"dataset_name": "CAP_chinese"
"description": "以下為會考國文的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_humanities_tasks"
"include": "_default_template_yaml"
"task": "tmlu_CAP_chinese"
"task_alias": "CAP chinese"
"dataset_name": "CAP_civics"
"description": "以下為會考公民的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_social_sciences_tasks"
"include": "_default_template_yaml"
"task": "tmlu_CAP_civics"
"task_alias": "CAP civics"
"dataset_name": "CAP_earth_science"
"description": "以下為會考地球科學的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_stem_tasks"
"include": "_default_template_yaml"
"task": "tmlu_CAP_earth_science"
"task_alias": "CAP earth science"
"dataset_name": "CAP_geography"
"description": "以下為會考地理的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_social_sciences_tasks"
"include": "_default_template_yaml"
"task": "tmlu_CAP_geography"
"task_alias": "CAP geography"
"dataset_name": "CAP_history"
"description": "以下為會考歷史的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_humanities_tasks"
"include": "_default_template_yaml"
"task": "tmlu_CAP_history"
"task_alias": "CAP history"
"dataset_name": "GSAT_biology"
"description": "以下為學測生物的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
- "E"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_stem_tasks"
"include": "_default_template_yaml"
"task": "tmlu_GSAT_biology"
"task_alias": "GSAT biology"
"dataset_name": "GSAT_chemistry"
"description": "以下為學測化學的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
- "E"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_stem_tasks"
"include": "_default_template_yaml"
"task": "tmlu_GSAT_chemistry"
"task_alias": "GSAT chemistry"
"dataset_name": "GSAT_chinese"
"description": "以下為學測國文的單選題,請提供正確答案的選項。\n\n"
"doc_to_choice":
- "A"
- "B"
- "C"
- "D"
"doc_to_text": "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\n\
D. {{choices[3]}}{% if choices is defined and choices|length > 4 %}\nE. {{choices[4]}}{%\
\ endif %}{% if choices is defined and choices|length > 5 %}\nF. {{choices[5]}}{%\
\ endif %}\nAnswer:"
"tag": "tmlu_humanities_tasks"
"include": "_default_template_yaml"
"task": "tmlu_GSAT_chinese"
"task_alias": "GSAT chinese"