Commit c1e63555 authored by Yu Shi Jie's avatar Yu Shi Jie

Merge branch 'upstream' into 'mmlu-pro'

add tokenizer logs info (#1731)

See merge request shijie.yu/lm-evaluation-harness!4
parents e361687c 42dc2448
"dataset_name": "marxist_theory"
"description": "以下是关于马克思主义理论的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_marxist_theory"
"dataset_name": "modern_chinese"
"description": "以下是关于现代汉语的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_modern_chinese"
"dataset_name": "nutrition"
"description": "以下是关于营养学的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_nutrition"
"dataset_name": "philosophy"
"description": "以下是关于哲学的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_philosophy"
"dataset_name": "professional_accounting"
"description": "以下是关于专业会计的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_professional_accounting"
"dataset_name": "professional_law"
"description": "以下是关于专业法学的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_professional_law"
"dataset_name": "professional_medicine"
"description": "以下是关于专业医学的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_professional_medicine"
"dataset_name": "professional_psychology"
"description": "以下是关于专业心理学的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_professional_psychology"
"dataset_name": "public_relations"
"description": "以下是关于公共关系的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_public_relations"
"dataset_name": "security_study"
"description": "以下是关于安全研究的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_security_study"
"dataset_name": "sociology"
"description": "以下是关于社会学的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_sociology"
"dataset_name": "sports_science"
"description": "以下是关于体育学的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_sports_science"
"dataset_name": "traditional_chinese_medicine"
"description": "以下是关于中医中药的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_traditional_chinese_medicine"
"dataset_name": "virology"
"description": "以下是关于病毒学的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_virology"
"dataset_name": "world_history"
"description": "以下是关于世界历史的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_world_history"
"dataset_name": "world_religions"
"description": "以下是关于世界宗教的单项选择题,请直接给出正确答案的选项。\n\n"
"include": "_default_template_yaml"
"task": "cmmlu_world_religions"
# CommonsenseQA
### Paper
Title: `COMMONSENSEQA: A Question Answering Challenge Targeting Commonsense Knowledge`
Abstract: https://arxiv.org/pdf/1811.00937.pdf
CommonsenseQA is a multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers.
It contains 12,102 questions with one correct answer and four distractor answers.
Homepage: https://www.tau-nlp.org/commonsenseqa
### Citation
```
@inproceedings{talmor-etal-2019-commonsenseqa,
    title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge",
    author = "Talmor, Alon  and
      Herzig, Jonathan  and
      Lourie, Nicholas  and
      Berant, Jonathan",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1421",
    doi = "10.18653/v1/N19-1421",
    pages = "4149--4158",
    archivePrefix = "arXiv",
    eprint = "1811.00937",
    primaryClass = "cs",
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `commonsense_qa`: Represents the "random" split from the paper. Uses an MMLU-style prompt, as (presumably) used by Llama evaluations.
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
task: commonsense_qa
dataset_path: tau/commonsense_qa
training_split: train
validation_split: validation
output_type: multiple_choice
doc_to_text: "Question: {{ question.strip() }}\nA. {{choices['text'][0]}}\nB. {{choices['text'][1]}}\nC. {{choices['text'][2]}}\nD. {{choices['text'][3]}}\nE. {{choices['text'][4]}}\nAnswer:"
doc_to_target: answerKey
doc_to_choice: ['A', 'B', 'C', 'D', 'E']
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
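
The `doc_to_text` template above renders each dataset record into the MMLU-style prompt; the model then scores each of the five choice letters and the highest-likelihood one is compared against `answerKey` for accuracy. A rough plain-Python equivalent of that rendering (the sample record is invented for illustration; real rows come from `tau/commonsense_qa`):

```python
def render_prompt(doc: dict) -> str:
    """Mimic the doc_to_text Jinja template in plain Python:
    stripped question, lettered choices A-E, trailing 'Answer:'."""
    lines = [f"Question: {doc['question'].strip()}"]
    for label, text in zip("ABCDE", doc["choices"]["text"]):
        lines.append(f"{label}. {text}")
    lines.append("Answer:")
    return "\n".join(lines)

# Illustrative record shaped like a tau/commonsense_qa row.
doc = {
    "question": "Where would you find a jellyfish? ",
    "choices": {"text": ["store", "office", "ocean", "garage", "attic"]},
    "answerKey": "C",
}

print(render_prompt(doc))
```

With the config registered, the task can typically be invoked through the harness CLI, e.g. `lm_eval --model hf --model_args pretrained=<model> --tasks commonsense_qa` (exact flags depend on the installed lm-evaluation-harness version).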
group: copal_id
tag: copal_id
task: copal_id_standard
task_alias: standard
dataset_path: haryoaw/COPAL
......

group:
tag:
  - crows_pairs
  - social_bias
  - loglikelihood
task: crows_pairs_english
dataset_path: BigScienceBiasEval/crows_pairs_multilingual
dataset_name: english
......