Commit 741a6a69 authored by lintangsutawika

Merge branch 'main' of https://github.com/EleutherAI/lm-evaluation-harness into mela

parents 494a4515 b536f067
# CommonsenseQA
### Paper
Title: `COMMONSENSEQA: A Question Answering Challenge Targeting Commonsense Knowledge`
Abstract: https://arxiv.org/pdf/1811.00937.pdf
CommonsenseQA is a multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers.
It contains 12,102 questions with one correct answer and four distractor answers.
Homepage: https://www.tau-nlp.org/commonsenseqa
### Citation
```
@inproceedings{talmor-etal-2019-commonsenseqa,
title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge",
author = "Talmor, Alon and
Herzig, Jonathan and
Lourie, Nicholas and
Berant, Jonathan",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1421",
doi = "10.18653/v1/N19-1421",
pages = "4149--4158",
archivePrefix = "arXiv",
eprint = "1811.00937",
primaryClass = "cs",
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `commonsense_qa`: Represents the "random" split from the paper. Uses an MMLU-style prompt, as (presumably) used by Llama evaluations.
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
task: commonsense_qa
dataset_path: tau/commonsense_qa
training_split: train
validation_split: validation
output_type: multiple_choice
doc_to_text: "Question: {{ question.strip() }}\nA. {{choices['text'][0]}}\nB. {{choices['text'][1]}}\nC. {{choices['text'][2]}}\nD. {{choices['text'][3]}}\nE. {{choices['text'][4]}}\nAnswer:"
doc_to_target: answerKey
doc_to_choice: ['A', 'B', 'C', 'D', 'E']
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
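For reference, a minimal sketch of how the `doc_to_text` Jinja template above renders one document; the question and choices below are invented for illustration and are not drawn from the dataset.
```python
# Minimal sketch: render the doc_to_text template for one (made-up) document.
from jinja2 import Template

doc = {
    "question": "Where would you put a plate after washing it?",
    "choices": {"text": ["cupboard", "table", "sink", "floor", "oven"]},
    "answerKey": "A",  # doc_to_target: the letter of the correct choice
}

doc_to_text = (
    "Question: {{ question.strip() }}\n"
    "A. {{choices['text'][0]}}\n"
    "B. {{choices['text'][1]}}\n"
    "C. {{choices['text'][2]}}\n"
    "D. {{choices['text'][3]}}\n"
    "E. {{choices['text'][4]}}\n"
    "Answer:"
)

# Prints the MMLU-style prompt: the question, lettered options A-E, then "Answer:"
print(Template(doc_to_text).render(**doc))
```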
group: copal_id
tag: copal_id
task: copal_id_standard
task_alias: standard
dataset_path: haryoaw/COPAL
......
group:
tag:
- crows_pairs
- social_bias
- loglikelihood
task: crows_pairs_english
dataset_path: BigScienceBiasEval/crows_pairs_multilingual
dataset_name: english
......
group: csatqa
task:
- csatqa_gr
- csatqa_li
- csatqa_rch
- csatqa_rcs
- csatqa_rcss
- csatqa_wr
aggregate_metric_list:
  - metric: acc
    aggregation: mean
    weight_by_size: true
  - metric: acc_norm
    aggregation: mean
    weight_by_size: true
metadata:
  version: 0.0
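The `aggregate_metric_list` above controls how per-subtask scores are rolled up into the `csatqa` group score; with `weight_by_size: true`, each subtask contributes in proportion to its number of documents. A small sketch of that aggregation, using made-up numbers rather than real results:
```python
# Size-weighted mean over subtasks (weight_by_size: true).
# Accuracies and document counts below are invented for illustration only.
subtask_results = {
    "csatqa_gr": {"acc": 0.40, "size": 100},
    "csatqa_li": {"acc": 0.55, "size": 120},
    "csatqa_wr": {"acc": 0.60, "size": 80},
}

total_docs = sum(r["size"] for r in subtask_results.values())
group_acc = sum(r["acc"] * r["size"] for r in subtask_results.values()) / total_docs
print(round(group_acc, 4))  # each subtask weighted by its document count
```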
group: csatqa
dataset_path: EleutherAI/csatqa
test_split: test
output_type: multiple_choice
......
@@ -2,7 +2,6 @@ import re
 import string
 import numpy as np
-from scipy.optimize import linear_sum_assignment
 _ARTICLES = re.compile(r"\b(a|an|the)\b", re.UNICODE)
@@ -117,6 +116,8 @@ def _align_bags(predicted, gold):
     Takes gold and predicted answer sets and first finds the optimal 1-1 alignment
     between them and gets maximum metric values over all the answers.
     """
+    from scipy.optimize import linear_sum_assignment
     scores = np.zeros([len(gold), len(predicted)])
     for gold_index, gold_item in enumerate(gold):
         for pred_index, pred_item in enumerate(predicted):
......
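The hunk above moves the `linear_sum_assignment` import inside `_align_bags`, where it is actually used to find the optimal one-to-one alignment between gold and predicted answer bags. A standalone sketch of that alignment step (just the idea, not the harness's full DROP metric; the scores are invented for illustration):
```python
# Given pairwise scores between gold and predicted answer bags, pick the 1-1
# matching that maximizes the total score (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

scores = np.array([
    [0.8, 0.1],  # gold item 0 scored against predicted items 0 and 1
    [0.2, 0.9],  # gold item 1 scored against predicted items 0 and 1
])

row_ind, col_ind = linear_sum_assignment(scores, maximize=True)
print([(int(r), int(c)) for r, c in zip(row_ind, col_ind)])  # [(0, 0), (1, 1)]
print(scores[row_ind, col_ind].sum())  # best total alignment score (~1.7)
```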
@@ -12,7 +12,7 @@ class FDA(ConfigurableTask):
     DATASET_PATH = "hazyresearch/based-fda"
     DATASET_NAME = "default"
-    def __init__(self):
+    def __init__(self, **kwargs):
         super().__init__(config={"metadata": {"version": self.VERSION}})
     def has_training_docs(self):
......
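The signature change above lets Python-defined tasks such as `FDA` absorb extra keyword arguments when they are instantiated generically. A minimal sketch of why that matters, assuming a caller that forwards per-task keyword arguments (the loader below is hypothetical, not the harness's actual registry code):
```python
# Hypothetical illustration: a generic loader may pass keyword arguments to
# every task class it instantiates, so __init__ must accept **kwargs.
class DummyTask:
    VERSION = 0

    def __init__(self, **kwargs):
        # Extra kwargs (e.g. config overrides) are accepted even if unused here.
        self.config = {"metadata": {"version": self.VERSION}}


def load_task(task_cls, **task_kwargs):
    # Without **kwargs in __init__, this call raises TypeError whenever
    # task_kwargs is non-empty.
    return task_cls(**task_kwargs)


task = load_task(DummyTask, config=None)
print(task.config)
```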
group:
- fld
task: fld_default
dataset_path: hitachi-nlp/FLD.v2
dataset_name: default
......
@@ -20,9 +20,9 @@ This benchmark is constructed both from openly available datasets, as well as ne
 }
 ```
-### Groups and Tasks
+### Groups, Tags, and Tasks
-#### Groups
+#### Tags
 - `french_bench`: All tasks (non-perplexity based)
 - `french_bench_gen`: All official generative tasks
......
group:
tag:
- french_bench
- french_bench_mc
task: french_bench_arc_challenge
......
include: "_default_template_yaml"
group:
tag:
- french_bench
- french_bench_extra
description: "D'après l'information dans le contexte donné, quelle est la réponse à la question ?"
......
include: "_default_template_yaml"
group:
tag:
- french_bench
- french_bench_extra
description: "D'après l'information dans le contexte donné, donne la réponse à la question en citant quelques mots du contexte. Si il est impossible de répondre avec les informations du contexte, répond 'Impossible'."
......
include: "_default_template_yaml"
group:
tag:
- french_bench
- french_bench_extra
description: "D'après l'information présente dans le contexte, est il possible de répondre à la question ?"
......
include: "_default_template_yaml"
group:
tag:
- french_bench
- french_bench_gen
description: "D'après l'information dans le contexte donné, quelle question a été posée pour obtenir la réponse donnée ?"
......
include: "_default_template_yaml"
group:
tag:
- french_bench
- french_bench_gen
description: "D'après l'information dans le contexte donné, donne la réponse à la question en citant quelques mots du contexte. Si il est impossible de répondre avec les informations du contexte, répond 'Impossible'."
......
include: "_default_template_yaml"
group:
tag:
- french_bench
- french_bench_mc
description: "Répond au mieux en complétant la question avec une des réponses proposées."
......
group:
tag:
- french_bench
- french_bench_mc
task: french_bench_hellaswag
......
include: "_default_template_yaml"
group:
tag:
- french_bench
- french_bench_gen
description: "D'après l'information dans le contexte donné, donne la réponse à la question en citant quelques extraits du contexte."
......
group:
tag:
- french_bench_perplexity
task: french_bench_opus_perplexity
dataset_path: manu/opus100-en-fr
......