Commit 767c58b9 authored by lintangsutawika

Merge branch 'big-refactor' into update_docs

parents 3bfbddc4 759da8d5
include: crows_pairs_english.yaml
task: crows_pairs_french_socioeconomic
dataset_name: french
process_docs: !function utils.filter_socio
import datasets
def process_results(doc, results):
    lls, _ = zip(*results)
    likelihood1, likelihood2 = lls

    # Calculate the absolute difference in loglikelihoods.
    diff = abs(likelihood1 - likelihood2)

    # If the stereotypical sentence is more likely (higher loglikelihood),
    # count this as the model preferring the stereotyped sentence.
    acc = 1.0 if likelihood1 > likelihood2 else 0.0

    return {"likelihood_diff": diff, "pct_stereotype": acc}
def doc_to_choice(doc):
    return [doc["sent_more"], doc["sent_less"]]
def filter_dataset(dataset: datasets.Dataset, bias_type: str) -> datasets.Dataset:
    return dataset.filter(lambda example: example["bias_type"].startswith(bias_type))

def filter_race_color(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "race-color")

def filter_socio(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "socioeconomic")

def filter_gender(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "gender")

def filter_age(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "age")

def filter_religion(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "religion")

def filter_disability(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "disability")

def filter_orientation(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "sexual-orientation")

def filter_nationality(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "nationality")

def filter_appearance(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "physical-appearance")

def filter_autre(dataset: datasets.Dataset) -> datasets.Dataset:
    return filter_dataset(dataset, "autre")
# GLUE
### Paper
Title: `GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding`
Abstract: https://openreview.net/pdf?id=rJ4km2R5t7
The General Language Understanding Evaluation (GLUE) benchmark is a collection of
resources for training, evaluating, and analyzing natural language understanding
systems. GLUE consists of:
- A benchmark of nine sentence- or sentence-pair language understanding tasks built
on established existing datasets and selected to cover a diverse range of dataset
sizes, text genres, and degrees of difficulty, and
- A diagnostic dataset designed to evaluate and analyze model performance with
respect to a wide range of linguistic phenomena found in natural language.
Homepage: https://gluebenchmark.com/
### Citation
```
@inproceedings{wang-etal-2018-glue,
title = "{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding",
author = "Wang, Alex and
Singh, Amanpreet and
Michael, Julian and
Hill, Felix and
Levy, Omer and
Bowman, Samuel",
booktitle = "Proceedings of the 2018 {EMNLP} Workshop {B}lackbox{NLP}: Analyzing and Interpreting Neural Networks for {NLP}",
month = nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W18-5446",
doi = "10.18653/v1/W18-5446",
pages = "353--355",
abstract = "Human ability to understand language is \textit{general, flexible, and robust}. In contrast, most NLU models above the word level are designed for a specific task and struggle with out-of-domain data. If we aspire to develop models with understanding beyond the detection of superficial correspondences between inputs and outputs, then it is critical to develop a unified model that can execute a range of linguistic tasks across different domains. To facilitate research in this direction, we present the General Language Understanding Evaluation (GLUE, gluebenchmark.com): a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models. For some benchmark tasks, training data is plentiful, but for others it is limited or does not match the genre of the test set. GLUE thus favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks. While none of the datasets in GLUE were created from scratch for the benchmark, four of them feature privately-held test data, which is used to ensure that the benchmark is used fairly. We evaluate baselines that use ELMo (Peters et al., 2018), a powerful transfer learning technique, as well as state-of-the-art sentence representation models. The best models still achieve fairly low absolute scores. Analysis with our diagnostic dataset yields similarly weak performance over all phenomena tested, with some exceptions.",
}
```
### Groups and Tasks
#### Groups
* `glue`: Run all GLUE subtasks (a brief usage sketch follows the task list below).
#### Tasks
* `cola`
* `mnli`
* `mnli_mismatch`
* `mrpc`
* `qnli`
* `qqp`
* `rte`
* `sst`
* `wnli`
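A minimal sketch of running a couple of these tasks programmatically. The backend name `hf`, the model choice `pretrained=gpt2`, and the top-level `simple_evaluate` import are assumptions based on recent versions of the harness, not a prescribed setup:
```
import lm_eval

# Run two GLUE subtasks zero-shot; pass tasks=["glue"] to run the whole group.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",  # assumption: any HF causal LM id works here
    tasks=["cola", "rte"],
    num_fewshot=0,
)
print(results["results"])
```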
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
group: glue
task: cola
dataset_path: glue
dataset_name: cola
output_type: multiple_choice
training_split: train
validation_split: validation
doc_to_text: "{{sentence}}\nQuestion: Does this sentence make sense?\nAnswer:"
doc_to_target: label
doc_to_choice: ["no", "yes"]
should_decontaminate: true
doc_to_decontamination_query: sentence
metric_list:
- metric: mcc
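The `doc_to_text` field above is a Jinja-style template over the dataset's columns. As a rough standalone illustration (rendering the template with `jinja2` directly, outside the harness, on a made-up sentence):
```
import jinja2

template = jinja2.Template(
    "{{sentence}}\nQuestion: Does this sentence make sense?\nAnswer:"
)
# Made-up CoLA-style example; real rows come from the glue/cola dataset.
print(template.render(sentence="The book was written by John."))
# The book was written by John.
# Question: Does this sentence make sense?
# Answer:
```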
group: glue
task: mnli
dataset_path: glue
dataset_name: mnli
output_type: multiple_choice
training_split: train
validation_split: validation_matched
doc_to_text: !function utils.doc_to_text
doc_to_target: label
doc_to_choice: ["True", "Neither", "False"]
metric_list:
- metric: acc
include: default.yaml
task: mnli_mismatch
validation_split: validation_mismatched
test_split: test_mismatched
def doc_to_text(doc):
    return "{}\nQuestion: {} True, False or Neither?\nAnswer:".format(
        doc["premise"],
        doc["hypothesis"].strip()
        + ("" if doc["hypothesis"].strip().endswith(".") else "."),
    )
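For instance, on a made-up MNLI-style document, the function above renders the following prompt (a sketch that assumes `doc_to_text` from this file is in scope):
```
doc = {
    "premise": "A soccer game with multiple males playing.",
    "hypothesis": "Some men are playing a sport",
    "label": 0,
}
print(doc_to_text(doc))
# A soccer game with multiple males playing.
# Question: Some men are playing a sport. True, False or Neither?
# Answer:
```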
group: glue
task: mrpc
dataset_path: glue
dataset_name: mrpc
output_type: multiple_choice
training_split: train
validation_split: validation
doc_to_text: "Sentence 1: {{sentence1}}\nSentence 2: {{sentence2}}\nQuestion: Do both sentences mean the same thing?\nAnswer:"
doc_to_target: label
doc_to_choice: ["no", "yes"]
metric_list:
- metric: acc
- metric: f1
group:
- glue-promptsource
group: glue
task: qnli
dataset_path: glue
dataset_name: qnli
output_type: multiple_choice
training_split: train
validation_split: validation
use_prompt: "promptsource:have all you need"
doc_to_text: "{{question}}\n{{sentence}}\nQuestion: Does this response answer the question?\nAnswer:"
doc_to_target: label
doc_to_choice: ["yes", "no"]
metric_list:
- metric: acc
group: glue
task: qqp
dataset_path: glue
dataset_name: qqp
output_type: multiple_choice
training_split: train
validation_split: validation
test_split: test
doc_to_text: "\nSentence 1: {{sentence1}}\nSentence 2: {{sentence2}}\nAnswer:"
doc_to_target: label
doc_to_choice: ["no", "yes"]
metric_list:
- metric: acc
- metric: f1
group: glue
task: rte
dataset_path: glue
dataset_name: rte
output_type: multiple_choice
training_split: train
validation_split: validation
doc_to_text: "{{sentence1}}\nQuestion: {{sentence2}} True or False?\nAnswer:"
doc_to_target: label
doc_to_choice: ["True", "False"]
metric_list:
- metric: acc
group: glue
task: sst
dataset_path: glue
dataset_name: sst2
output_type: multiple_choice
training_split: train
validation_split: validation
doc_to_text: "{{sentence}}\nQuestion: Is this sentence positive or negative?\nAnswer:"
doc_to_target: label
doc_to_choice: ["negative", "positive"]
metric_list:
- metric: acc
group: glue
task: wnli
dataset_path: glue
dataset_name: wnli
output_type: multiple_choice
training_split: train
validation_split: validation
doc_to_text: "{{sentence1}}\nQuestion: {{sentence2}} True or False?\nAnswer:"
doc_to_target: label
doc_to_choice: ["False", "True"]
metric_list:
- metric: acc
@@ -31,6 +31,19 @@ Homepage: https://github.com/openai/grade-school-math
}
```
### Groups and Tasks
#### Groups
- `math_word_problems`
- `chain_of_thought`
- `self_consistency`
#### Tasks
- `gsm8k_yaml`
- `gsm8k_cot`: GSM8K with Chain-of-Thought
- `gsm8k_cot_self_consistency`: GSM8K with Chain-of-Thought and Self-Consistency
### Checklist
......
group:
- greedy_until
- math_word_problems
task: gsm8k_yaml
dataset_path: gsm8k
......
@@ -32,7 +32,13 @@ Homepage: https://aghie.github.io/head-qa/
}
```
### Groups and Tasks
#### Groups
- `headqa`: Evaluates `headqa_en` and `headqa_es`
#### Tasks
* `headqa_en` - English variant of HEAD-QA
* `headqa_es` - Spanish variant of HEAD-QA
......
group:
- multiple_choice
- headqa
task: headqa_en
dataset_path: EleutherAI/headqa
dataset_name: en
......
# HellaSwag
### Paper
Title: `HellaSwag: Can a Machine Really Finish Your Sentence?`
Abstract: https://arxiv.org/abs/1905.07830
Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select the most likely followup: "She sets her fingers on the keys." With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human level commonsense inference?
In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical 'Goldilocks' zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models.
Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges.
Homepage: `https://rowanzellers.com/hellaswag/`
@@ -21,6 +24,17 @@ Homepage: `https://rowanzellers.com/hellaswag/`
}
```
### Groups and Tasks
#### Groups
- Not part of a group yet
#### Tasks
- `hellaswag`
### Checklist
For adding novel benchmarks/datasets to the library:
......
@@ -7,9 +7,10 @@ output_type: multiple_choice
training_split: train
validation_split: validation
test_split: null
doc_to_text: "{% set text = activity_label ~ ': ' ~ ctx_a ~ ' ' ~ ctx_b.capitalize() %}{{text|trim|replace(' [title]', '. ')|regex_replace('\\[.*?\\]', '')|replace(' ', ' ')}}"
process_docs: !function utils.process_docs
doc_to_text: "{{query}}"
doc_to_target: "{{label}}"
doc_to_choice: "{{endings|map('trim')|map('replace', ' [title]', '. ')|map('regex_replace', '\\[.*?\\]', '')|map('replace', ' ', ' ')|list}}"
doc_to_choice: "{{choices}}"
metric_list:
  - metric: acc
    aggregation: mean
......
import datasets
import re
def preprocess(text):
    text = text.strip()
    # NOTE: Brackets are artifacts of the WikiHow dataset portion of HellaSwag.
    text = text.replace(" [title]", ". ")
    text = re.sub("\\[.*?\\]", "", text)
    text = text.replace("  ", " ")
    return text
def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
    def _process_doc(doc):
        ctx = doc["ctx_a"] + " " + doc["ctx_b"].capitalize()
        out_doc = {
            "query": preprocess(doc["activity_label"] + ": " + ctx),
            "choices": [preprocess(ending) for ending in doc["endings"]],
            "gold": int(doc["label"]),
        }
        return out_doc

    return dataset.map(_process_doc)
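A small sketch of what `process_docs` yields for one invented row (all field values, including the bracket artifact, are made up for illustration; assumes `datasets` and the functions above are in scope):
```
toy = datasets.Dataset.from_list(
    [
        {
            "activity_label": "Removing ice from car",
            "ctx_a": "A man is scraping frost off a windshield. [header] He works quickly.",
            "ctx_b": "then",
            "endings": ["he drives away.", "he plants a tree."],
            "label": "0",
        }
    ]
)
processed = process_docs(toy)
print(processed[0]["query"])
# Removing ice from car: A man is scraping frost off a windshield. He works quickly. Then
print(processed[0]["choices"], processed[0]["gold"])
# ['he drives away.', 'he plants a tree.'] 0
```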