- **use_prompt** (`str`, *optional*) — Name of the prompt in promptsource to use. If defined, this will overwrite `doc_to_text` and `doc_to_target`, and make `template_aliases` unused.
- **doc_to_text** (`Union[Callable, str]`, *optional*) — Jinja2, f-string, or function to process a sample into the appropriate input for the model.
- **doc_to_target** (`Union[Callable, str]`, *optional*) — Jinja2, f-string, or function to process a sample into the appropriate target output for the model.
- **doc_to_choice** (`Union[Callable, str]`, *optional*) — Jinja2, f-string, or function to process a sample into a list of possible string choices for `multiple_choice` tasks.
- **gold_alias** (`str`, *optional*, defaults to None) — If provided, used to generate the reference answer that is scored against. Useful when `doc_to_target` should be the "target string" appended to each few-shot exemplar's input, but the value passed to the metric function as `gold` should come from `gold_alias` instead.
- **fewshot_delimiter** (`str`, *optional*, defaults to `"\n\n"`) — String to insert between few-shot examples.
- **target_delimiter** (`str`, *optional*, defaults to `" "`) — String to insert between the input and target output for the datapoint being tested (see the example sketch after this list).
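For illustration, here is a minimal sketch of how several of these fields might appear together in a task's YAML config. The field values below are hypothetical and not taken from any real task:

```yaml
# hypothetical config fragment: formats each document as "Question: ...\nAnswer: <target>"
doc_to_text: "Question: {{question}}\nAnswer:"  # Jinja2 template for the model input
doc_to_target: "{{answer}}"                     # Jinja2 template for the target string
target_delimiter: " "                           # inserted between input and target
fewshot_delimiter: "\n\n"                       # inserted between few-shot examples
```

With these settings, each few-shot exemplar renders as the question line followed by `Answer:`, a single space, and the target, and consecutive exemplars are separated by a blank line.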
title = "Adversarial {NLI}: A New Benchmark for Natural Language Understanding",
author = "Nie, Yixin and
Williams, Adina and
Dinan, Emily and
Bansal, Mohit and
Weston, Jason and
Kiela, Douwe",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
year = "2020",
publisher = "Association for Computational Linguistics",
}
```
### Subtasks
The following subtasks are defined in this folder:
* `anli_r1`: The data collected adversarially in the first round.
* `anli_r2`: The data collected adversarially in the second round, after training on the previous round's data.
* `anli_r3`: The data collected adversarially in the third round, after training on data from the previous rounds.
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
template_aliases:"{%setanswer_choices=choices['text']%}{%setgold=choices.label.index(answerKey)%}"# set the list of possible answer choices, and set what this doc's gold answer is (set what ds column used, and what)
doc_to_text:"Question:{{question}}\nAnswer:"
doc_to_target:"{{answer_choices[gold]}}"
gold_alias:"{{gold}}"# this will be cast to an int.
template_aliases:"{%setanswer_choices=choices['text']%}{%setgold=choices.label.index(answerKey)%}"# set the list of possible answer choices, and set what this doc's gold answer is (set what ds column used, and what)
template_aliases:"{%setanswer_choices=answers|map(attribute='atext')|list%}{%setgold=ra-1%}"# set the list of possible answer choices, and set what this doc's gold label idx is
doc_to_text:"Question:{{qtext}}\nAnswer:"
doc_to_text:"Question:{{qtext}}\nAnswer:"
doc_to_target:"{{answer_choices[gold]}}"
doc_to_target:"{{ra-1}}"
gold_alias:"{{gold}}"# this will be cast to an int.
doc_to_choice:"{{answers|map(attribute='atext')|list}}"# this will be cast to an int.