# [Adversarial NLI: A New Benchmark for Natural Language Understanding](https://arxiv.org/abs/1910.14599)

## Abstract

Adversarial NLI (ANLI) is a dataset collected via an iterative, adversarial human-and-model-in-the-loop procedure. It consists of three rounds that progressively increase in difficulty and complexity, and each question-answer pair includes annotator-provided explanations.

## Subtasks
The tasks defined in this folder are listed below; a short loading sketch follows the list.
* `anli_r1`: The data collected adversarially in the first round.
* `anli_r2`: The data collected adversarially in the second round, after training on the previous round's data.
* `anli_r3`: The data collected adversarially in the third round, after training on the data from all previous rounds.
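
The per-round splits can be inspected directly from the Hugging Face Hub, which hosts ANLI under the dataset id `anli`. The sketch below is illustrative only and is not part of this task's code; it assumes the `datasets` library is installed, with split and field names taken from the public dataset card.

```python
# Minimal sketch: inspect one ANLI round via the Hugging Face `datasets` library.
from datasets import load_dataset

# Splits are per round: train_r1/dev_r1/test_r1, train_r2/..., train_r3/...
anli = load_dataset("anli")

example = anli["dev_r1"][0]

# Each example pairs a premise with a hypothesis under a 3-way label
# (0 = entailment, 1 = neutral, 2 = contradiction) and carries the
# annotator-provided explanation in the `reason` field.
print(example["premise"])
print(example["hypothesis"])
print(example["label"], example["reason"])
```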
## Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?

## Citation

```
@inproceedings{nie-etal-2020-adversarial,
    ...
}
```

## Links

Homepage: https://github.com/facebookresearch/anli