# SQuADv2

### Paper

Title: `Know What You Don’t Know: Unanswerable Questions for SQuAD`
Abstract: https://arxiv.org/abs/1806.03822

Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset,
consisting of questions posed by crowdworkers on a set of Wikipedia articles,
where the answer to every question is a segment of text, or span, from the
corresponding reading passage, or the question might be unanswerable.
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable
questions written adversarially by crowdworkers to look similar to answerable ones.
To do well on SQuAD2.0, systems must not only answer questions when possible, but
also determine when no answer is supported by the paragraph and abstain from answering.
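The answer-or-abstain behavior described above can be sketched in a few lines. This is an illustrative sketch only, not the harness's actual scoring code; `should_answer` and `squad2_em` are hypothetical helper names, and the threshold-based abstention rule follows the common SQuAD2.0 evaluation setup in which unanswerable questions have an empty gold answer list.

```python
def should_answer(span_score: float, no_answer_score: float,
                  threshold: float = 0.0) -> bool:
    """Output the best span only when its score beats the "no answer"
    score by a tuned threshold; otherwise abstain."""
    return span_score - no_answer_score > threshold


def squad2_em(prediction: str, gold_answers: list[str]) -> float:
    """Exact-match credit for a single example.

    Unanswerable questions carry an empty gold-answer list, so credit
    is given only when the system abstains (empty prediction).
    """
    if not gold_answers:
        return float(prediction.strip() == "")
    norm = prediction.strip().lower()
    return float(any(norm == g.strip().lower() for g in gold_answers))
```

The threshold is typically tuned on the development set, since it trades off recall on answerable questions against precision on unanswerable ones.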

Homepage: https://rajpurkar.github.io/SQuAD-explorer/


### Citation

```
@misc{rajpurkar2018know,
    title={Know What You Don't Know: Unanswerable Questions for SQuAD},
    author={Pranav Rajpurkar and Robin Jia and Percy Liang},
    year={2018},
    eprint={1806.03822},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

### Groups and Tasks

#### Groups

* `squadv2_complete`: Runs both `squadv2` and `squadv2_noans_loglikelihood`

#### Tasks

* `squadv2`: Default SQuADv2 extractive question-answering task.
* `squadv2_noans_loglikelihood`: Additional task that measures the log-likelihood the model assigns to predicting that a question has no answer.
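Assuming this README sits inside lm-evaluation-harness, the group and tasks above would typically be invoked through the `lm_eval` CLI. This is a usage sketch; the model name below is a placeholder, not a recommendation.

```shell
# Run the full group (both tasks); the pretrained model is a placeholder.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks squadv2_complete

# Or run a single task:
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks squadv2
```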

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
  * [ ] Have you referenced the original paper that introduced the task?
  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?


If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?