# StoryCloze

### Paper

Title: `A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories`
Abstract: `https://arxiv.org/abs/1604.01696`

Homepage: https://cs.rochester.edu/nlp/rocstories/

'Story Cloze Test' is a commonsense reasoning framework for evaluating story understanding, story generation, and script learning. The test requires a system to choose the correct ending to a four-sentence story.
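
To make the setup concrete, below is a minimal sketch of the usual log-likelihood multiple-choice approach: score each candidate ending by its log-probability under a causal language model conditioned on the story, and pick the higher-scoring one. This is illustrative only, not the harness's own implementation; `gpt2` is a placeholder model, and the story is the example from the paper.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Four-sentence context and two candidate endings (example from the paper).
context = (
    "Karen was assigned a roommate her first year of college. "
    "Her roommate asked her to go to a nearby city for a concert. "
    "Karen agreed happily. The show was absolutely exhilarating."
)
endings = [
    "Karen became good friends with her roommate.",
    "Karen hated her roommate.",
]

def ending_logprob(context, ending):
    """Sum of log-probabilities of the ending tokens, conditioned on the context."""
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    ids = tokenizer(context + " " + ending, return_tensors="pt").input_ids
    with torch.no_grad():
        # logits at position i predict the token at position i + 1
        log_probs = torch.log_softmax(model(ids).logits[0, :-1], dim=-1)
    return sum(
        log_probs[i, ids[0, i + 1]].item()
        for i in range(ctx_len - 1, ids.shape[1] - 1)
    )

scores = [ending_logprob(context, e) for e in endings]
print("Predicted ending:", endings[scores.index(max(scores))])
```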


### Citation

```
@misc{mostafazadeh2016corpus,
      title={A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories},
      author={Nasrin Mostafazadeh and
      Nathanael Chambers and
      Xiaodong He and
      Devi Parikh and
      Dhruv Batra and
      Lucy Vanderwende and
      Pushmeet Kohli and
      James Allen},
      year={2016},
      eprint={1604.01696},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Groups and Tasks

#### Groups

* `storycloze`

#### Tasks

* `storycloze_2016`
* `storycloze_2018`
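
As a hedged example, the tasks above can be run through the harness's Python API (the CLI equivalent is `lm_eval --model hf --model_args pretrained=gpt2 --tasks storycloze_2016`). The model here is a placeholder, on older harness versions the entry point may be `lm_eval.evaluator.simple_evaluate`, and note that the StoryCloze data is not freely redistributable, so the dataset files may need to be obtained from the authors first.

```
import lm_eval

# `gpt2` is a placeholder; any supported HF causal LM should work.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",
    tasks=["storycloze_2016", "storycloze_2018"],
)
print(results["results"])
```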

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
  * [ ] Have you referenced the original paper that introduced the task?
  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?


If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?