Commit 835cc40e authored by lintangsutawika

merged latest and added altworld files

parents 8da401e0 c9bbec6e
@@ -17,3 +17,5 @@ metric_list:
   - metric: acc_norm
     aggregation: mean
     higher_is_better: true
+metadata:
+  - version: 1.0
@@ -108,7 +108,7 @@ def _num_cpu_cores():
 class _SCROLLSTask(Task):
-    VERSION = 0
+    VERSION = 1
     DATASET_PATH = "tau/scrolls"
     DATASET_NAME = None
     PRUNE_TOKENIZERS = None
# Social IQA
### Paper
Title: Social IQA: Commonsense Reasoning about Social Interactions
Abstract: https://arxiv.org/abs/1904.09728
> We introduce Social IQa, the first large-scale benchmark for commonsense reasoning about social situations. Social IQa contains 38,000 multiple choice questions for probing emotional and social intelligence in a variety of everyday situations (e.g., Q: "Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?" A: "Make sure no one else could hear"). Through crowdsourcing, we collect commonsense questions along with correct and incorrect answers about social interactions, using a new framework that mitigates stylistic artifacts in incorrect answers by asking workers to provide the right answer to a different but related question. Empirical results show that our benchmark is challenging for existing question-answering models based on pretrained language models, compared to human performance (>20% gap). Notably, we further establish Social IQa as a resource for transfer learning of commonsense knowledge, achieving state-of-the-art performance on multiple commonsense reasoning tasks (Winograd Schemas, COPA).
Homepage: https://allenai.org/data/socialiqa
### Citation
```
@inproceedings{sap2019social,
title={Social IQa: Commonsense Reasoning about Social Interactions},
author={Sap, Maarten and Rashkin, Hannah and Chen, Derek and Le Bras, Ronan and Choi, Yejin},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={4463--4473},
year={2019}
}
```
### Checklist
For adding novel benchmarks/datasets to the library:
* [X] Is the task an existing benchmark in the literature?
* [X] Have you referenced the original paper that introduced the task?
* [X] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? The original paper doesn't have an associated implementation, but there is an official entry in [BigBench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/social_iqa). I use the same prompting format as BigBench.
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
task: social_iqa
dataset_path: social_i_qa
dataset_name: null
output_type: multiple_choice
training_split: train
validation_split: validation
doc_to_text: "Q: {{context}} {{question}}\nA:"
target_delimiter: " "
doc_to_choice: ["{{answerA}}", "{{answerB}}", "{{answerC}}"]
doc_to_target: "{{label}}"
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
metadata:
  - version: 0.0
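The config above drives prompting entirely through Jinja templates. Below is a minimal sketch of how they expand, assuming a `social_i_qa` record laid out as in the Hugging Face dataset (the `doc` values are illustrative, not taken from the data, and this mirrors the YAML, not the harness's internal code path):

```python
# Minimal sketch of the template expansion defined in the YAML above;
# illustrative only, not the harness's actual implementation.
from jinja2 import Template

doc = {  # hypothetical social_i_qa-style record
    "context": "Jordan leaned towards Tracy.",
    "question": "Why did Jordan do this?",
    "answerA": "Make sure no one else could hear",
    "answerB": "Hurt Tracy",
    "answerC": "Fall over",
    "label": "1",
}

prompt = Template("Q: {{context}} {{question}}\nA:").render(doc)
choices = [Template(t).render(doc)
           for t in ("{{answerA}}", "{{answerB}}", "{{answerC}}")]

# With target_delimiter " ", each scored continuation is " " + choice;
# acc checks whether the top-loglikelihood choice matches the gold choice
# selected by the rendered doc_to_target value.
print(prompt)
print(choices)
```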
@@ -50,7 +50,7 @@ def _squad_agg(key, items):
 @register_task("squadv2")
 class SQuAD2(Task):
-    VERSION = 1
+    VERSION = 2
     DATASET_PATH = "squad_v2"
     DATASET_NAME = None
@@ -14,3 +14,5 @@ metric_list:
   - metric: acc
     aggregation: mean
     higher_is_better: true
+metadata:
+  - version: 1.0
@@ -13,3 +13,5 @@ should_decontaminate: true
 doc_to_decontamination_query: passage
 metric_list:
   - metric: acc
+metadata:
+  - version: 2.0
@@ -22,3 +22,5 @@ metric_list:
     higher_is_better: true
     ignore_case: true
     ignore_punctuation: true
+metadata:
+  - version: 0.0
@@ -18,3 +18,5 @@ metric_list:
     higher_is_better: true
     ignore_case: true
     ignore_punctuation: true
+metadata:
+  - version: 0.0
@@ -13,3 +13,5 @@ metric_list:
   - metric: acc
   - metric: f1
     aggregation: !function "aggregate.cb_multi_fi"
+metadata:
+  - version: 1.0
@@ -21,3 +21,5 @@ metric_list:
   - metric: !function "t5_utils.mean_3class_f1"
     aggregation: !function "t5_utils.agg_mean_3class_f1"
     higher_is_better: true
+metadata:
+  - version: 0.0
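The `!function` tags above resolve to Python callables shipped alongside the task config. As a rough sketch of the shape such a metric/aggregation pair can take (the signatures and the macro-F1 body are assumptions for illustration, not the repo's actual `t5_utils`):

```python
# Illustrative metric/aggregation pair in the style of the `!function`
# hooks above; signatures and logic are assumptions, not the real t5_utils.
from sklearn.metrics import f1_score

def mean_3class_f1(references, predictions):
    # Per-example hook: pass the (gold, prediction) pair through
    # so the aggregation can see the whole split at once.
    return references[0], predictions[0]

def agg_mean_3class_f1(items):
    # Split-level aggregation: mean (macro) F1 over the three CB classes.
    golds, preds = zip(*items)
    return f1_score(golds, preds, average="macro")
```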
@@ -11,3 +11,5 @@ doc_to_target: !function utils.doc_to_target
 doc_to_choice: !function utils.doc_to_choice
 metric_list:
   - metric: acc
+metadata:
+  - version: 1.0
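Here the doc-processing hooks are functions rather than templates: `doc_to_choice` should yield the candidate strings for a record and `doc_to_target` the gold answer. A hypothetical pair (the field names are assumptions; the repo's `utils` module may differ):

```python
# Hypothetical doc-processing callables matching the `!function` entries
# above; field names are illustrative, not the real utils module.
def doc_to_choice(doc):
    # Return the candidate continuations for this record.
    return [doc["choice1"], doc["choice2"]]

def doc_to_target(doc):
    # Return the index of the gold choice within doc_to_choice(doc).
    return int(doc["label"])
```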
@@ -18,3 +18,5 @@ metric_list:
     higher_is_better: true
     ignore_case: true
     ignore_punctuation: true
+metadata:
+  - version: 0.0
@@ -11,3 +11,5 @@ doc_to_target: label
 doc_to_choice: "['''{{answer}}\\nIs the answer correct? yes''', '''{{answer}}\\nIs the answer correct? no''']"
 metric_list:
   - metric: acc
+metadata:
+  - version: 2.0
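Unlike the list-form `doc_to_choice` used elsewhere, this one is a single Jinja string that renders to a Python list literal, turning each choice into the model's answer plus a yes/no verification suffix. A small sketch of that two-step expansion (the render-then-parse split is an assumption about the harness's handling, and the record is hypothetical):

```python
# Sketch of the string-form doc_to_choice above: render the Jinja template,
# then parse the resulting list literal. Assumed semantics, not the
# harness's exact code path.
import ast
from jinja2 import Template

template = (
    "['''{{answer}}\nIs the answer correct? yes''', "
    "'''{{answer}}\nIs the answer correct? no''']"
)
doc = {"answer": "Paris"}  # hypothetical record
choices = ast.literal_eval(Template(template).render(doc))
# -> ['Paris\nIs the answer correct? yes', 'Paris\nIs the answer correct? no']
```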
@@ -19,3 +19,5 @@ metric_list:
   - metric: !function t5_utils.em
     aggregation: !function t5_utils.agg_em
     higher_is_better: true
+metadata:
+  - version: 0.0
@@ -16,3 +16,5 @@ metric_list:
   - metric: em
     higher_is_better: True
     aggregation: mean
+metadata:
+  - version: 1.0
@@ -18,3 +18,5 @@ metric_list:
   - metric: !function t5_utils.f1
     aggregation: !function t5_utils.squad_f1_agg
     higher_is_better: true
+metadata:
+  - version: 0.0
@@ -11,3 +11,5 @@ doc_to_target: label
 doc_to_choice: ['True', 'False']
 metric_list:
   - metric: acc
+metadata:
+  - version: 0.0
@@ -18,3 +18,5 @@ metric_list:
     higher_is_better: true
     ignore_case: true
     ignore_punctuation: true
+metadata:
+  - version: 0.0
@@ -11,3 +11,5 @@ doc_to_target: label
 doc_to_choice: ['no', 'yes']
 metric_list:
   - metric: acc
+metadata:
+  - version: 1.0