Unverified Commit a0466f01 authored by Irina Proskurina, committed by GitHub

Add Moral Stories (#2653)

* Add moral stories task

* Add moral stories task

* Create README.md

* Update README.md

* Update line endings in moral_stories files
parent 5c006ed4
@@ -86,6 +86,7 @@
| [mmlu_pro](mmlu_pro/README.md) | A refined set of MMLU, integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options. | English |
| [mmlusr](mmlusr/README.md) | Variation of MMLU designed to be more rigorous. | English |
| model_written_evals | Evaluation tasks auto-generated for evaluating a collection of AI Safety concerns. | |
| [moral_stories](moral_stories/README.md) | A crowd-sourced dataset of structured narratives that describe normative and norm-divergent actions taken by individuals to accomplish certain intentions in concrete situations. | English |
| [mutual](mutual/README.md) | A retrieval-based dataset for multi-turn dialogue reasoning. | English |
| [nq_open](nq_open/README.md) | Open domain question answering tasks based on the Natural Questions dataset. | English |
| [okapi/arc_multilingual](okapi/arc_multilingual/README.md) | Tasks that involve reading comprehension and information retrieval challenges. | Multiple (31 languages) **Machine Translated.** |
...
# Moral Stories

### Paper

Title: `Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences`

Paper link: `https://aclanthology.org/2021.emnlp-main.54/`

Moral Stories is a crowd-sourced dataset of structured narratives that describe normative and norm-divergent actions taken by individuals to accomplish certain intentions in concrete situations, together with their respective consequences. Every story in the dataset consists of seven sentences, belonging to the following categories:
- Norm: A guideline for social conduct generally observed by most people in everyday situations.
- Situation: Setting of the story that introduces story participants and describes their environment.
- Intention: Reasonable goal that one of the story participants (the actor) wants to fulfill.
- Normative action: An action by the actor that fulfills the intention and observes the norm.
- Normative consequence: Possible effect of the normative action on the actor's environment.
- Divergent action: An action by the actor that fulfills the intention and diverges from the norm.
- Divergent consequence: Possible effect of the divergent action on the actor's environment.
Homepage: `https://github.com/demelin/moral_stories`
The implementation is based on the paper "Histoires Morales: A French Dataset for Assessing Moral Alignment." The source code is available at: `https://github.com/upunaprosk/histoires-morales`.
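
As a quick orientation (a minimal sketch, not part of this task's code), the records can be inspected directly from the Hugging Face Hub. The snippet assumes the `demelin/moral_stories` dataset with the `full` configuration, as referenced by the task config below, and prints only the fields this task relies on; the dataset also stores the two consequence sentences.

```python
from datasets import load_dataset

# Load the "full" configuration used by the task config (train split only).
ds = load_dataset("demelin/moral_stories", "full", split="train")

example = ds[0]
# Fields consumed by this task; consequence fields and other metadata
# are present in the records as well.
for field in ("norm", "situation", "intention", "moral_action", "immoral_action"):
    print(f"{field}: {example[field]}")
```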
### Citation
```
@inproceedings{emelin-etal-2021-moral,
title = "Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences",
author = "Emelin, Denis and
Le Bras, Ronan and
Hwang, Jena D. and
Forbes, Maxwell and
Choi, Yejin",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.54",
doi = "10.18653/v1/2021.emnlp-main.54",
pages = "698--718",
abstract = "In social settings, much of human behavior is governed by unspoken rules of conduct rooted in societal norms. For artificial systems to be fully integrated into social environments, adherence to such norms is a central prerequisite. To investigate whether language generation models can serve as behavioral priors for systems deployed in social settings, we evaluate their ability to generate action descriptions that achieve predefined goals under normative constraints. Moreover, we examine if models can anticipate likely consequences of actions that either observe or violate known norms, or explain why certain actions are preferable by generating relevant norm hypotheses. For this purpose, we introduce Moral Stories, a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning. Finally, we propose decoding strategies that combine multiple expert models to significantly improve the quality of generated actions, consequences, and norms compared to strong baselines.",
}
```
### Groups, Tags, and Tasks
#### Groups
* Not part of a group yet
#### Tags
* `moral_stories`: Evaluation of the likelihoods of moral versus immoral actions. Accuracy is the fraction of examples for which the moral action is assigned a higher likelihood than the immoral one (see the run sketch after the task list below).
#### Tasks
* `moral_stories.yaml`
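
A minimal sketch of running the task through the harness's Python API, assuming the top-level `simple_evaluate` entry point; the model checkpoint and batch size are placeholders, not prescriptions.

```python
import lm_eval

# Evaluate a HuggingFace causal LM on the moral_stories task.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",  # placeholder checkpoint
    tasks=["moral_stories"],
    batch_size=8,
)
print(results["results"]["moral_stories"])  # reports acc and acc_norm
```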
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
tag:
  - moral_stories
task: moral_stories
dataset_path: demelin/moral_stories
dataset_name: full
output_type: multiple_choice
test_split: train
process_docs: !function utils.process_docs
doc_to_text: "{{query}}"
doc_to_target: "{{label}}"
doc_to_choice: "choices"
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
metadata:
  version: 1.0
import datasets


def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
    def _process_doc(doc):
        # Build the prompt context from the norm, situation, and intention
        # sentences, capitalizing each one.
        ctx = (
            doc["norm"].capitalize()
            + " "
            + doc["situation"].capitalize()
            + " "
            + doc["intention"].capitalize()
        )
        # The two answer candidates: the moral action first, the immoral one second.
        choices = [doc["moral_action"], doc["immoral_action"]]
        out_doc = {
            "query": ctx,
            "choices": choices,
            "label": 0,  # index of the gold (moral) action
        }
        return out_doc

    return dataset.map(_process_doc)
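
For reference, a minimal sketch (not part of this change) of what `process_docs` produces for a single hand-written record; the field values are illustrative, not drawn from the dataset.

```python
import datasets

from utils import process_docs  # the module defined above

# A toy record containing the five fields process_docs reads (values are made up).
toy = datasets.Dataset.from_list(
    [
        {
            "norm": "it's good to return borrowed items.",
            "situation": "alex borrowed a ladder from a neighbor last week.",
            "intention": "alex wants to clear space in the garage.",
            "moral_action": "Alex returns the ladder to the neighbor.",
            "immoral_action": "Alex leaves the ladder out on the curb.",
        }
    ]
)

doc = process_docs(toy)[0]
print(doc["query"])    # consumed by doc_to_text: "{{query}}"
print(doc["choices"])  # consumed by doc_to_choice: "choices"
print(doc["label"])    # 0 -> the moral action is the gold answer
```

Because `label` is always 0, `acc` measures how often the model prefers the moral action, and `acc_norm` applies the harness's length-normalized likelihood before the comparison.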