Unverified Commit 7d4e92fa authored by Hailey Schoelkopf, committed by GitHub

Merge pull request #659 from EleutherAI/headqa

[Refactor] Add Headqa
parents 884ba785 287f7efc
@@ -31,8 +31,8 @@ Boxes should be checked iff tasks are implemented in the refactor and tested for
 - [x] OpenBookQA
 - [ ] SQuADv2 (WIP)
 - [x] RACE
-- [ ] HeadQA (WIP)
-- [ ] MathQA
+- [x] HeadQA (WIP)
+- [ ] MathQA (WIP)
 - [ ] WebQs
 - [ ] WSC273
 - [x] Winogrande
@@ -19,6 +19,3 @@ metric_list:
   - metric: acc_norm
     aggregation: mean
     higher_is_better: true
-  - metric: acc_mutual_info
-    aggregation: mean
-    higher_is_better: true
# HEAD-QA
### Paper
HEAD-QA: A Healthcare Dataset for Complex Reasoning
https://arxiv.org/pdf/1906.04701.pdf
HEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from the exams used to obtain access to specialized positions in the
Spanish healthcare system, and they are challenging even for highly specialized humans. The exams are designed by the Ministerio
de Sanidad, Consumo y Bienestar Social.
The dataset contains questions about the following topics: medicine, nursing, psychology, chemistry, pharmacology and biology.
Homepage: https://aghie.github.io/head-qa/
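
The task configs below read the raw fields `qtext`, `answers` (a list of objects with an `atext` string), and `ra` (the 1-indexed right answer). A minimal sketch for loading and inspecting one example with the Hugging Face `datasets` library, assuming the `EleutherAI/headqa` Hub copy referenced by those configs:

```python
# Sketch: load one HEAD-QA example from the Hub copy used by the task configs.
# Field names (qtext, answers/atext, ra) follow the Jinja templates below.
from datasets import load_dataset

ds = load_dataset("EleutherAI/headqa", "en", split="validation")
doc = ds[0]

print(doc["qtext"])                           # question text
print([a["atext"] for a in doc["answers"]])   # candidate answer strings
print(doc["ra"])                              # 1-indexed gold answer
```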
### Citation
```
@inproceedings{vilares-gomez-rodriguez-2019-head,
title = "{HEAD}-{QA}: A Healthcare Dataset for Complex Reasoning",
author = "Vilares, David and
G{\'o}mez-Rodr{\'i}guez, Carlos",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1092",
doi = "10.18653/v1/P19-1092",
pages = "960--966",
abstract = "We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work.",
}
```
### Subtasks
* `headqa_en` - English variant of HEAD-QA
* `headqa_es` - Spanish variant of HEAD-QA
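
Both subtasks can be selected by name once registered. Below is a hedged sketch of running them through the harness's Python entry point; the `hf` model type string and `pretrained=gpt2` model args are illustrative placeholders, and exact argument names may vary between harness versions.

```python
# Sketch: evaluate both HEAD-QA variants via the harness's evaluator.
# "hf" / "pretrained=gpt2" are placeholders; adjust to your model and version.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",
    tasks=["headqa_en", "headqa_es"],
)
print(results["results"])
```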
### Checklist
* [x] Is the task an existing benchmark in the literature?
  * [ ] Have you referenced the original paper that introduced the task?
  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
  * [x] Same as LM Evaluation Harness v0.3.0 implementation
group:
  - multiple_choice
task: headqa_en
dataset_path: EleutherAI/headqa
dataset_name: en
output_type: multiple_choice
training_split: train
validation_split: validation
test_split: test
template_aliases: "{% set answer_choices = answers|map(attribute='atext')|list %}{% set gold = ra - 1 %}" # set the list of possible answer choices, and set what this doc's gold label idx is
doc_to_text: "Question: {{qtext}}\nAnswer:"
doc_to_target: "{{answer_choices[gold]}}"
gold_alias: "{{gold}}" # this will be cast to an int.
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
include: headqa_en.yaml
task: headqa_es
dataset_name: es
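
`headqa_es.yaml` simply includes `headqa_en.yaml` and overrides `task` and `dataset_name` to point at the Spanish configuration. To make the templating above concrete, here is a sketch of what `template_aliases`, `doc_to_text`, `doc_to_target`, and `gold_alias` compute for a single document, re-implemented in plain Python rather than through the harness; the sample document is invented for illustration.

```python
# Sketch (plain Python) of what the headqa_en templates compute for one doc.
# The sample document below is made up for illustration; real docs come from
# the EleutherAI/headqa dataset.
doc = {
    "qtext": "Which organ is chiefly responsible for producing insulin?",
    "answers": [
        {"atext": "Liver"},
        {"atext": "Pancreas"},
        {"atext": "Kidney"},
        {"atext": "Spleen"},
    ],
    "ra": 2,  # 1-indexed gold answer, as stored in the dataset
}

# template_aliases: build the choice list and a 0-indexed gold label
answer_choices = [a["atext"] for a in doc["answers"]]
gold = doc["ra"] - 1

prompt = f"Question: {doc['qtext']}\nAnswer:"  # doc_to_text
target = answer_choices[gold]                  # doc_to_target -> "Pancreas"
gold_idx = int(gold)                           # gold_alias (cast to int)

print(prompt)
print(target, gold_idx)
```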
@@ -18,6 +18,3 @@ metric_list:
   - metric: acc_norm
     aggregation: mean
     higher_is_better: true
-  - metric: acc_mutual_info
-    aggregation: mean
-    higher_is_better: true
@@ -18,6 +18,3 @@ metric_list:
   - metric: acc_norm
     aggregation: mean
     higher_is_better: true
-  - metric: acc_mutual_info
-    aggregation: mean
-    higher_is_better: true