Commit b13753cd authored by haileyschoelkopf

Merge branch 'main' into fix-task-table

parents 8ea9c59d 5c25dd55
"dataset_name": "zho_Hant"
"fewshot_split": "zho_Hant"
"include": "_default_template_yaml"
"task": "belebele_zho_Hant"
"test_split": "zho_Hant"
"dataset_name": "zsm_Latn"
"fewshot_split": "zsm_Latn"
"include": "_default_template_yaml"
"task": "belebele_zsm_Latn"
"test_split": "zsm_Latn"
"dataset_name": "zul_Latn"
"fewshot_split": "zul_Latn"
"include": "_default_template_yaml"
"task": "belebele_zul_Latn"
"test_split": "zul_Latn"
@@ -17,3 +17,5 @@ filter_list:
       - function: "regex"
         regex_pattern: "The answer is (\\-?[0-9\\.\\,]+)"
       - function: "take_first"
+metadata:
+  version: 1.0
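For reference, the `regex` filter extracts the final numeric answer from a generated chain of thought, and `take_first` then keeps only the first match. A standalone Python equivalent of that extraction (not the harness's filter classes; the `"[invalid]"` fallback is an assumption about unmatched outputs):

```python
import re

# The filter's pattern, unescaped from the YAML above.
ANSWER_RE = re.compile(r"The answer is (\-?[0-9\.\,]+)")

def extract_answer(generation: str) -> str:
    matches = ANSWER_RE.findall(generation)
    # take_first: keep only the first extracted answer; fall back if none matched.
    return matches[0] if matches else "[invalid]"

print(extract_answer("So 5 * 4 = 20. The answer is 20"))  # -> "20"
print(extract_answer("no final answer given"))            # -> "[invalid]"
```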
@@ -9,3 +9,5 @@ generation_kwargs:
     - "</s>"
   do_sample: false
   temperature: 0.0
+metadata:
+  version: 1.0
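These kwargs stop generation at `</s>` and force greedy decoding. In Hugging Face terms they map roughly onto a `generate()` call like the following sketch (the model name is a placeholder, and the harness handles stop sequences itself rather than relying solely on `eos_token_id`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Q: 2 + 2 = ?\nA:", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=False,   # greedy decoding; temperature has no effect when sampling is off
    max_new_tokens=64,
    eos_token_id=tok.eos_token_id,  # analogous to stopping on "</s>"
)
print(tok.decode(out[0], skip_special_tokens=True))
```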
# MultiMedQA (multiple-choice subset)
### Paper
Title: Large Language Models Encode Clinical Knowledge
Abstract: https://arxiv.org/abs/2212.13138
A benchmark combining four existing multiple-choice question answering datasets spanning professional medical exams and research queries.
### Citation
```
@Article{Singhal2023,
author={Singhal, Karan and Azizi, Shekoofeh and Tu, Tao and Mahdavi, S. Sara and Wei, Jason and Chung, Hyung Won and Scales, Nathan and Tanwani, Ajay and Cole-Lewis, Heather and Pfohl, Stephen and Payne, Perry and Seneviratne, Martin and Gamble, Paul and Kelly, Chris and Babiker, Abubakr and Sch{\"a}rli, Nathanael and Chowdhery, Aakanksha and Mansfield, Philip and Demner-Fushman, Dina and Ag{\"u}era y Arcas, Blaise and Webster, Dale and Corrado, Greg S. and Matias, Yossi and Chou, Katherine and Gottweis, Juraj and Tomasev, Nenad and Liu, Yun and Rajkomar, Alvin and Barral, Joelle and Semturs, Christopher and Karthikesalingam, Alan and Natarajan, Vivek},
title={Large language models encode clinical knowledge},
journal={Nature},
year={2023},
month={Aug},
day={01},
volume={620},
number={7972},
pages={172-180},
issn={1476-4687},
doi={10.1038/s41586-023-06291-2},
url={https://doi.org/10.1038/s41586-023-06291-2}
}
```
### Tasks
* [PubMedQA](https://pubmedqa.github.io/) - 1,000 expert-labeled Q&A pairs, where a question and a corresponding PubMed abstract are given as context and a yes/maybe/no answer must be produced. Unlike the rest of the tasks in this suite, PubMedQA is a closed-domain Q&A task.
* [MedQA](https://github.com/jind11/MedQA) - US Medical Licensing Examination (USMLE) questions with 4 or 5 possible answers. Typically, only the 4-option questions are used.
* [MedMCQA](https://medmcqa.github.io/) - 4-option multiple-choice questions from Indian medical entrance examinations, >191k total questions.
* [MMLU](https://arxiv.org/abs/2009.03300) - 4-option multiple-choice exam questions from a variety of domains. The following 6 domains are used here:
* Anatomy
* Clinical Knowledge
* College Medicine
* Medical Genetics
* Professional Medicine
* College Biology
Note that MultiMedQA also includes some short-form and long-form Q&A tasks (LiveQA, MedicationQA, HealthSearchQA). Evaluation on those tasks is typically performed by human experts rather than automatically, so they are omitted here.
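To run the whole group, the `multimedqa` task name can be passed to the harness. A sketch using the Python API (the model and `model_args` values are placeholders, and the exact `simple_evaluate` arguments may differ by harness version):

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1.4b",  # placeholder model
    tasks=["multimedqa"],  # expands to pubmedqa, medmcqa, medqa_4options, 6 MMLU subsets
)
print(results["results"])
```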
group: multimedqa
task:
  - pubmedqa
  - medmcqa
  - medqa_4options
  - task: mmlu_anatomy
    task_alias: "anatomy (mmlu)"
    group_alias: null
  - task: mmlu_clinical_knowledge
    task_alias: "clinical_knowledge (mmlu)"
    group_alias: null
  - task: mmlu_college_medicine
    task_alias: "college_medicine (mmlu)"
    group_alias: null
  - task: mmlu_medical_genetics
    task_alias: "medical_genetics (mmlu)"
    group_alias: null
  - task: mmlu_professional_medicine
    task_alias: "professional_medicine (mmlu)"
    group_alias: null
  - task: mmlu_college_biology
    task_alias: "college_biology (mmlu)"
    group_alias: null
@@ -15,4 +15,4 @@ metric_list:
     higher_is_better: true
     ignore_punctuation: true
 metadata:
-  version: 0.0
+  version: 1.0
@@ -18,4 +18,4 @@ metric_list:
     aggregation: mean
     higher_is_better: True
 metadata:
-  version: 0.0
+  version: 1.0
@@ -18,4 +18,4 @@ metric_list:
     aggregation: mean
     higher_is_better: True
 metadata:
-  version: 0.0
+  version: 1.0
@@ -18,4 +18,4 @@ metric_list:
     aggregation: mean
     higher_is_better: True
 metadata:
-  version: 0.0
+  version: 1.0
@@ -18,4 +18,4 @@ metric_list:
     aggregation: mean
     higher_is_better: True
 metadata:
-  version: 0.0
+  version: 1.0
@@ -18,4 +18,4 @@ metric_list:
     aggregation: mean
     higher_is_better: True
 metadata:
-  version: 0.0
+  version: 1.0
@@ -18,4 +18,4 @@ metric_list:
     aggregation: mean
     higher_is_better: True
 metadata:
-  version: 2.0
+  version: 3.0
@@ -19,4 +19,4 @@ metric_list:
     aggregation: mean
     higher_is_better: true
 metadata:
-  version: 2.0
+  version: 3.0
@@ -21,4 +21,4 @@ metric_list:
     aggregation: mean
     higher_is_better: true
 metadata:
-  version: 2.0
+  version: 3.0
@@ -18,4 +18,4 @@ filter_list:
     - function: remove_whitespace
     - function: take_first
 metadata:
-  version: 1.0
+  version: 2.0
@@ -31,4 +31,4 @@ filter_list:
       - function: "majority_vote"
       - function: "take_first"
 metadata:
-  version: 0.0
+  version: 2.0
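The `majority_vote` filter implements self-consistency: assuming the config samples several generations per question (its `repeats` setting, not shown in this hunk), the most common extracted answer is kept before `take_first` reduces to a single prediction. A standalone sketch of that reduction:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    # Self-consistency: keep the most frequent extracted answer across samples.
    return Counter(answers).most_common(1)[0][0]

# e.g. 5 sampled chains of thought, already regex-filtered to numeric answers:
print(majority_vote(["72", "72", "68", "72", "70"]))  # -> "72"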
@@ -5,16 +5,16 @@ dataset_path: gsm8k
 dataset_name: main
 output_type: generate_until
 test_split: test
-doc_to_text: "Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\nA: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6.\n\n\
-Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\nA: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5.\n\n\
-Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\nA: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39.\n\n\
-Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\nA: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8.\n\n\
-Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\nA: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9.\n\n\
-Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\nA: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29.\n\n\
-Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\nA: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33.\n\n\
-Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\nA: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8.\n\n\
-Q: {{question}}\n\nA:"
-doc_to_target: " {{answer.split('### ')[-1].rstrip()}}"
+doc_to_text: "Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\nA: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6.\n\n\
+Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\nA: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5.\n\n\
+Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\nA: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39.\n\n\
+Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\nA: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8.\n\n\
+Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\nA: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9.\n\n\
+Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\nA: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29.\n\n\
+Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\nA: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33.\n\n\
+Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\nA: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8.\n\n\
+Q: {{question}}\nA:"
+doc_to_target: "{{answer.split('####')[-1].strip()}}"
 metric_list:
   - metric: exact_match
     aggregation: mean
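The `doc_to_target` change matters because GSM8K gold answers end with a `#### <number>` marker: splitting on `'#### '` variants both reach the final number, but the old template baked in a leading space and only right-stripped, while the new one strips both sides. A quick standalone check (the answer string is a representative GSM8K example):

```python
# GSM8K gold answers end with "#### <answer>"; compare the two templates.
answer = (
    "Natalia sold 48/2 = 24 clips in May.\n"
    "Altogether she sold 48 + 24 = 72 clips.\n#### 72"
)

old_target = " " + answer.split("### ")[-1].rstrip()  # " 72" -- leading space kept
new_target = answer.split("####")[-1].strip()         # "72"

print(repr(old_target), repr(new_target))
```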
@@ -31,7 +31,6 @@ generation_kwargs:
     - "Q:"
     - "\n\n"
   do_sample: false
-  temperature: 0.0
 repeats: 1
 num_fewshot: 0
 filter_list:
@@ -41,4 +40,4 @@ filter_list:
         regex_pattern: "The answer is (\\-?[0-9\\.\\,]+)."
       - function: "take_first"
 metadata:
-  version: 0.0
+  version: 2.0
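Two side notes on these hunks. First, dropping `temperature: 0.0` is a no-op change: with `do_sample: false` decoding is greedy and temperature is never applied. Second, the trailing period in this strict pattern is not cosmetic: because `.` and `,` sit inside the capture class, the looser pattern can swallow the sentence's final period into the match. A standalone illustration (the YAML's trailing `.` is an unescaped any-character match; it is written as `\.` here for clarity, which behaves the same at end of string):

```python
import re

text = "So she has 23 - 15 = 8 dollars left. The answer is 8."

# Without the trailing ".", the greedy character class swallows the final period:
loose = re.search(r"The answer is (\-?[0-9\.\,]+)", text)
# With it, backtracking leaves a clean number in the capture group:
strict = re.search(r"The answer is (\-?[0-9\.\,]+)\.", text)

print(loose.group(1))   # -> "8."
print(strict.group(1))  # -> "8"
```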
@@ -24,7 +24,6 @@ generation_kwargs:
     - "\n\n"
     - "Question:"
   do_sample: false
-  temperature: 0.0
 repeats: 1
 num_fewshot: 5
 filter_list:
@@ -34,4 +33,4 @@ filter_list:
       regex_pattern: "#### (\\-?[0-9\\.\\,]+)"
     - function: "take_first"
 metadata:
-  version: 1.0
+  version: 2.0