Commit 5d3bf2e7 authored by lintangsutawika


Merge branch 'big-refactor' of https://github.com/EleutherAI/lm-evaluation-harness into openai_completions
parents f66730c4 bf26d979
"dataset_name": "word_sorting"
"description": "Sort a list of words.\n\n"
"doc_to_text": " Sort the following words alphabetically: List: oven costume counterpart\nA: Let's think step by step.\nThe first letter: \"oven\": \"o\" (15). \"costume\": \"c\" (3). \"counterpart\": \"c\" (3). We now have: (3) [\"costume\" ? \"counterpart\"] < (15) \"oven\". Now let's sort this subpart [\"costume\" ? \"counterpart\"] by looking at their second letters.\nThe second letter: \"costume\": \"o\" (15). \"counterpart\": \"o\" (15). We now have: (15) [\"costume\" ? \"counterpart\"]. Now let's sort this subpart [\"costume\" ? \"counterpart\"] by looking at their third letters.\nThe third letter: \"costume\": \"s\" (19). \"counterpart\": \"u\" (21). We now have: (19) \"costume\" < (21) \"counterpart\". Hence, we have [\"costume\" < \"counterpart\"] < \"oven\". So the answer is costume counterpart oven. Sort the following words alphabetically: List: hypochlorite ponderosa phone credulity\nA: Let's think step by step.\nThe first letter: \"hypochlorite\": \"h\" (8). \"ponderosa\": \"p\" (16). \"phone\": \"p\" (16). \"credulity\": \"c\" (3). We now have: (3) \"credulity\" < (8) \"hypochlorite\" < (16) [\"ponderosa\" ? \"phone\"]. Now let's sort this subpart [\"ponderosa\" ? \"phone\"] by looking at their second letters.\nThe second letter: \"ponderosa\": \"o\" (15). \"phone\": \"h\" (8). We now have: (8) \"phone\" < (15) \"ponderosa\". Hence, we have \"credulity\" < \"hypochlorite\" < [\"phone\" <\"ponderosa\"]. So the answer is credulity hypochlorite phone ponderosa. Sort the following words alphabetically: List: newt arson parthia seismography mugho aspect census\nA: Let's think step by step.\nThe first letter: \"newt\": \"n\" (14). \"arson\": \"a\" (1). \"parthia\": \"p\" (16). \"seismography\": \"s\" (19). \"mugho\": \"m\" (13). \"aspect\": \"a\" (1). \"census\": \"c\" (3). We now have: (1) [\"arson\" ? \"aspect\"] < (3) \"census\" < (13) \"mugho\" < (14) \"newt\" < (16) \"parthia\" < (19) \"seismography\". Now let's sort this subpart [\"arson\" ? \"aspect\"] by looking at their second letters.\nThe second letter: \"arson\": \"r\" (18). \"aspect\": \"s\" (19). We now have: (18) \"arson\" < (19) \"aspect\". Hence, we have [\"arson\" < \"aspect\"] < \"census\" < \"mugho\" < \"newt\" < \"parthia\" < \"seismography\". So the answer is arson aspect census mugho newt parthia seismography.Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_fewshot_template_yaml"
"task": "bbh_flan_cot_fewshot_word_sorting"
"include": "_cot_fewshot_template_yaml"
"task": "bbh_cot_fewshot_word_sorting"
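The few-shot chain of thought in the prompt above sorts words letter by letter using alphabet positions ("o" is 15, "c" is 3, and so on). A minimal Python sketch of the same procedure, for illustration only (not part of the harness):

```python
# Illustrative sketch of the letter-by-letter alphabetical sort that the
# few-shot word_sorting prompt walks through by hand; not harness code.

def letter_rank(ch: str) -> int:
    """Alphabet position as used in the prompt: 'a' -> 1, ..., 'z' -> 26."""
    return ord(ch.lower()) - ord("a") + 1

def sort_words(words: list[str]) -> list[str]:
    # Python's lexicographic sort on lowercase words realizes the same
    # first-letter, second-letter, ... comparison the prompt performs.
    return sorted(words, key=str.lower)

words = ["oven", "costume", "counterpart"]
print([letter_rank(w[0]) for w in words])  # first-letter ranks: [15, 3, 3]
print(" ".join(sort_words(words)))         # costume counterpart oven
```

The third-letter comparison in the prompt ("s" (19) < "u" (21)) is exactly what breaks the "costume" vs. "counterpart" tie here.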
group: bbh_flan_cot_zeroshot
group: bbh_cot_zeroshot
dataset_path: lukaemon/bbh
output_type: generate_until
test_split: test
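Each per-task file above contributes only a few overriding keys; the shared defaults come from the file named by `include`. A rough Python sketch of that merge, assuming simple load-then-override semantics (the harness's actual resolution logic may differ):

```python
# Sketch of include-based config merging, assuming the task file's keys
# simply override the included template's keys. Illustrative only; the
# harness's real YAML resolution may differ.

template = {  # modeled on the shared template lines shown in this diff
    "dataset_path": "lukaemon/bbh",
    "output_type": "generate_until",
    "test_split": "test",
    "generation_kwargs": {
        "until": ["</s>", "Q", "\n\n"],
        "do_sample": False,
        "temperature": 0.0,
    },
}

task_fragment = {  # one per-subtask file from the diff
    "dataset_name": "boolean_expressions",
    "description": "Evaluate the result of a random Boolean expression.\n\n",
    "doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n",
    "task": "bbh_cot_zeroshot_boolean_expressions",
}

def resolve(template: dict, fragment: dict) -> dict:
    merged = dict(template)  # start from the included defaults
    merged.update(fragment)  # task-level keys override
    return merged

config = resolve(template, task_fragment)
print(config["task"], config["dataset_path"])
```

Under this reading, every subtask shares one dataset path and generation setup while varying only the name, description, and prompt.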
......
@@ -12,6 +12,8 @@ metric_list:
generation_kwargs:
until:
- "</s>"
- "Q"
- "\n\n"
do_sample: false
temperature: 0.0
filter_list:
......
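The `until` list added in the hunk above declares stop sequences for generation: decoding halts, or the output is truncated, at the first occurrence of any of them. A hedged sketch of post-hoc truncation under that reading:

```python
# Illustrative post-hoc truncation at the earliest stop sequence; real
# backends may stop during decoding rather than trimming afterwards.

def truncate_at_stops(text: str, stops: list[str]) -> str:
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)  # keep the earliest stop position
    return text[:cut]

generated = "The answer is True.\n\nQ: next question..."
print(truncate_at_stops(generated, ["</s>", "Q", "\n\n"]))  # The answer is True.
```

Stopping at `"Q"` and `"\n\n"` prevents the model from continuing into a self-invented next question, which matters for these `Q:`/`A:`-formatted prompts.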
"dataset_name": "boolean_expressions"
"description": "Evaluate the result of a random Boolean expression.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_boolean_expressions"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_boolean_expressions"
"dataset_name": "causal_judgement"
"description": "Answer questions about causal attribution.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_causal_judgement"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_causal_judgement"
"dataset_name": "date_understanding"
"description": "Infer the date from context.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_date_understanding"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_date_understanding"
"dataset_name": "disambiguation_qa"
"description": "Clarify the meaning of sentences with ambiguous pronouns.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_disambiguation_qa"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_disambiguation_qa"
"dataset_name": "dyck_languages"
"description": "Correctly close a Dyck-n word.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_dyck_languages"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_dyck_languages"
"dataset_name": "formal_fallacies"
"description": "Distinguish deductively valid arguments from formal fallacies.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_formal_fallacies"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_formal_fallacies"
"dataset_name": "geometric_shapes"
"description": "Name geometric shapes from their SVG paths.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_geometric_shapes"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_geometric_shapes"
"dataset_name": "hyperbaton"
"description": "Order adjectives correctly in English sentences.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_hyperbaton"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_hyperbaton"
"dataset_name": "logical_deduction_five_objects"
"description": "A logical deduction task which requires deducing the order of a sequence of objects.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_logical_deduction_five_objects"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_logical_deduction_five_objects"
"dataset_name": "logical_deduction_seven_objects"
"description": "A logical deduction task which requires deducing the order of a sequence of objects.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_logical_deduction_seven_objects"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_logical_deduction_seven_objects"
"dataset_name": "logical_deduction_three_objects"
"description": "A logical deduction task which requires deducing the order of a sequence of objects.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_logical_deduction_three_objects"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_logical_deduction_three_objects"
"dataset_name": "movie_recommendation"
"description": "Recommend movies similar to the given list of movies.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_movie_recommendation"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_movie_recommendation"
"dataset_name": "multistep_arithmetic_two"
"description": "Solve multi-step arithmetic problems.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_multistep_arithmetic_two"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_multistep_arithmetic_two"
"dataset_name": "navigate"
"description": "Given a series of navigation instructions, determine whether one would end up back at the starting point.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_navigate"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_navigate"
"dataset_name": "object_counting"
"description": "Questions that involve enumerating objects and asking the model to count them.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_object_counting"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_object_counting"
"dataset_name": "penguins_in_a_table"
"description": "Answer questions about a table of penguins and their attributes.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_penguins_in_a_table"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_penguins_in_a_table"
"dataset_name": "reasoning_about_colored_objects"
"description": "Answer extremely simple questions about the colors of objects on a surface.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_reasoning_about_colored_objects"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_reasoning_about_colored_objects"
"dataset_name": "ruin_names"
"description": "Select the humorous edit that 'ruins' the input movie or musical artist name.\n\n"
"doc_to_text": "Q: {{input}}\nA: Let's think step by step.\n"
"include": "_flan_cot_zeroshot_template_yaml"
"task": "bbh_flan_cot_zeroshot_ruin_names"
"include": "_cot_zeroshot_template_yaml"
"task": "bbh_cot_zeroshot_ruin_names"
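The zeroshot files above differ only in `dataset_name`, `description`, and the derived `task` name, so they are natural candidates for programmatic generation. A hypothetical generator sketch (the harness ships its own config-generation scripts; the names below are illustrative):

```python
# Hypothetical generator mirroring the pattern of the near-identical
# zeroshot YAML files in this diff; not the harness's own script.

SUBTASKS = {
    "boolean_expressions": "Evaluate the result of a random Boolean expression.",
    "causal_judgement": "Answer questions about causal attribution.",
    "date_understanding": "Infer the date from context.",
}

def render(name: str, description: str, prefix: str = "bbh_cot_zeroshot") -> str:
    # Emit one per-subtask config in the quoted-key style seen above.
    return (
        f'"dataset_name": "{name}"\n'
        f'"description": "{description}\\n\\n"\n'
        '"doc_to_text": "Q: {{input}}\\nA: Let\'s think step by step.\\n"\n'
        '"include": "_cot_zeroshot_template_yaml"\n'
        f'"task": "{prefix}_{name}"\n'
    )

for name, desc in SUBTASKS.items():
    print(render(name, desc))
```

Generating the files keeps the 27 subtask configs consistent: a change to the prompt or template name is made once rather than edited by hand in every file.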