Commit 337492ad authored by lintangsutawika

Merge branch 'big-refactor' of https://github.com/EleutherAI/lm-evaluation-harness into flan-benchmark
parents 3d2ee4d4 4824a832
# Generated by utils.py
dataset_name: word_sorting_zero_shot
include: ../greedy_until_template_yaml
task: bigbench_word_sorting_greedy_until
# Generated by utils.py
dataset_name: word_unscrambling_zero_shot
include: ../greedy_until_template_yaml
task: bigbench_word_unscrambling_greedy_until
group: bigbench
dataset_path: bigbench # will switch to `hails/bigbench` when all tasks are pushed
output_type: greedy_until
dataset_kwargs:
  # num_shots: 0 # TODO: num of shots for `bigbench` HF dataset should be controlled through this, not through the typical methods
  # subtask_name: null
test_split: default
doc_to_text: inputs
doc_to_target: "{{targets[0]}}"
generation_kwargs:
  max_length: 128
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_punctuation: true
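Each generated file above is only four lines: it sets `dataset_name`, pulls in the shared template via `include`, and names the resulting task. Below is a minimal sketch of how a generator along the lines of `utils.py` could emit such files; the subtask list, directory layout, and function names are assumptions for illustration, not the repository's actual script.

```python
# Hypothetical sketch of a per-subtask YAML generator.
# Only the emitted fields (dataset_name, include, task) mirror the files in this commit;
# SUBTASKS and the output paths are illustrative assumptions.
import os

SUBTASKS = [
    ("word_sorting_zero_shot", "greedy_until"),
    ("abstract_narrative_understanding_zero_shot", "multiple_choice"),
]

def write_task_yaml(subtask: str, output_type: str, out_dir: str) -> None:
    """Emit one per-subtask YAML that includes the shared template."""
    base = subtask.removesuffix("_zero_shot")
    content = "\n".join([
        "# Generated by utils.py",
        f"dataset_name: {subtask}",
        f"include: ../{output_type}_template_yaml",
        f"task: bigbench_{base}_{output_type}",
        "",
    ])
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, f"{base}.yaml"), "w") as f:
        f.write(content)

if __name__ == "__main__":
    for subtask, output_type in SUBTASKS:
        write_task_yaml(subtask, output_type, os.path.join("bigbench", output_type))
```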
# Generated by utils.py
dataset_name: abstract_narrative_understanding_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_abstract_narrative_understanding_multiple_choice
# Generated by utils.py
dataset_name: anachronisms_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_anachronisms_multiple_choice
# Generated by utils.py
dataset_name: analogical_similarity_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_analogical_similarity_multiple_choice
# Generated by utils.py
dataset_name: analytic_entailment_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_analytic_entailment_multiple_choice
# Generated by utils.py
dataset_name: arithmetic_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_arithmetic_multiple_choice
# Generated by utils.py
dataset_name: ascii_word_recognition_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_ascii_word_recognition_multiple_choice
# Generated by utils.py
dataset_name: authorship_verification_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_authorship_verification_multiple_choice
# Generated by utils.py
dataset_name: auto_categorization_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_auto_categorization_multiple_choice
# Generated by utils.py
dataset_name: auto_debugging_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_auto_debugging_multiple_choice
# Generated by utils.py
dataset_name: bbq_lite_json_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_bbq_lite_json_multiple_choice
# Generated by utils.py
dataset_name: bridging_anaphora_resolution_barqa_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_bridging_anaphora_resolution_barqa_multiple_choice
# Generated by utils.py
dataset_name: causal_judgment_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_causal_judgment_multiple_choice
# Generated by utils.py
dataset_name: cause_and_effect_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_cause_and_effect_multiple_choice
# Generated by utils.py
dataset_name: checkmate_in_one_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_checkmate_in_one_multiple_choice
# Generated by utils.py
dataset_name: chess_state_tracking_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_chess_state_tracking_multiple_choice
# Generated by utils.py
dataset_name: chinese_remainder_theorem_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_chinese_remainder_theorem_multiple_choice
# Generated by utils.py
dataset_name: cifar10_classification_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_cifar10_classification_multiple_choice