Unverified commit 517aadc4 authored by Lintang Sutawika, committed by GitHub

Group agg rework (#1741)



* add group_config arg

* add a group config that allows disabling table for group score and group aggregate in general

* fixed size configuration

* adjust config

* add group config

* adjust mmlu to use group_config

* fixed args input in aggregate_subtask_metrics

* fixed issues related to printing alias of group and updated yaml

* update all mmlu variants to include group_config

* edit format

* modify mmlu tasks

* adjust group to also be a configurable group

* add configurable group

* simplify get_task_list

* adjust group scoring with using ConfigurableGroup

* adjust args

* update mmlu

* update mmlu

* update to work with new group and task configuration

* readd group_agg

* readd files

* move prepare_print_tasks to evaluator_utils

* sort set to False by default, fix predict_only arg

* add version for groups

* reversed task list

* update additional condition when loading a group in a group yaml

* update truthfulqa

* add description regarding tags replacing group

* replace group with tag

* fixed conditional statement

* remove warning

* update loading of task group and newly added tags

* reformat with pre-commit

* fixed info log

* update

* fix bug

* fix bug

* use task id to differentiate tasks

* convert all groups to configurable groups

* use task_id

* reformat

* add task_id for python tasks as well

* add task_id for python tasks as well

* add task_id for python tasks as well

* revert truthfulqa

* revert mmlu tasks

* new mmlu config

* new group config parameter `tag_to_task`

* Update truthfulqa_mc2.yaml

* reformat

* add _process_group_config

* adjust task_id

* add get_subtask_list function to get proper subtask list

* group config to_dict update

* remove tag check

* update mmlu

* fix config passing issues

* add test yaml

* format fix

* add documentation

* corner case for single tag being called

* fix indentation

* formatting

* update all mmlu variants

* Update docs/task_guide.md
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>

* remove group_alias

* Update docs/task_guide.md
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>

* remove version for metadata

* Update docs/task_guide.md
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>

* update mmlu/

* removed " " in make_table

* change how aggregate_metric is loaded

* change how aggregate_metric is loaded

* update aggregate_metric arg

* update format

* update format

* some docs fixes

* add groups for agieval, aexams, aclue

* add more explicit aggregation groups

* add more groupings / tags distinctions

* add more groupings

* more groupings

* add many explicit group configs

* add many explicit group configs

* add more explicit group configs

* add more explicit group configs

* add more error msgs, agg_metric -> agg_metric_list

* some docs updates

* update task_id to be updateable and uses group:task format

* make KMMLU a tag for now

* update docs

* don't duplicate task names

* fix merge conflicts?

* giving this a try

* clean up diff

* switch mmlu variants over to using

* don't use to-be-deprecated group: config field in overview notebook

* Python tasks which subclass ConfigurableTask now run

* update mmlu

* pre-commit format

* fixed sorting for multi-level printing

* move group api to separate file

* fix bbh aggregation filter usage

* track api/group.py

* adjust group and tags loading

* make explicit group configs for leaderboard and other newer tasks

* fix arabicmmlu

* update

* change arabicmmlu template name???

* update group alias

* fix printing bugs

* check table printing is correct ; update tests

* use mmlu_stem to have a group included in print tests

---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
parent 5a7ed3ee
group: kormedmcqa
task:
  - kormedmcqa_doctor
  - kormedmcqa_nurse
  - kormedmcqa_pharm
aggregate_metric_list:
  - metric: exact_match
    aggregation: mean
    weight_by_size: true
metadata:
  version: 0.0
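The `aggregate_metric_list` entry above sets `weight_by_size: true`, meaning subtask scores are combined as a mean weighted by each subtask's document count rather than a plain average. A minimal sketch of that aggregation (the helper name and input shapes are illustrative, not the harness's actual `aggregate_subtask_metrics`):

```python
def weighted_mean(scores, sizes):
    """Combine per-subtask metric values into one group score,
    weighting each subtask by its number of documents
    (mirrors `weight_by_size: true`)."""
    total = sum(sizes)
    return sum(s * n for s, n in zip(scores, sizes)) / total

# Three subtasks of unequal size: the 100-doc subtask dominates.
agg = weighted_mean([0.8, 0.6, 0.9], [100, 50, 50])  # -> 0.775
```

With `weight_by_size: false` the same scores would average to 0.766…, so the flag matters whenever subtask sizes are skewed.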
group: kormedmcqa
task: kormedmcqa_doctor
dataset_path: sean0042/KorMedMCQA
dataset_name: doctor
......
group: kormedmcqa
task: kormedmcqa_nurse
dataset_path: sean0042/KorMedMCQA
dataset_name: nurse
......
group: kormedmcqa
task: kormedmcqa_pharm
dataset_path: sean0042/KorMedMCQA
dataset_name: pharm
......
-group:
+tag:
 - lambada
 task: lambada_openai
 dataset_path: EleutherAI/lambada_openai
......
-group:
+tag:
 - lambada
 task: lambada_standard
 dataset_path: lambada
......
-group:
+tag:
 - lambada_cloze
 task: lambada_openai_cloze_yaml
 dataset_path: EleutherAI/lambada_openai
......
-group:
+tag:
 - lambada_cloze
 task: lambada_standard_cloze_yaml
 dataset_path: lambada
......
-group:
+tag:
 - lambada_multilingual
 task: lambada_openai_mt_en
 dataset_path: EleutherAI/lambada_openai
......
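The lambada entries above show the PR's central distinction: the old bare `group:` field becomes `tag:`. Per the commit notes, a tag only collects tasks under a shorthand name for batch invocation, while a group additionally aggregates its subtasks' scores into one row. A toy sketch of how a loader might tell the two apart (the dict shapes and function name are assumptions, not the harness's internals):

```python
def classify(config):
    """Distinguish an aggregated-group config from a plain tag config,
    given a parsed YAML task config as a dict."""
    if config.get("group"):
        # A named group: subtask scores roll up into an aggregate score.
        return ("group", config["group"])
    # Tags merely collect tasks so they can be run together;
    # no aggregate score is computed for a tag.
    return ("tags", config.get("tag", []))

tagged = classify({"tag": ["lambada"], "task": "lambada_openai"})
grouped = classify({"group": "kormedmcqa", "task": "kormedmcqa_doctor"})
```

Here `tagged` comes back as `("tags", ["lambada"])` and `grouped` as `("group", "kormedmcqa")`, matching the two config styles shown above.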
group: leaderboard_bbh
dataset_path: SaylorTwift/bbh
output_type: multiple_choice
test_split: test
......
group: leaderboard_bbh
task:
- leaderboard_bbh_boolean_expressions
- leaderboard_bbh_causal_judgement
- leaderboard_bbh_date_understanding
- leaderboard_bbh_disambiguation_qa
- leaderboard_bbh_formal_fallacies
- leaderboard_bbh_geometric_shapes
- leaderboard_bbh_hyperbaton
- leaderboard_bbh_logical_deduction_five_objects
- leaderboard_bbh_logical_deduction_seven_objects
- leaderboard_bbh_logical_deduction_three_objects
- leaderboard_bbh_movie_recommendation
- leaderboard_bbh_navigate
- leaderboard_bbh_object_counting
- leaderboard_bbh_penguins_in_a_table
- leaderboard_bbh_reasoning_about_colored_objects
- leaderboard_bbh_ruin_names
- leaderboard_bbh_salient_translation_error_detection
- leaderboard_bbh_snarks
- leaderboard_bbh_sports_understanding
- leaderboard_bbh_temporal_sequences
- leaderboard_bbh_tracking_shuffled_objects_five_objects
- leaderboard_bbh_tracking_shuffled_objects_seven_objects
- leaderboard_bbh_tracking_shuffled_objects_three_objects
- leaderboard_bbh_web_of_lies
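A group config like `leaderboard_bbh` above is essentially a name plus a `task:` list, and a group's list may itself contain other groups. The `get_subtask_list` function mentioned in the commit log flattens such an entry into its leaf tasks; a rough sketch under an assumed registry shape (the real harness resolves groups through its task index, not a plain dict):

```python
def get_subtask_list(entry, registry):
    """Recursively expand a group name into its leaf task names.
    `registry` maps group name -> `task:` list; anything not in the
    registry is treated as a leaf task."""
    if entry not in registry:
        return [entry]  # a concrete task, not a group
    tasks = []
    for sub in registry[entry]:
        tasks.extend(get_subtask_list(sub, registry))
    return tasks

# A toy registry with one level of group nesting.
registry = {
    "leaderboard": ["leaderboard_bbh"],
    "leaderboard_bbh": ["leaderboard_bbh_navigate", "leaderboard_bbh_snarks"],
}
```

Calling `get_subtask_list("leaderboard", registry)` walks through the nested `leaderboard_bbh` group and returns only its two leaf tasks, which is what per-group aggregation and table printing need.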
group: leaderboard_gpqa
task:
- leaderboard_gpqa_diamond
- leaderboard_gpqa_extended
- leaderboard_gpqa_main
dataset_path: Idavidrein/gpqa
group: leaderboard_gpqa
output_type: multiple_choice
process_docs: !function utils.process_docs
training_split: train
......
group: leaderboard_instruction_following
task:
- leaderboard_ifeval
task: leaderboard_ifeval
group: leaderboard_instruction_following
dataset_path: wis-k/instruction-following-eval
dataset_name: null
output_type: generate_until
......
group: leaderboard_math_hard
task:
- leaderboard_math_algebra_hard
- leaderboard_math_counting_and_prob_hard
- leaderboard_math_geometry_hard
- leaderboard_math_intermediate_algebra_hard
- leaderboard_math_num_theory_hard
- leaderboard_math_prealgebra_hard
- leaderboard_math_precalculus_hard
group:
- leaderboard_math_hard
dataset_path: lighteval/MATH-Hard
process_docs: !function utils.process_docs
output_type: generate_until
......
group: leaderboard_musr
task:
- leaderboard_musr_murder_mysteries
- leaderboard_musr_object_placements
- leaderboard_musr_team_allocation
group:
- leaderboard_musr
dataset_path: TAUR-Lab/MuSR
output_type: multiple_choice
doc_to_text: !function utils.doc_to_text
......
-group:
+tag:
 - math_word_problems
 task: mathqa
 dataset_path: math_qa
......
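Several templates above wire in a Python hook via `process_docs: !function utils.process_docs`, which preprocesses the loaded dataset before evaluation. A hedged stand-in for such a hook (field names are hypothetical, and a list of dicts stands in for the `datasets.Dataset` the real hook receives, so the sketch has no external dependencies):

```python
def process_docs(dataset):
    """Illustrative doc-preprocessing hook: normalize each document so
    downstream prompt templates see the fields they expect."""
    def _process(doc):
        doc = dict(doc)
        # Hypothetical field names; real tasks map their dataset's
        # columns onto whatever the task's templates reference.
        doc["query"] = doc["question"].strip()
        return doc
    return [_process(d) for d in dataset]

docs = process_docs([{"question": "  2 + 2 = ?  "}])
```

In the repo such a function lives in the task directory's `utils.py` and is resolved at YAML load time by the `!function` constructor.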