gaoqiong / lm-evaluation-harness · Commits

Commit a2af2101 (unverified)
Authored Jul 12, 2024 by Yen-Ting Lin; committed via GitHub on Jul 12, 2024
Merge branch 'EleutherAI:main' into main
Parents: 82cb25c1, d5f39bf8
Showing 20 changed files with 103 additions and 32 deletions (+103, -32).
lm_eval/tasks/bigbench/multiple_choice/temporal_sequences.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/tense.yaml (+0, -4)
lm_eval/tasks/bigbench/multiple_choice/timedial.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/topical_chat.yaml (+0, -4)
lm_eval/tasks/bigbench/multiple_choice/tracking_shuffled_objects.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/understanding_fables.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/undo_permutation.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/unit_conversion.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/unit_interpretation.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/unnatural_in_context_learning.yaml (+0, -4)
lm_eval/tasks/bigbench/multiple_choice/vitaminc_fact_verification.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/what_is_the_tao.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/which_wiki_edit.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/winowhy.yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice/word_sorting.yaml (+0, -4)
lm_eval/tasks/bigbench/multiple_choice/word_unscrambling.yaml (+0, -4)
lm_eval/tasks/bigbench/multiple_choice_template_a_yaml (+1, -1)
lm_eval/tasks/bigbench/multiple_choice_template_b_yaml (+15, -0)
lm_eval/tasks/bigbench/push_bigbench_dataset.py (+1, -0)
lm_eval/tasks/blimp/_blimp.yaml (+75, -0)
Too many changes to show: to preserve performance, only 1000 of 1000+ files are displayed.
lm_eval/tasks/bigbench/multiple_choice/temporal_sequences.yaml

  # Generated by utils.py
  dataset_name: temporal_sequences_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_temporal_sequences_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/tense.yaml (deleted, 100644 → 0)

  # Generated by utils.py
  dataset_name: tense_zero_shot
  include: ../multiple_choice_template_yaml
  task: bigbench_tense_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/timedial.yaml

  # Generated by utils.py
  dataset_name: timedial_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_timedial_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/topical_chat.yaml (deleted, 100644 → 0)

  # Generated by utils.py
  dataset_name: topical_chat_zero_shot
  include: ../multiple_choice_template_yaml
  task: bigbench_topical_chat_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/tracking_shuffled_objects.yaml

  # Generated by utils.py
  dataset_name: tracking_shuffled_objects_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_tracking_shuffled_objects_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/understanding_fables.yaml

  # Generated by utils.py
  dataset_name: understanding_fables_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_understanding_fables_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/undo_permutation.yaml

  # Generated by utils.py
  dataset_name: undo_permutation_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_undo_permutation_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/unit_conversion.yaml

  # Generated by utils.py
  dataset_name: unit_conversion_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_unit_conversion_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/unit_interpretation.yaml

  # Generated by utils.py
  dataset_name: unit_interpretation_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_unit_interpretation_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/unnatural_in_context_learning.yaml (deleted, 100644 → 0)

  # Generated by utils.py
  dataset_name: unnatural_in_context_learning_zero_shot
  include: ../multiple_choice_template_yaml
  task: bigbench_unnatural_in_context_learning_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/vitaminc_fact_verification.yaml

  # Generated by utils.py
  dataset_name: vitaminc_fact_verification_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_vitaminc_fact_verification_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/what_is_the_tao.yaml

  # Generated by utils.py
  dataset_name: what_is_the_tao_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_what_is_the_tao_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/which_wiki_edit.yaml

  # Generated by utils.py
  dataset_name: which_wiki_edit_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_which_wiki_edit_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/winowhy.yaml

  # Generated by utils.py
  dataset_name: winowhy_zero_shot
- include: ../multiple_choice_template_yaml
+ include: ../multiple_choice_template_a_yaml
  task: bigbench_winowhy_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/word_sorting.yaml (deleted, 100644 → 0)

  # Generated by utils.py
  dataset_name: word_sorting_zero_shot
  include: ../multiple_choice_template_yaml
  task: bigbench_word_sorting_multiple_choice
lm_eval/tasks/bigbench/multiple_choice/word_unscrambling.yaml (deleted, 100644 → 0)

  # Generated by utils.py
  dataset_name: word_unscrambling_zero_shot
  include: ../multiple_choice_template_yaml
  task: bigbench_word_unscrambling_multiple_choice
lm_eval/tasks/bigbench/multiple_choice_template_yaml → lm_eval/tasks/bigbench/multiple_choice_template_a_yaml (renamed)

@@ -12,4 +12,4 @@ metric_list:
    - metric: acc
    # TODO: brier score and other metrics
  metadata:
-   version: 0.0
+   version: 1.0
lm_eval/tasks/bigbench/multiple_choice_template_b_yaml (new file, 0 → 100644)

group: bigbench_multiple_choice
dataset_path: hails/bigbench
dataset_kwargs:
  # num_shots: 0 # TODO: num of shots for `bigbench` HF dataset should be controlled through this, not through the typical methods
  # subtask_name: null
output_type: multiple_choice
test_split: default
doc_to_text: inputs
doc_to_target: "{{multiple_choice_scores.index(1)}}"
doc_to_choice: "{{multiple_choice_targets}}"
metric_list:
  - metric: acc
  # TODO: brier score and other metrics
metadata:
  version: 1.0
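The new template B is presumably pulled into per-subtask configs via include, the same way the renamed multiple_choice_template_a_yaml is referenced by the modified files above. As a minimal sketch only (the subtask name below is hypothetical and merely follows the pattern of the generated configs in this diff), such a config would look like:

  # Hypothetical example following the pattern of the generated configs above;
  # "example_task" is illustrative only, not a file in this commit.
  dataset_name: example_task_zero_shot
  include: ../multiple_choice_template_b_yaml
  task: bigbench_example_task_multiple_choice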
lm_eval/tasks/bigbench/push_bigbench_dataset.py

@@ -8,6 +8,7 @@ Requires the installation of
  `pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"`
  and is included so that the bigbench dependency can be avoided.
  """
  import bigbench.api.util as bb_utils
  import datasets
  from tqdm import tqdm
  ...
lm_eval/tasks/blimp/_blimp.yaml
0 → 100644
View file @
a2af2101
group
:
blimp
task
:
-
"
blimp_adjunct_island"
-
"
blimp_anaphor_gender_agreement"
-
"
blimp_anaphor_number_agreement"
-
"
blimp_animate_subject_passive"
-
"
blimp_animate_subject_trans"
-
"
blimp_causative"
-
"
blimp_complex_NP_island"
-
"
blimp_coordinate_structure_constraint_complex_left_branch"
-
"
blimp_coordinate_structure_constraint_object_extraction"
-
"
blimp_determiner_noun_agreement_1"
-
"
blimp_determiner_noun_agreement_2"
-
"
blimp_determiner_noun_agreement_irregular_1"
-
"
blimp_determiner_noun_agreement_irregular_2"
-
"
blimp_determiner_noun_agreement_with_adj_2"
-
"
blimp_determiner_noun_agreement_with_adj_irregular_1"
-
"
blimp_determiner_noun_agreement_with_adj_irregular_2"
-
"
blimp_determiner_noun_agreement_with_adjective_1"
-
"
blimp_distractor_agreement_relational_noun"
-
"
blimp_distractor_agreement_relative_clause"
-
"
blimp_drop_argument"
-
"
blimp_ellipsis_n_bar_1"
-
"
blimp_ellipsis_n_bar_2"
-
"
blimp_existential_there_object_raising"
-
"
blimp_existential_there_quantifiers_1"
-
"
blimp_existential_there_quantifiers_2"
-
"
blimp_existential_there_subject_raising"
-
"
blimp_expletive_it_object_raising"
-
"
blimp_inchoative"
-
"
blimp_intransitive"
-
"
blimp_irregular_past_participle_adjectives"
-
"
blimp_irregular_past_participle_verbs"
-
"
blimp_irregular_plural_subject_verb_agreement_1"
-
"
blimp_irregular_plural_subject_verb_agreement_2"
-
"
blimp_left_branch_island_echo_question"
-
"
blimp_left_branch_island_simple_question"
-
"
blimp_matrix_question_npi_licensor_present"
-
"
blimp_npi_present_1"
-
"
blimp_npi_present_2"
-
"
blimp_only_npi_licensor_present"
-
"
blimp_only_npi_scope"
-
"
blimp_passive_1"
-
"
blimp_passive_2"
-
"
blimp_principle_A_c_command"
-
"
blimp_principle_A_case_1"
-
"
blimp_principle_A_case_2"
-
"
blimp_principle_A_domain_1"
-
"
blimp_principle_A_domain_2"
-
"
blimp_principle_A_domain_3"
-
"
blimp_principle_A_reconstruction"
-
"
blimp_regular_plural_subject_verb_agreement_1"
-
"
blimp_regular_plural_subject_verb_agreement_2"
-
"
blimp_sentential_negation_npi_licensor_present"
-
"
blimp_sentential_negation_npi_scope"
-
"
blimp_sentential_subject_island"
-
"
blimp_superlative_quantifiers_1"
-
"
blimp_superlative_quantifiers_2"
-
"
blimp_tough_vs_raising_1"
-
"
blimp_tough_vs_raising_2"
-
"
blimp_transitive"
-
"
blimp_wh_island"
-
"
blimp_wh_questions_object_gap"
-
"
blimp_wh_questions_subject_gap"
-
"
blimp_wh_questions_subject_gap_long_distance"
-
"
blimp_wh_vs_that_no_gap"
-
"
blimp_wh_vs_that_no_gap_long_distance"
-
"
blimp_wh_vs_that_with_gap"
-
"
blimp_wh_vs_that_with_gap_long_distance"
aggregate_metric_list
:
-
metric
:
acc
aggregation
:
mean
weight_by_size
:
False
metadata
:
version
:
2.0
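Each entry in the task list above refers to a separate BLiMP subtask config under lm_eval/tasks/blimp/ that is not shown on this page of the diff; the group file only aggregates their accuracy with an unweighted mean (weight_by_size: False). As a rough, illustrative sketch only — the field values below are assumptions based on BLiMP's sentence_good / sentence_bad minimal-pair format, not contents of this commit — one such subtask config might look like:

  # Illustrative sketch of a single BLiMP subtask config; values are assumed,
  # not taken from this diff.
  task: blimp_adjunct_island
  dataset_path: blimp              # HF dataset of grammatical/ungrammatical sentence pairs
  dataset_name: adjunct_island
  output_type: multiple_choice
  doc_to_text: ""                  # score the two sentences directly, no prompt
  doc_to_target: 0                 # the grammatical sentence is the correct choice
  doc_to_choice: "{{[sentence_good, sentence_bad]}}"
  metric_list:
    - metric: acc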