OpenDAS / Pai-Megatron-Patch / Commits

Commit 5add46aa
authored Jan 09, 2025 by hepj

Add Megatron project

parent deb8370c
Pipeline #2199 failed with stages in 0 seconds
Showing 20 changed files with 91 additions and 0 deletions (+91 -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/swahili_english_proverbs.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/swedish_to_german_proverbs.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/symbol_interpretation.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/temporal_sequences.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/tense.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/timedial.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/topical_chat.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/tracking_shuffled_objects.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/understanding_fables.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/undo_permutation.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/unit_conversion.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/unit_interpretation.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/unnatural_in_context_learning.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/vitaminc_fact_verification.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/what_is_the_tao.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/which_wiki_edit.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/winowhy.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/word_sorting.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/word_unscrambling.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice_template_yaml  +15 -0
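Every per-task file in this list carries the header "# Generated by utils.py" and differs only in the subtask name spliced into `dataset_name` and `task`. A minimal sketch of such a generator (the `subtasks` list and the rendering function here are illustrative, not the actual `utils.py`):

```python
# Minimal sketch of a utils.py-style generator for the per-task YAML files
# listed above. The subtask list below is a hypothetical sample; the real
# script would iterate over all BigBench subtask names.
subtasks = ["winowhy", "word_sorting", "word_unscrambling"]

def render_task_yaml(subtask: str) -> str:
    """Render one multiple-choice task config in the format shown in the diff."""
    return (
        "# Generated by utils.py\n"
        f"dataset_name: {subtask}_zero_shot\n"
        "include: ../multiple_choice_template_yaml\n"
        f"task: bigbench_{subtask}_multiple_choice\n"
    )

for name in subtasks:
    print(render_task_yaml(name))
```

Generating the files mechanically keeps the 19 configs in lockstep with the shared template: only the dataset and task identifiers vary, everything else is inherited via `include`.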
Too many changes to show. To preserve performance, only 1000 of 1000+ files are displayed.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/swahili_english_proverbs.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: swahili_english_proverbs_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_swahili_english_proverbs_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/swedish_to_german_proverbs.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: swedish_to_german_proverbs_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_swedish_to_german_proverbs_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/symbol_interpretation.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: symbol_interpretation_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_symbol_interpretation_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/temporal_sequences.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: temporal_sequences_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_temporal_sequences_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/tense.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: tense_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_tense_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/timedial.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: timedial_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_timedial_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/topical_chat.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: topical_chat_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_topical_chat_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/tracking_shuffled_objects.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: tracking_shuffled_objects_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_tracking_shuffled_objects_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/understanding_fables.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: understanding_fables_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_understanding_fables_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/undo_permutation.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: undo_permutation_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_undo_permutation_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/unit_conversion.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: unit_conversion_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_unit_conversion_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/unit_interpretation.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: unit_interpretation_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_unit_interpretation_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/unnatural_in_context_learning.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: unnatural_in_context_learning_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_unnatural_in_context_learning_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/vitaminc_fact_verification.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: vitaminc_fact_verification_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_vitaminc_fact_verification_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/what_is_the_tao.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: what_is_the_tao_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_what_is_the_tao_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/which_wiki_edit.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: which_wiki_edit_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_which_wiki_edit_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/winowhy.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: winowhy_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_winowhy_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/word_sorting.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: word_sorting_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_word_sorting_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/word_unscrambling.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: word_unscrambling_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_word_unscrambling_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice_template_yaml (new file, mode 100644)
group: bigbench_multiple_choice
dataset_path: hails/bigbench
dataset_kwargs:
# num_shots: 0 # TODO: num of shots for `bigbench` HF dataset should be controlled through this, not through the typical methods
# subtask_name: null
output_type: multiple_choice
test_split: default
doc_to_text: inputs
doc_to_target: "{{multiple_choice_targets.index(targets[0])}}"
doc_to_choice: "{{multiple_choice_targets}}"
metric_list:
- metric: acc
# TODO: brier score and other metrics
metadata:
version: 0.0
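The template's `doc_to_target` expression, `{{multiple_choice_targets.index(targets[0])}}`, maps the gold answer string to its position in the choice list, while `doc_to_choice` passes the choices through unchanged. A plain-Python illustration of how these two templates resolve, using a hypothetical sample document (the field values are made up; only the field names come from the template):

```python
# Hypothetical BigBench-style document; the field names (inputs,
# multiple_choice_targets, targets) match the template above, the
# values are illustrative.
doc = {
    "inputs": "Q: Which word comes first alphabetically?",
    "multiple_choice_targets": ["banana", "apple", "cherry"],
    "targets": ["apple"],  # gold answer, stored as a string
}

def doc_to_target(doc: dict) -> int:
    # Plain-Python equivalent of the Jinja expression
    # {{multiple_choice_targets.index(targets[0])}}: index of the
    # gold answer within the choice list.
    return doc["multiple_choice_targets"].index(doc["targets"][0])

def doc_to_choice(doc: dict) -> list:
    # Equivalent of {{multiple_choice_targets}}.
    return doc["multiple_choice_targets"]

print(doc_to_target(doc))  # prints 1 ("apple" is at index 1)
print(doc_to_choice(doc))
```

Because the target is an index into the choice list rather than a string, the `acc` metric can score a model by comparing the argmax over choice log-likelihoods against this index.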