OpenDAS / Pai-Megatron-Patch
Commit 5add46aa, authored Jan 09, 2025 by hepj
Commit message: Add Megatron project (添加Megatron项目)
Parent: deb8370c
Pipeline #2199 failed with stages in 0 seconds
Showing 20 changed files with 80 additions and 0 deletions (+80, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/bbq_lite_json.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/bridging_anaphora_resolution_barqa.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/causal_judgement.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/causal_judgment.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/cause_and_effect.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/checkmate_in_one.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/chess_state_tracking.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/chinese_remainder_theorem.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/cifar10_classification.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/code_line_description.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/codenames.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/color.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/common_morpheme.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/conceptual_combinations.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/conlang_translation.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/contextual_parametric_knowledge_conflicts.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/crash_blossom.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/crass_ai.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/cryobiology_spanish.yaml (+4, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/cryptonite.yaml (+4, −0)
Too many changes to show. To preserve performance, only 1000 of 1000+ files are displayed.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/bbq_lite_json.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: bbq_lite_json_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_bbq_lite_json_multiple_choice
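Each of these files follows the same four-line pattern and, per its header comment, is emitted by a generator script (utils.py). A minimal sketch of what such a generator could look like — `render_task_yaml` is a hypothetical name, not a function from the repository; only the output format is taken from the committed files:

```python
# Hypothetical sketch of the generator implied by "# Generated by utils.py".
# The function name is assumed; the four-line output format mirrors the
# YAML files added in this commit.
def render_task_yaml(task: str) -> str:
    """Render a BIG-bench multiple-choice task config for one subtask."""
    return (
        "# Generated by utils.py\n"
        f"dataset_name: {task}_zero_shot\n"
        "include: ../multiple_choice_template_yaml\n"
        f"task: bigbench_{task}_multiple_choice\n"
    )


if __name__ == "__main__":
    # Reproduces the contents of codenames.yaml below.
    print(render_task_yaml("codenames"))
```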
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/bridging_anaphora_resolution_barqa.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: bridging_anaphora_resolution_barqa_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_bridging_anaphora_resolution_barqa_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/causal_judgement.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: causal_judgment_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_causal_judgement_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/causal_judgment.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: causal_judgment_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_causal_judgment_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/cause_and_effect.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: cause_and_effect_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_cause_and_effect_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/checkmate_in_one.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: checkmate_in_one_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_checkmate_in_one_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/chess_state_tracking.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: chess_state_tracking_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_chess_state_tracking_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/chinese_remainder_theorem.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: chinese_remainder_theorem_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_chinese_remainder_theorem_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/cifar10_classification.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: cifar10_classification_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_cifar10_classification_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/code_line_description.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: code_line_description_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_code_line_description_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/codenames.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: codenames_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_codenames_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/color.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: color_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_color_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/common_morpheme.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: common_morpheme_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_common_morpheme_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/conceptual_combinations.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: conceptual_combinations_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_conceptual_combinations_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/conlang_translation.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: conlang_translation_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_conlang_translation_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/contextual_parametric_knowledge_conflicts.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: contextual_parametric_knowledge_conflicts_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_contextual_parametric_knowledge_conflicts_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/crash_blossom.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: crash_blossom_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_crash_blossom_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/crass_ai.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: crass_ai_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_crass_ai_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/cryobiology_spanish.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: cryobiology_spanish_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_cryobiology_spanish_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/cryptonite.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: cryptonite_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_cryptonite_multiple_choice
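Every file above points at the shared ../multiple_choice_template_yaml through its include key: the template supplies common defaults and the per-task file's own keys take precedence. A rough sketch of that merge semantics, using plain dicts instead of real YAML parsing — the template keys shown are assumptions for illustration, not contents of this commit:

```python
# Sketch of `include` resolution for these task configs: load the template's
# defaults first, then overlay the task file's own keys. Template contents
# below are hypothetical stand-ins, not taken from the committed diff.
def resolve_config(task_cfg: dict, templates: dict) -> dict:
    cfg = dict(task_cfg)                       # don't mutate the caller's dict
    base = templates.get(cfg.pop("include", None), {})
    merged = dict(base)                        # start from template defaults
    merged.update(cfg)                         # per-task keys win on conflict
    return merged


templates = {
    "../multiple_choice_template_yaml": {
        "output_type": "multiple_choice",      # hypothetical template key
        "dataset_path": "hails/bigbench",      # hypothetical template key
    }
}
task = {  # mirrors codenames.yaml above
    "dataset_name": "codenames_zero_shot",
    "include": "../multiple_choice_template_yaml",
    "task": "bigbench_codenames_multiple_choice",
}
print(resolve_config(task, templates))
```

Because the task file is overlaid last, a task can still override any template default (e.g. a different dataset_name) without touching the shared template.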