OpenDAS / Pai-Megatron-Patch — Commits

Commit 5add46aa, authored Jan 09, 2025 by hepj

    Add Megatron project

Parent: deb8370c
Pipeline #2199 failed with stages in 0 seconds
Changes: 1000 · Pipelines: 1

Showing 20 changed files with 80 additions and 0 deletions (+80 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/code_line_description.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/codenames.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/color.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/common_morpheme.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/conceptual_combinations.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/conlang_translation.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/contextual_parametric_knowledge_conflicts.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/crash_blossom.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/crass_ai.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cryobiology_spanish.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cryptonite.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cs_algorithms.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/dark_humor_detection.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/date_understanding.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/disambiguation_qa.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/discourse_marker_prediction.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/disfl_qa.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/dyck_languages.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/elementary_math_qa.yaml (+4 −0)
- LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/emoji_movie.yaml (+4 −0)
Too many changes to show: to preserve performance, only 1000 of 1000+ files are displayed.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/code_line_description.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: code_line_description_zero_shot
include: ../generate_until_template_yaml
task: bigbench_code_line_description_generate_until
```
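Every file in this commit follows the same 4-line pattern and carries a "# Generated by utils.py" header. The actual utils.py is not part of this diff, but a minimal sketch of such a generator might look like this (the function name and template are assumptions for illustration, not the real script):

```python
from pathlib import Path

# Template matching the 4-line configs added in this commit:
# a comment header, a dataset name, a shared include, and a task id.
TEMPLATE = """# Generated by utils.py
dataset_name: {name}_zero_shot
include: ../generate_until_template_yaml
task: bigbench_{name}_generate_until
"""


def write_task_yaml(name: str, out_dir: Path) -> Path:
    """Write one per-task YAML config named after the BIG-bench subtask."""
    path = out_dir / f"{name}.yaml"
    path.write_text(TEMPLATE.format(name=name))
    return path
```

Calling, say, `write_task_yaml("color", out_dir)` would reproduce the contents of `color.yaml` below; the shared behavior lives in the included `../generate_until_template_yaml`, so each per-task file only needs to bind the dataset and task names.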
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/codenames.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: codenames_zero_shot
include: ../generate_until_template_yaml
task: bigbench_codenames_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/color.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: color_zero_shot
include: ../generate_until_template_yaml
task: bigbench_color_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/common_morpheme.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: common_morpheme_zero_shot
include: ../generate_until_template_yaml
task: bigbench_common_morpheme_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/conceptual_combinations.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: conceptual_combinations_zero_shot
include: ../generate_until_template_yaml
task: bigbench_conceptual_combinations_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/conlang_translation.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: conlang_translation_zero_shot
include: ../generate_until_template_yaml
task: bigbench_conlang_translation_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/contextual_parametric_knowledge_conflicts.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: contextual_parametric_knowledge_conflicts_zero_shot
include: ../generate_until_template_yaml
task: bigbench_contextual_parametric_knowledge_conflicts_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/crash_blossom.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: crash_blossom_zero_shot
include: ../generate_until_template_yaml
task: bigbench_crash_blossom_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/crass_ai.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: crass_ai_zero_shot
include: ../generate_until_template_yaml
task: bigbench_crass_ai_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cryobiology_spanish.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: cryobiology_spanish_zero_shot
include: ../generate_until_template_yaml
task: bigbench_cryobiology_spanish_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cryptonite.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: cryptonite_zero_shot
include: ../generate_until_template_yaml
task: bigbench_cryptonite_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cs_algorithms.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: cs_algorithms_zero_shot
include: ../generate_until_template_yaml
task: bigbench_cs_algorithms_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/dark_humor_detection.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: dark_humor_detection_zero_shot
include: ../generate_until_template_yaml
task: bigbench_dark_humor_detection_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/date_understanding.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: date_understanding_zero_shot
include: ../generate_until_template_yaml
task: bigbench_date_understanding_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/disambiguation_qa.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: disambiguation_qa_zero_shot
include: ../generate_until_template_yaml
task: bigbench_disambiguation_qa_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/discourse_marker_prediction.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: discourse_marker_prediction_zero_shot
include: ../generate_until_template_yaml
task: bigbench_discourse_marker_prediction_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/disfl_qa.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: disfl_qa_zero_shot
include: ../generate_until_template_yaml
task: bigbench_disfl_qa_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/dyck_languages.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: dyck_languages_zero_shot
include: ../generate_until_template_yaml
task: bigbench_dyck_languages_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/elementary_math_qa.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: elementary_math_qa_zero_shot
include: ../generate_until_template_yaml
task: bigbench_elementary_math_qa_generate_until
```
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/emoji_movie.yaml (new file, mode 100644)

```yaml
# Generated by utils.py
dataset_name: emoji_movie_zero_shot
include: ../generate_until_template_yaml
task: bigbench_emoji_movie_generate_until
```