OpenDAS / Pai-Megatron-Patch · Commit 5add46aa
Authored Jan 09, 2025 by hepj
Add Megatron project
Parent: deb8370c
Pipeline #2199 failed in 0 seconds
Showing 20 changed files with 80 additions and 0 deletions (+80 −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/international_phonetic_alphabet_transliterate.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/intersect_geometry.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/irony_identification.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/kanji_ascii.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/kannada.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/key_value_maps.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/known_unknowns.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/language_games.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/language_identification.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/linguistic_mappings.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/linguistics_puzzles.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/list_functions.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logic_grid_puzzle.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_args.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_deduction.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_fallacy_detection.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_sequence.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/mathematical_induction.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/matrixshapes.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/metaphor_boolean.yaml  +4 −0
Too many changes to show. To preserve performance, only 1000 of 1000+ files are displayed.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/international_phonetic_alphabet_transliterate.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: international_phonetic_alphabet_transliterate_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_international_phonetic_alphabet_transliterate_multiple_choice
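Each file added in this commit follows the same three-key pattern and carries the header comment "# Generated by utils.py". As a hedged illustration only (the actual utils.py shipped with LM-Evaluation-Harness may work differently), a generator of this kind could look roughly like the Python sketch below; the subtask list and output directory are assumptions taken from this diff, not from the harness itself.

# Hypothetical sketch of a generator like the "utils.py" referenced above.
# Assumed: subtask names and output directory match the files in this commit.
from pathlib import Path

SUBTASKS = [
    "international_phonetic_alphabet_transliterate",
    "intersect_geometry",
    "irony_identification",
    "kanji_ascii",
    # ... remaining BIG-bench subtasks from this commit
]

OUTPUT_DIR = Path("lm_eval/tasks/bigbench/multiple_choice")

TEMPLATE = """\
# Generated by utils.py
dataset_name: {subtask}_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_{subtask}_multiple_choice
"""

def main() -> None:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    for subtask in SUBTASKS:
        # One small YAML per subtask: the zero-shot dataset split, the shared
        # template, and the registered task name.
        out = OUTPUT_DIR / f"{subtask}.yaml"
        out.write_text(TEMPLATE.format(subtask=subtask))

if __name__ == "__main__":
    main()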
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/intersect_geometry.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: intersect_geometry_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_intersect_geometry_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/irony_identification.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: irony_identification_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_irony_identification_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/kanji_ascii.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: kanji_ascii_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_kanji_ascii_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/kannada.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: kannada_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_kannada_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/key_value_maps.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: key_value_maps_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_key_value_maps_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/known_unknowns.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: known_unknowns_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_known_unknowns_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/language_games.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: language_games_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_language_games_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/language_identification.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: language_identification_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_language_identification_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/linguistic_mappings.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: linguistic_mappings_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_linguistic_mappings_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/linguistics_puzzles.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: linguistics_puzzles_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_linguistics_puzzles_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/list_functions.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: list_functions_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_list_functions_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logic_grid_puzzle.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: logic_grid_puzzle_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_logic_grid_puzzle_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_args.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: logical_args_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_logical_args_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_deduction.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: logical_deduction_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_logical_deduction_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_fallacy_detection.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: logical_fallacy_detection_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_logical_fallacy_detection_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_sequence.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: logical_sequence_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_logical_sequence_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/mathematical_induction.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: mathematical_induction_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_mathematical_induction_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/matrixshapes.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: matrixshapes_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_matrixshapes_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/metaphor_boolean.yaml (new file, 0 → 100644)
# Generated by utils.py
dataset_name: metaphor_boolean_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_metaphor_boolean_multiple_choice
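Since all 20 configs are mechanical variations of one another, a quick consistency check is easy to sketch. The Python snippet below is illustrative only: it assumes PyYAML is installed and that the files sit at the path shown in this diff; it is not part of the harness or of this commit.

# Minimal consistency check over the YAML files added in this commit
# (assumed path and PyYAML availability; not part of the harness).
from pathlib import Path
import yaml

TASK_DIR = Path("LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice")

for path in sorted(TASK_DIR.glob("*.yaml")):
    cfg = yaml.safe_load(path.read_text())
    subtask = path.stem
    # Each config should point at the zero-shot split of its subtask,
    # reuse the shared template, and register a bigbench_*_multiple_choice task.
    assert cfg["dataset_name"] == f"{subtask}_zero_shot", path
    assert cfg["include"] == "../multiple_choice_template_yaml", path
    assert cfg["task"] == f"bigbench_{subtask}_multiple_choice", path

print("all multiple_choice task configs follow the naming convention")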