OpenDAS / Pai-Megatron-Patch · Commits

Commit 5add46aa, authored Jan 09, 2025 by hepj
Commit message: 添加Megatron项目 ("Add the Megatron project")
Parent: deb8370c
Pipeline #2199 failed in 0 seconds
Changes: 1000 · Pipelines: 1
Showing 20 changed files with 80 additions and 0 deletions (+80, −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/mult_data_wrangling.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/multiemo.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/natural_instructions.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/navigate.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/nonsense_words_grammar.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/novel_concepts.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/object_counting.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/odd_one_out.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/operators.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/paragraph_segmentation.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/parsinlu_qa.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/parsinlu_reading_comprehension.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/penguins_in_a_table.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/periodic_elements.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/persian_idioms.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/phrase_relatedness.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/physical_intuition.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/physics.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/physics_questions.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/play_dialog_same_or_different.yaml  +4 −0
Too many changes to show; to preserve performance, only 1000 of 1000+ files are displayed.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/mult_data_wrangling.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: mult_data_wrangling_zero_shot
include: ../generate_until_template_yaml
task: bigbench_mult_data_wrangling_generate_until
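All twenty files in this commit follow the same four-line pattern, and each is stamped "# Generated by utils.py". The harness's actual generator script is not part of this diff; the following is a minimal sketch of the generation pattern, assuming only what is visible in the generated files (`render_task_yaml` and `write_task_yamls` are hypothetical names):

```python
# Sketch of the generator pattern behind "# Generated by utils.py".
# The real utils.py in LM-Evaluation-Harness is not shown in this diff;
# the template below mirrors the layout of the generated files.
from pathlib import Path

TEMPLATE = """# Generated by utils.py
dataset_name: {name}_zero_shot
include: ../generate_until_template_yaml
task: bigbench_{name}_generate_until
"""

def render_task_yaml(name: str) -> str:
    """Render one per-task YAML file from the shared template."""
    return TEMPLATE.format(name=name)

def write_task_yamls(names, out_dir: Path) -> None:
    """Write one <name>.yaml per task into out_dir."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for name in names:
        (out_dir / f"{name}.yaml").write_text(render_task_yaml(name))

if __name__ == "__main__":
    write_task_yamls(["navigate", "object_counting"], Path("generate_until"))
```

Regenerating all files from one template keeps the per-task configs trivially consistent, which is why every file in this diff differs only in the task name.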
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/multiemo.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: multiemo_zero_shot
include: ../generate_until_template_yaml
task: bigbench_multiemo_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/natural_instructions.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: natural_instructions_zero_shot
include: ../generate_until_template_yaml
task: bigbench_natural_instructions_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/navigate.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: navigate_zero_shot
include: ../generate_until_template_yaml
task: bigbench_navigate_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/nonsense_words_grammar.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: nonsense_words_grammar_zero_shot
include: ../generate_until_template_yaml
task: bigbench_nonsense_words_grammar_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/novel_concepts.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: novel_concepts_zero_shot
include: ../generate_until_template_yaml
task: bigbench_novel_concepts_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/object_counting.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: object_counting_zero_shot
include: ../generate_until_template_yaml
task: bigbench_object_counting_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/odd_one_out.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: odd_one_out_zero_shot
include: ../generate_until_template_yaml
task: bigbench_odd_one_out_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/operators.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: operators_zero_shot
include: ../generate_until_template_yaml
task: bigbench_operators_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/paragraph_segmentation.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: paragraph_segmentation_zero_shot
include: ../generate_until_template_yaml
task: bigbench_paragraph_segmentation_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/parsinlu_qa.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: parsinlu_qa_zero_shot
include: ../generate_until_template_yaml
task: bigbench_parsinlu_qa_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/parsinlu_reading_comprehension.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: parsinlu_reading_comprehension_zero_shot
include: ../generate_until_template_yaml
task: bigbench_parsinlu_reading_comprehension_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/penguins_in_a_table.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: penguins_in_a_table_zero_shot
include: ../generate_until_template_yaml
task: bigbench_penguins_in_a_table_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/periodic_elements.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: periodic_elements_zero_shot
include: ../generate_until_template_yaml
task: bigbench_periodic_elements_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/persian_idioms.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: persian_idioms_zero_shot
include: ../generate_until_template_yaml
task: bigbench_persian_idioms_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/phrase_relatedness.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: phrase_relatedness_zero_shot
include: ../generate_until_template_yaml
task: bigbench_phrase_relatedness_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/physical_intuition.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: physical_intuition_zero_shot
include: ../generate_until_template_yaml
task: bigbench_physical_intuition_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/physics.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: physics_zero_shot
include: ../generate_until_template_yaml
task: bigbench_physics_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/physics_questions.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: physics_questions_zero_shot
include: ../generate_until_template_yaml
task: bigbench_physics_questions_generate_until

LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/play_dialog_same_or_different.yaml (new file, mode 100644)
# Generated by utils.py
dataset_name: play_dialog_same_or_different_zero_shot
include: ../generate_until_template_yaml
task: bigbench_play_dialog_same_or_different_generate_until
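Each task file pulls its shared settings from ../generate_until_template_yaml via the include key, with the task file's own keys taking precedence over the template's. The harness's real config loader is not shown in this diff; below is a minimal stdlib-only sketch of that merge semantics, using a hand-rolled parser that only handles flat "key: value" files (`load_flat_yaml` and `load_task` are illustrative names, not harness APIs):

```python
# Sketch of include-based config resolution: load the included base
# config first, then override it with the task file's own keys.
# Illustration only; the harness uses a real YAML loader.
from pathlib import Path

def load_flat_yaml(path: Path) -> dict:
    """Parse a flat 'key: value' file, skipping blanks and # comments."""
    cfg = {}
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        cfg[key.strip()] = value.strip()
    return cfg

def load_task(path: Path) -> dict:
    """Resolve a task config, recursively merging over its include target."""
    cfg = load_flat_yaml(path)
    include = cfg.pop("include", None)
    if include:
        base = load_task((path.parent / include).resolve())
        base.update(cfg)  # task-file keys win over template keys
        return base
    return cfg
```

Under this scheme the template can define everything common to the generate_until tasks once, while each four-line file above contributes only its dataset_name and task identifier.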