OpenDAS / Pai-Megatron-Patch · Commits

Commit 5add46aa, authored Jan 09, 2025 by hepj

Add Megatron project (添加Megatron项目)

Parent: deb8370c
Pipeline #2199 failed with stages in 0 seconds
Showing 20 changed files with 80 additions and 0 deletions (+80 −0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/polish_sequence_labeling.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/presuppositions_as_nli.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/qa_wikidata.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/question_selection.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/real_or_fake_text.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/reasoning_about_colored_objects.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/repeat_copy_logic.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/rephrase.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/riddle_sense.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/ruin_names.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/salient_translation_error_detection.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/scientific_press_release.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/semantic_parsing_in_context_sparc.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/semantic_parsing_spider.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/sentence_ambiguity.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/similarities_abstraction.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/simp_turing_concept.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/simple_arithmetic_json.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/simple_arithmetic_json_multiple_choice.yaml  +4 −0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/simple_arithmetic_json_subtasks.yaml  +4 −0
Too many changes to show: to preserve performance, only 1000 of 1000+ files are displayed.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/polish_sequence_labeling.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: polish_sequence_labeling_zero_shot
include: ../generate_until_template_yaml
task: bigbench_polish_sequence_labeling_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/presuppositions_as_nli.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: presuppositions_as_nli_zero_shot
include: ../generate_until_template_yaml
task: bigbench_presuppositions_as_nli_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/qa_wikidata.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: qa_wikidata_zero_shot
include: ../generate_until_template_yaml
task: bigbench_qa_wikidata_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/question_selection.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: question_selection_zero_shot
include: ../generate_until_template_yaml
task: bigbench_question_selection_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/real_or_fake_text.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: real_or_fake_text_zero_shot
include: ../generate_until_template_yaml
task: bigbench_real_or_fake_text_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/reasoning_about_colored_objects.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: reasoning_about_colored_objects_zero_shot
include: ../generate_until_template_yaml
task: bigbench_reasoning_about_colored_objects_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/repeat_copy_logic.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: repeat_copy_logic_zero_shot
include: ../generate_until_template_yaml
task: bigbench_repeat_copy_logic_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/rephrase.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: rephrase_zero_shot
include: ../generate_until_template_yaml
task: bigbench_rephrase_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/riddle_sense.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: riddle_sense_zero_shot
include: ../generate_until_template_yaml
task: bigbench_riddle_sense_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/ruin_names.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: ruin_names_zero_shot
include: ../generate_until_template_yaml
task: bigbench_ruin_names_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/salient_translation_error_detection.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: salient_translation_error_detection_zero_shot
include: ../generate_until_template_yaml
task: bigbench_salient_translation_error_detection_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/scientific_press_release.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: scientific_press_release_zero_shot
include: ../generate_until_template_yaml
task: bigbench_scientific_press_release_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/semantic_parsing_in_context_sparc.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: semantic_parsing_in_context_sparc_zero_shot
include: ../generate_until_template_yaml
task: bigbench_semantic_parsing_in_context_sparc_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/semantic_parsing_spider.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: semantic_parsing_spider_zero_shot
include: ../generate_until_template_yaml
task: bigbench_semantic_parsing_spider_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/sentence_ambiguity.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: sentence_ambiguity_zero_shot
include: ../generate_until_template_yaml
task: bigbench_sentence_ambiguity_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/similarities_abstraction.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: similarities_abstraction_zero_shot
include: ../generate_until_template_yaml
task: bigbench_similarities_abstraction_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/simp_turing_concept.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: simp_turing_concept_zero_shot
include: ../generate_until_template_yaml
task: bigbench_simp_turing_concept_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/simple_arithmetic_json.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: simple_arithmetic_json_zero_shot
include: ../generate_until_template_yaml
task: bigbench_simple_arithmetic_json_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/simple_arithmetic_json_multiple_choice.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: simple_arithmetic_json_multiple_choice_zero_shot
include: ../generate_until_template_yaml
task: bigbench_simple_arithmetic_json_multiple_choice_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/simple_arithmetic_json_subtasks.yaml (new file, mode 0 → 100644)

# Generated by utils.py
dataset_name: simple_arithmetic_json_subtasks_zero_shot
include: ../generate_until_template_yaml
task: bigbench_simple_arithmetic_json_subtasks_generate_until
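Every file in this diff is a four-line instance of the same template, stamped per bigbench subtask by the `utils.py` the header comments credit. As a rough illustration of that pattern only — the function name, subtask list, and file layout below are assumptions, not the repository's actual script — such a generator could look like:

```python
from pathlib import Path

# Template matching the generated files in this commit; {subtask} is the
# only varying piece across all 20 YAMLs.
TEMPLATE = """# Generated by utils.py
dataset_name: {subtask}_zero_shot
include: ../generate_until_template_yaml
task: bigbench_{subtask}_generate_until
"""


def write_task_yamls(subtasks, out_dir="generate_until"):
    """Write one <subtask>.yaml per entry into out_dir (hypothetical helper)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for subtask in subtasks:
        (out / f"{subtask}.yaml").write_text(TEMPLATE.format(subtask=subtask))


# Two subtasks from this diff, as a demonstration.
write_task_yamls(["qa_wikidata", "riddle_sense"])
```

Keeping the per-task YAMLs this thin and pushing everything shared into `../generate_until_template_yaml` means a change to the generation settings touches one template file rather than 20 generated ones.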