ModelZoo / DeepSeekV2_pytorch, commit 74df9bea

Authored Sep 02, 2024 by zhaoying1: "added deepseekv2"
Pipeline #1652 failed in 0 seconds.
Showing 20 changed files with 80 additions and 0 deletions.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/bbq_lite_json.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/bridging_anaphora_resolution_barqa.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/causal_judgment.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cause_and_effect.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/checkmate_in_one.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/chess_state_tracking.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/chinese_remainder_theorem.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cifar10_classification.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/code_line_description.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/codenames.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/color.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/common_morpheme.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/conceptual_combinations.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/conlang_translation.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/contextual_parametric_knowledge_conflicts.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/crash_blossom.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/crass_ai.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cryobiology_spanish.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cryptonite.yaml (+4, -0)
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cs_algorithms.yaml (+4, -0)
Too many changes to show: to preserve performance, only 1000 of 1000+ files are displayed.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/bbq_lite_json.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: bbq_lite_json_zero_shot
include: ../generate_until_template_yaml
task: bigbench_bbq_lite_json_generate_until
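
Every config added in this commit follows the same three-key pattern: shared defaults are pulled in from ../generate_until_template_yaml via the include key, and only dataset_name and task are set per subtask. The "# Generated by utils.py" header indicates the files are emitted by a script rather than written by hand; below is a minimal sketch of what such a generator could look like. The function name and the subtask list are illustrative assumptions, not the actual utils.py shipped with the harness.

from pathlib import Path

# Illustrative subset; the real generator enumerates every BIG-bench subtask.
SUBTASKS = [
    "bbq_lite_json",
    "causal_judgment",
    "cs_algorithms",
]

TEMPLATE = """# Generated by utils.py
dataset_name: {name}_zero_shot
include: ../generate_until_template_yaml
task: bigbench_{name}_generate_until
"""

def write_task_configs(out_dir: str = ".") -> None:
    """Write one generate_until YAML config per subtask name."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for name in SUBTASKS:
        (out / f"{name}.yaml").write_text(TEMPLATE.format(name=name))

if __name__ == "__main__":
    write_task_configs()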
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/bridging_anaphora_resolution_barqa.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: bridging_anaphora_resolution_barqa_zero_shot
include: ../generate_until_template_yaml
task: bigbench_bridging_anaphora_resolution_barqa_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/causal_judgment.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: causal_judgment_zero_shot
include: ../generate_until_template_yaml
task: bigbench_causal_judgment_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cause_and_effect.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: cause_and_effect_zero_shot
include: ../generate_until_template_yaml
task: bigbench_cause_and_effect_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/checkmate_in_one.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: checkmate_in_one_zero_shot
include: ../generate_until_template_yaml
task: bigbench_checkmate_in_one_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/chess_state_tracking.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: chess_state_tracking_zero_shot
include: ../generate_until_template_yaml
task: bigbench_chess_state_tracking_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/chinese_remainder_theorem.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: chinese_remainder_theorem_zero_shot
include: ../generate_until_template_yaml
task: bigbench_chinese_remainder_theorem_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cifar10_classification.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: cifar10_classification_zero_shot
include: ../generate_until_template_yaml
task: bigbench_cifar10_classification_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/code_line_description.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: code_line_description_zero_shot
include: ../generate_until_template_yaml
task: bigbench_code_line_description_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/codenames.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: codenames_zero_shot
include: ../generate_until_template_yaml
task: bigbench_codenames_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/color.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: color_zero_shot
include: ../generate_until_template_yaml
task: bigbench_color_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/common_morpheme.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: common_morpheme_zero_shot
include: ../generate_until_template_yaml
task: bigbench_common_morpheme_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/conceptual_combinations.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: conceptual_combinations_zero_shot
include: ../generate_until_template_yaml
task: bigbench_conceptual_combinations_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/conlang_translation.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: conlang_translation_zero_shot
include: ../generate_until_template_yaml
task: bigbench_conlang_translation_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/contextual_parametric_knowledge_conflicts.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: contextual_parametric_knowledge_conflicts_zero_shot
include: ../generate_until_template_yaml
task: bigbench_contextual_parametric_knowledge_conflicts_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/crash_blossom.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: crash_blossom_zero_shot
include: ../generate_until_template_yaml
task: bigbench_crash_blossom_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/crass_ai.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: crass_ai_zero_shot
include: ../generate_until_template_yaml
task: bigbench_crass_ai_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cryobiology_spanish.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: cryobiology_spanish_zero_shot
include: ../generate_until_template_yaml
task: bigbench_cryobiology_spanish_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cryptonite.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: cryptonite_zero_shot
include: ../generate_until_template_yaml
task: bigbench_cryptonite_generate_until
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/generate_until/cs_algorithms.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: cs_algorithms_zero_shot
include: ../generate_until_template_yaml
task: bigbench_cs_algorithms_generate_until
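
The task key in each config registers the name under which the harness can invoke the task. Assuming the vendored LM-Evaluation-Harness-240310 snapshot exposes the v0.4-style lm_eval.simple_evaluate entry point, running one of the newly added tasks might look like the sketch below; the model checkpoint is a placeholder, not something this commit prescribes.

import lm_eval

# Evaluate a placeholder HF checkpoint on one of the newly added tasks.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",  # placeholder model
    tasks=["bigbench_cs_algorithms_generate_until"],
    batch_size=8,
)
print(results["results"])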