ModelZoo / DeepSeekV2_pytorch

Commit 74df9bea authored Sep 02, 2024 by zhaoying1

added deepseekv2

Pipeline #1652 failed with stages in 0 seconds
Showing 20 changed files with 80 additions and 0 deletions.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logic_grid_puzzle.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_args.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_deduction.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_fallacy_detection.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_sequence.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/mathematical_induction.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/matrixshapes.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/metaphor_boolean.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/metaphor_understanding.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/minute_mysteries_qa.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/misconceptions.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/misconceptions_russian.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/mnist_ascii.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/modified_arithmetic.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/moral_permissibility.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/movie_dialog_same_or_different.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/movie_recommendation.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/mult_data_wrangling.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/multiemo.yaml  +4 -0
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/natural_instructions.yaml  +4 -0
Too many changes to show: to preserve performance, only 1000 of 1000+ files are displayed.
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logic_grid_puzzle.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: logic_grid_puzzle_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_logic_grid_puzzle_multiple_choice
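Every file in this commit follows the same three-line pattern: it overrides dataset_name and task and defers the rest of the configuration to the shared ../multiple_choice_template_yaml; the remaining nineteen files below are identical apart from the subtask name. As a minimal usage sketch (not part of this commit, and assuming the 0.4-style lm-evaluation-harness Python API bundled in LM-Evaluation-Harness-240310), a task registered this way can be selected by the name given under task; the model and batch size below are placeholder assumptions:

import lm_eval

# Hedged sketch: lm_eval.simple_evaluate is the harness's programmatic entry point
# in 0.4-era releases; the pretrained model and batch size are placeholder choices,
# not values taken from this commit.
results = lm_eval.simple_evaluate(
    model="hf",                                            # HuggingFace transformers backend
    model_args="pretrained=EleutherAI/pythia-160m",        # placeholder model
    tasks=["bigbench_logic_grid_puzzle_multiple_choice"],  # task name from the YAML above
    num_fewshot=0,                                         # the configs point at the *_zero_shot subsets
    batch_size=8,
)
print(results["results"])

The same call works for any of the other tasks added here by swapping the entry in tasks.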
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_args.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: logical_args_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_logical_args_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_deduction.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: logical_deduction_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_logical_deduction_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_fallacy_detection.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: logical_fallacy_detection_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_logical_fallacy_detection_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/logical_sequence.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: logical_sequence_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_logical_sequence_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/mathematical_induction.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: mathematical_induction_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_mathematical_induction_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/matrixshapes.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: matrixshapes_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_matrixshapes_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/metaphor_boolean.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: metaphor_boolean_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_metaphor_boolean_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/metaphor_understanding.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: metaphor_understanding_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_metaphor_understanding_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/minute_mysteries_qa.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: minute_mysteries_qa_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_minute_mysteries_qa_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/misconceptions.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: misconceptions_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_misconceptions_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/misconceptions_russian.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: misconceptions_russian_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_misconceptions_russian_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/mnist_ascii.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: mnist_ascii_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_mnist_ascii_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/modified_arithmetic.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: modified_arithmetic_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_modified_arithmetic_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/moral_permissibility.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: moral_permissibility_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_moral_permissibility_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/movie_dialog_same_or_different.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: movie_dialog_same_or_different_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_movie_dialog_same_or_different_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/movie_recommendation.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: movie_recommendation_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_movie_recommendation_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/mult_data_wrangling.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: mult_data_wrangling_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_mult_data_wrangling_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/multiemo.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: multiemo_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_multiemo_multiple_choice
LM-Evaluation-Harness-240310/lm_eval/tasks/bigbench/multiple_choice/natural_instructions.yaml (new file, mode 100644)

# Generated by utils.py
dataset_name: natural_instructions_zero_shot
include: ../multiple_choice_template_yaml
task: bigbench_natural_instructions_multiple_choice
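All twenty files carry the header "# Generated by utils.py". The generator itself is not part of this diff, so the following is only a sketch of the kind of script that would reproduce the visible pattern; the subtask list, output directory, and function name are assumptions, not code from this commit:

from pathlib import Path

# Hypothetical generator sketch (the real utils.py is not shown in this commit).
SUBTASKS = [
    "logic_grid_puzzle",
    "logical_args",
    "natural_instructions",
    # ... one entry per BIG-bench multiple-choice subtask
]

OUT_DIR = Path("lm_eval/tasks/bigbench/multiple_choice")  # assumed output location

def write_task_yaml(subtask: str) -> None:
    # Emit a per-subtask config that defers everything else to the shared template.
    body = (
        "# Generated by utils.py\n"
        f"dataset_name: {subtask}_zero_shot\n"
        "include: ../multiple_choice_template_yaml\n"
        f"task: bigbench_{subtask}_multiple_choice\n"
    )
    (OUT_DIR / f"{subtask}.yaml").write_text(body)

if __name__ == "__main__":
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    for name in SUBTASKS:
        write_task_yaml(name)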